1 - Bare metal configuration

Full EKS Anywhere configuration reference for a Bare Metal cluster.

This is a generic template with detailed descriptions below for reference. Additional optional configuration can also be included.

To generate your own cluster configuration, follow instructions from the Bare Metal Create production cluster section and modify it using the descriptions below. For information on how to add cluster configuration settings to this file for advanced node configuration, see Advanced Bare Metal cluster configuration.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:              
    count: 1
    endpoint:
      host: "<Control Plane Endpoint IP>"
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name-cp
  datacenterRef:
    kind: TinkerbellDatacenterConfig
    name: my-cluster-name
  kubernetesVersion: "1.25"
  managementCluster:
    name: my-cluster-name
  workerNodeGroupConfigurations:
  - count: 1
    machineGroupRef:
      kind: TinkerbellMachineConfig
      name: my-cluster-name
    name: md-0

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellDatacenterConfig
metadata:
  name: my-cluster-name
spec:
  tinkerbellIP: "<Tinkerbell IP>"

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  hardwareSelector: {}
  osFamily: bottlerocket
  templateRef: {}
  users:
  - name: ec2-user
    sshAuthorizedKeys:
    - ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name
spec:
  hardwareSelector: {}
  osFamily: bottlerocket
  templateRef:
    kind: TinkerbellTemplateConfig
    name: my-cluster-name
  users:
  - name: ec2-user
    sshAuthorizedKeys:
    - ssh-rsa AAAAB3NzaC1yc2... jwjones@833efcab1482.home.example.com

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Specific network configuration for your Kubernetes cluster.

clusterNetwork.cniConfig (required)

CNI plugin to be installed in the cluster. The only supported value at the moment is cilium.

clusterNetwork.pods.cidrBlocks[0] (required)

Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted. This CIDR block should not conflict with the clusterNetwork.services.cidrBlocks and network subnet range selected for the machines.

clusterNetwork.services.cidrBlocks[0] (required)

Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted. This CIDR block should not conflict with the clusterNetwork.pods.cidrBlocks and network subnet range selected for the machines.

clusterNetwork.dns.resolvConf.path (optional)

Path to the file with a custom DNS resolver configuration.
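
For example, a minimal sketch of this setting within the Cluster spec (the file path shown is a placeholder):

spec:
  clusterNetwork:
    dns:
      resolvConf:
        path: /etc/my-resolv.conf    # placeholder path to your custom resolver file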

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes. This number needs to be odd to maintain ETCD quorum.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other machines.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver load balancing.

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields below.

controlPlaneConfiguration.taints

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.
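
As a minimal sketch, taints and labels can be set under controlPlaneConfiguration as follows (the key/value pairs are placeholders):

controlPlaneConfiguration:
  count: 1
  endpoint:
    host: "<Control Plane Endpoint IP>"
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name-cp
  taints:
  - key: "key1"              # placeholder taint key
    value: "value1"
    effect: "NoSchedule"
  labels:
    "key1": "value1"         # placeholder label added to the EKS Anywhere defaults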

datacenterRef

Refers to the Kubernetes object with Tinkerbell-specific configuration. See TinkerbellDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.25, 1.24, 1.23, 1.22, 1.21

managementCluster

Identifies the name of the management cluster. If this is a standalone cluster or a cluster that manages other workload clusters, this will be the same as the cluster name. Bare Metal EKS Anywhere clusters do not yet support the creation of separate workload clusters.

workerNodeGroupConfigurations

This takes in a list of node groups that you can define for your workers.

You can omit workerNodeGroupConfigurations when creating Bare Metal clusters. In this case, control plane nodes will not be tainted and all pods will run on the control plane nodes. This mechanism can be used to deploy Bare Metal clusters on a single server.

NOTE: Empty workerNodeGroupConfigurations is not supported when Kubernetes version <= 1.21.

workerNodeGroupConfigurations.count

Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount.

workerNodeGroupConfigurations.machineGroupRef (required)

Refers to the Kubernetes object with Tinkerbell-specific configuration for your nodes. See TinkerbellMachineConfig Fields below.

workerNodeGroupConfigurations.name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations.autoscalingConfiguration.minCount

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations.autoscalingConfiguration.maxCount

Maximum number of nodes for this node group’s autoscaling configuration.
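
A minimal sketch of a worker node group with autoscaling enabled (the bounds shown are placeholders):

workerNodeGroupConfigurations:
- name: md-0
  machineGroupRef:
    kind: TinkerbellMachineConfig
    name: my-cluster-name
  autoscalingConfiguration:
    minCount: 1              # count defaults to minCount when count is omitted
    maxCount: 5              # placeholder upper bound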

workerNodeGroupConfigurations.taints

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations.labels

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

TinkerbellDatacenterConfig Fields

tinkerbellIP

Required field to identify the IP address of the Tinkerbell service. This IP address must be a unique IP in the network range that does not conflict with other IPs. Once the Tinkerbell services move from the Admin machine to run on the target cluster, this IP address makes it possible for the stack to be used for future provisioning needs. When separate management and workload clusters are supported in Bare Metal, the IP address becomes a necessity.

osImageURL

Optional field to replace the default Bottlerocket operating system. EKS Anywhere can only auto-import Bottlerocket. To use Ubuntu or Red Hat, see building baremetal node images to learn how to build and use those operating systems with an EKS Anywhere cluster. This field is also useful if you want to provide a customized operating system image or simply host the standard image locally.

hookImagesURLPath

Optional field to replace the HookOS image. This field is useful if you want to provide a customized HookOS image or simply host the standard image locally. See Artifacts for details.

Example TinkerbellDatacenterConfig.spec

spec:
  tinkerbellIP: "192.168.0.10"                                          # Available, routable IP
  osImageURL: "http://my-web-server/ubuntu-v1.23.7-eks-a-12-amd64.gz"   # Full URL to the OS Image hosted locally
  hookImagesURLPath: "http://my-web-server/hook"                        # Path to the hook images. This path must contain vmlinuz-x86_64 and initramfs-x86_64 

This is the folder structure for my-web-server:

my-web-server
├── hook
│   ├── initramfs-x86_64
│   └── vmlinuz-x86_64
└── ubuntu-v1.23.7-eks-a-12-amd64.gz

skipLoadBalancerDeployment

Optional field to skip deploying the default load balancer for Tinkerbell stack.

EKS Anywhere for Bare Metal uses kube-vip load balancer by default to expose the Tinkerbell stack externally. You can disable this feature by setting this field to true.

NOTE: If you skip load balancer deployment, you will have to ensure that the Tinkerbell stack is available at tinkerbellIP once the cluster creation is finished. One way to achieve this is by using the MetalLB package.
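
For example, a sketch of a TinkerbellDatacenterConfig spec with the default load balancer deployment skipped:

spec:
  tinkerbellIP: "<Tinkerbell IP>"
  skipLoadBalancerDeployment: true   # the Tinkerbell stack must then be exposed at tinkerbellIP by other means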

TinkerbellMachineConfig Fields

In the example, there are TinkerbellMachineConfig sections for control plane (my-cluster-name-cp) and worker (my-cluster-name) machine groups. The following fields identify information needed to configure the nodes in each of those groups.

NOTE: Currently, you can only have one machine group for all machines in the control plane, although you can have multiple machine groups for the workers.

hardwareSelector

Use fields under hardwareSelector to add key/value pair labels to match particular machines that you identified in the CSV file where you defined the machines in your cluster. Choose any label name you like. For example, if you had added the label node=cp-machine to the machines listed in your CSV file that you want to be control plane nodes, the following hardwareSelector field would cause those machines to be added to the control plane:

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  hardwareSelector:
    node: "cp-machine"

osFamily (required)

Operating system on the machine. For example, bottlerocket or ubuntu.

templateRef (optional)

Identifies the template that defines the actions that will be applied to the TinkerbellMachineConfig. See TinkerbellTemplateConfig fields below. EKS Anywhere will generate default templates based on osFamily during the create command. You can override this default template by providing your own template here.

users

The name of the user you want to configure to access your machines through SSH.

The default is ec2-user. Currently, only one user is supported.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your machines through SSH (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster machines so you can SSH into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<machine-IP>

If no value is specified, a key is generated in your $(pwd)/<cluster-name> folder by default.

Advanced Bare Metal cluster configuration

When you generate a Bare Metal cluster configuration, the TinkerbellTemplateConfig is kept internally and not shown in the generated configuration file. TinkerbellTemplateConfig settings define the actions done to install each node, such as get installation media, configure networking, add users, and otherwise configure the node.

Advanced users can override the default values set for TinkerbellTemplateConfig. They can also add their own Tinkerbell actions to make personalized modifications to EKS Anywhere nodes.

The following shows two TinkerbellTemplateConfig examples that you can add to your cluster configuration file to override the values that EKS Anywhere sets: one for Ubuntu and one for Bottlerocket. Most of the actions used differ between operating systems.

NOTE: For the stream-image action, DEST_DISK points to the device representing the entire hard disk (for example, /dev/sda). For UEFI-enabled images, such as Ubuntu, write actions use DEST_DISK to point to the second partition (for example, /dev/sda2), with the first being the EFI partition. For the Bottlerocket image, which has 12 partitions, DEST_DISK is partition 12 (for example, /dev/sda12). Device names will be different for different disk types.

Ubuntu TinkerbellTemplateConfig example

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
  name: my-cluster-name
spec:
  template:
    global_timeout: 6000
    id: ""
    name: my-cluster-name
    tasks:
    - actions:
      - environment:
          COMPRESSED: "true"
          DEST_DISK: /dev/sda
          IMG_URL: https://my-file-server/ubuntu-v1.23.7-eks-a-12-amd64.gz
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: stream-image
        timeout: 360
      - environment:
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/netplan/config.yaml
          STATIC_NETPLAN: true
          DIRMODE: "0755"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-netplan
        timeout: 90
      - environment:
          CONTENTS: |
            datasource:
              Ec2:
                metadata_urls: [<admin-machine-ip>, <tinkerbell-ip-from-cluster-config>]
                strict_id: false
            manage_etc_hosts: localhost
            warnings:
              dsid_missing_source: off            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/10_tinkerbell.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-config
        timeout: 90
      - environment:
          CONTENTS: |
            network:
              config: disabled            
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: disable-cloud-init-network-capabilities
        timeout: 90
      - environment:
          CONTENTS: | 
            datasource: Ec2
          DEST_DISK: /dev/sda2
          DEST_PATH: /etc/cloud/ds-identify.cfg
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0600"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: add-tink-cloud-init-ds-config
        timeout: 90
      - environment:
          BLOCK_DEVICE: /dev/sda2
          FS_TYPE: ext4
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/kexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: kexec-image
        pid: host
        timeout: 90
      name: my-cluster-name
      volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
      worker: '{{.device_1}}'
    version: "0.1"

Bottlerocket TinkerbellTemplateConfig example

Pay special attention to the BOOTCONFIG_CONTENTS environment section below if you wish to set up console redirection for the kernel and systemd. If you are only using a direct attached monitor as your primary display device, no additional configuration is needed here. However, if you need all boot output to be shown via a server’s serial console for example, extra configuration should be provided inside BOOTCONFIG_CONTENTS.

An empty kernel {} key is provided below in the example; inside this key is where you will specify your console devices. You may specify multiple comma-delimited console devices, in quotes, for the console key, as follows: console = "tty0", "ttyS0,115200n8". The order of the devices is significant; systemd will output to the last device specified. The console key belongs inside the kernel key like so:

kernel {
    console = "tty0", "ttyS0,115200n8"
}

The above example will send all kernel output to both consoles, and systemd output to ttyS0. Additional information about serial console setup can be found in the Linux kernel documentation.

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: TinkerbellTemplateConfig
metadata:
  name: my-cluster-name
spec:
  template:
    global_timeout: 6000
    id: ""
    name: my-cluster-name
    tasks:
    - actions:
      - environment:
          COMPRESSED: "true"
          DEST_DISK: /dev/sda
          IMG_URL: https://anywhere-assets.eks.amazonaws.com/releases/bundles/11/artifacts/raw/1-22/bottlerocket-v1.22.10-eks-d-1-22-8-eks-a-11-amd64.img.gz
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/image2disk:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: stream-image
        timeout: 360
      - environment:
          # An example console declaration that will send all kernel output to both consoles, and systemd output to ttyS0.
          # kernel {
          #     console = "tty0", "ttyS0,115200n8"
          # }
          BOOTCONFIG_CONTENTS: |
                        kernel {}
          DEST_DISK: /dev/sda12
          DEST_PATH: /bootconfig.data
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-bootconfig
        timeout: 90
      - environment:
          CONTENTS: |
            # Version is required, it will change as we support
            # additional settings
            version = 1
            # "eno1" is the interface name
            # Users may turn on dhcp4 and dhcp6 via boolean
            [eno1]
            dhcp4 = true
            # Define this interface as the "primary" interface
            # for the system.  This IP is what kubelet will use
            # as the node IP.  If none of the interfaces has
            # "primary" set, we choose the first interface in
            # the file
            primary = true            
          DEST_DISK: /dev/sda12
          DEST_PATH: /net.toml
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-netconfig
        timeout: 90
      - environment:
          HEGEL_URL: http://<hegel-ip>:50061
          DEST_DISK: /dev/sda12
          DEST_PATH: /user-data.toml
          DIRMODE: "0700"
          FS_TYPE: ext4
          GID: "0"
          MODE: "0644"
          UID: "0"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/writefile:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: write-user-data
        timeout: 90
      - name: "reboot"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/reboot:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        timeout: 90
        volumes:
          - /worker:/worker
      name: my-cluster-name
      volumes:
      - /dev:/dev
      - /dev/console:/dev/console
      - /lib/firmware:/lib/firmware:ro
      worker: '{{.device_1}}'
    version: "0.1"

TinkerbellTemplateConfig Fields

The values in the TinkerbellTemplateConfig fields are created from the contents of the CSV file used to generate a configuration. The template contains actions that are performed on a Bare Metal machine when it first boots up to be provisioned. For advanced users, you can add these fields to your cluster configuration file if you have special needs to do so.

While there are fields that apply to all provisioned operating systems, actions are specific to each operating system. Examples below describe actions for Ubuntu and Bottlerocket operating systems.

template.global_timeout

Sets the timeout value for completing the configuration. Set to 6000 (100 minutes) by default.

template.id

Not set by default.

template.tasks

Within the TinkerbellTemplateConfig template under tasks is a set of actions. The following descriptions cover the actions shown in the example templates for Ubuntu and Bottlerocket:

template.tasks.actions.name.stream-image (Ubuntu and Bottlerocket)

The stream-image action streams the selected image to the machine you are provisioning. It identifies:

  • environment.COMPRESSED: When set to true, Tinkerbell expects IMG_URL to be a compressed image, which Tinkerbell will uncompress when it writes the contents to disk.
  • environment.DEST_DISK: The hard disk on which the operating system is deployed. The default is the first SCSI disk (/dev/sda), but can be changed for other disk types.
  • environment.IMG_URL: The operating system tarball (ubuntu or other) to stream to the machine you are configuring.
  • image: Container image needed to perform the steps needed by this action.
  • timeout: Sets the amount of time (in seconds) that Tinkerbell has to stream the image, uncompress it, and write it to disk before timing out. Consider increasing this value from the default of 600 if this action is timing out.

Ubuntu-specific actions

template.tasks.actions.name.write-netplan (Ubuntu)

The write-netplan action writes Ubuntu network configuration information to the machine (see Netplan for details). It identifies:

  • environment.CONTENTS.network.version: Identifies the network version.
  • environment.CONTENTS.network.renderer: Defines the service to manage networking. By default, the networkd systemd service is used.
  • environment.CONTENTS.network.ethernets: Network interface to external network (eno1, by default) and whether or not to use dhcp4 (true, by default).
  • environment.DEST_DISK: Destination block storage device partition where the operating system is copied. By default, /dev/sda2 is used (sda1 is the EFI partition).
  • environment.DEST_PATH: File where the networking configuration is written (/etc/netplan/config.yaml, by default).
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0755, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0644, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.add-tink-cloud-init-config (Ubuntu)

The add-tink-cloud-init-config action configures cloud-init features to further configure the operating system. See cloud-init Documentation for details. It identifies:

  • environment.CONTENTS.datasource: Identifies Ec2 (Ec2.metadata_urls) as the data source and sets Ec2.strict_id: false to prevent cloud-init from producing warnings about this datasource.
  • environment.CONTENTS.system_info: Creates the tink user and gives it administrative group privileges (wheel, adm) and passwordless sudo privileges, and sets the default shell (/bin/bash).
  • environment.CONTENTS.manage_etc_hosts: Updates the system’s /etc/hosts file with the hostname. Set to localhost by default.
  • environment.CONTENTS.warnings: Sets dsid_missing_source to off.
  • environment.DEST_DISK: Destination block storage device partition where the operating system is located (/dev/sda2, by default).
  • environment.DEST_PATH: Location of the cloud-init configuration file on disk (/etc/cloud/cloud.cfg.d/10_tinkerbell.cfg, by default)
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0600, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.add-tink-cloud-init-ds-config (Ubuntu)

The add-tink-cloud-init-ds-config action configures cloud-init data store features. This identifies the location of your metadata source once the machine is up and running. It identifies:

  • environment.CONTENTS.datasource: Sets the datasource. Uses Ec2, by default.
  • environment.DEST_DISK: Destination block storage device partition where the operating system is located (/dev/sda2, by default).
  • environment.DEST_PATH: Location of the data store identity configuration file on disk (/etc/cloud/ds-identify.cfg, by default)
  • environment.DIRMODE: Linux directory permissions bits to use when creating directories (0700, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • environment.GID: The Linux group ID to set on file. Set to 0 (root group) by default.
  • environment.MODE: The Linux permission bits to set on file (0600, by default).
  • environment.UID: The Linux user ID to set on file. Set to 0 (root user) by default.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.kexec-image (Ubuntu)

The kexec-image action performs provisioning activities on the machine, then allows kexec to pivot the kernel to use the system installed on disk. This action identifies:

  • environment.BLOCK_DEVICE: Disk partition on which the operating system is installed (/dev/sda2, by default)
  • environment.FS_TYPE: Type of filesystem on the partition (ext4, by default).
  • image: Container image used to perform the steps needed by this action.
  • pid: Process ID. Set to host, by default.
  • timeout: Time needed to complete the action, in seconds.
  • volumes: Identifies mount points that need to be remounted to point to locations in the installed system.

There are known issues related to drivers with some hardware that may make it necessary to replace the kexec-image action with a full reboot. If you require a full reboot, you can change the kexec-image setting as follows:

actions:
- name: "reboot"
  image: public.ecr.aws/l0g8r8j6/tinkerbell/hub/reboot-action:latest
  timeout: 90
  volumes:
  - /worker:/worker

Bottlerocket-specific actions

template.tasks.actions.write-bootconfig (Bottlerocket)

The write-bootconfig action identifies the location on the machine to put content needed to boot the system from disk.

  • environment.BOOTCONFIG_CONTENTS.kernel: Add kernel parameters that are passed to the kernel when the system boots.
  • environment.DEST_DISK: Identifies the block storage device that holds the boot partition.
  • environment.DEST_PATH: Identifies the file holding boot configuration data (/bootconfig.data in this example).
  • environment.DIRMODE: The Linux permissions assigned to the boot directory.
  • environment.FS_TYPE: The filesystem type associated with the boot partition.
  • environment.GID: The group ID associated with files and directories created on the boot partition.
  • environment.MODE: The Linux permissions assigned to files in the boot partition.
  • environment.UID: The user ID associated with files and directories created on the boot partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.write-netconfig (Bottlerocket)

The write-netconfig action configures networking for the system.

  • environment.CONTENTS: Add network values, including: version = 1 (version number), [eno1] (external network interface), dhcp4 = true (turns on dhcp4), and primary = true (identifies this interface as the primary interface used by kubelet).
  • environment.DEST_DISK: Identifies the block storage device that holds the network configuration information.
  • environment.DEST_PATH: Identifies the file holding network configuration data (/net.toml in this example).
  • environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
  • environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
  • environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
  • environment.MODE: The Linux permissions assigned to files in the partition.
  • environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.

template.tasks.actions.write-user-data (Bottlerocket)

The write-user-data action configures the Tinkerbell Hegel service, which provides the metadata store for Tinkerbell.

  • environment.HEGEL_URL: The IP address and port number of the Tinkerbell Hegel service.
  • environment.DEST_DISK: Identifies the block storage device that holds the user data.
  • environment.DEST_PATH: Identifies the file holding the user data (/user-data.toml in this example).
  • environment.DIRMODE: The Linux permissions assigned to the directory holding network configuration settings.
  • environment.FS_TYPE: The filesystem type associated with the partition holding network configuration settings.
  • environment.GID: The group ID associated with files and directories created on the partition. GID 0 is the root group.
  • environment.MODE: The Linux permissions assigned to files in the partition.
  • environment.UID: The user ID associated with files and directories created on the partition. UID 0 is the root user.
  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.

template.tasks.actions.reboot (Bottlerocket)

The reboot action defines how the system restarts to bring up the installed system.

  • image: Container image used to perform the steps needed by this action.
  • timeout: Time needed to complete the action, in seconds.
  • volumes: The volume (directory) to mount into the container from the installed system.

version

Matches the current version of the Tinkerbell template.

Custom Tinkerbell action examples

By creating your own custom Tinkerbell actions, you can add to or modify the installed operating system so those changes take effect when the installed system first starts (from a reboot or pivot). The following example shows how to add a .deb package (openssl) to an Ubuntu installation:

      - environment:
          BLOCK_DEVICE: /dev/sda1
          CHROOT: "y"
          CMD_LINE: apt -y update && apt -y install openssl
          DEFAULT_INTERPRETER: /bin/sh -c
          FS_TYPE: ext4
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: install-openssl
        timeout: 90

The following shows an example of adding a new user (tinkerbell) to an installed Ubuntu system:

      - environment:
          BLOCK_DEVICE: <block device path> # E.g. /dev/sda1
          FS_TYPE: ext4
          CHROOT: y
          DEFAULT_INTERPRETER: "/bin/sh -c"
          CMD_LINE: "useradd --password $(openssl passwd -1 tinkerbell) --shell /bin/bash --create-home --groups sudo tinkerbell"
        image: public.ecr.aws/eks-anywhere/tinkerbell/hub/cexec:6c0f0d437bde2c836d90b000312c8b25fa1b65e1-eks-a-15
        name: "create-user"
        timeout: 90

Look for more examples as they are added to the Tinkerbell examples page.

2 - Nutanix configuration

Full EKS Anywhere configuration reference for a Nutanix cluster.

This is a generic template with detailed descriptions below for reference.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
 name: mgmt
 namespace: default
spec:
 bundlesRef:
   apiVersion: anywhere.eks.amazonaws.com/v1alpha1
   name: bundles-2
   namespace: eksa-system
 clusterNetwork:
   cniConfig:
     cilium: {}
   pods:
     cidrBlocks:
       - 192.168.0.0/16
   services:
     cidrBlocks:
       - 10.96.0.0/16
 controlPlaneConfiguration:
   count: 3
   endpoint:
     host: ""
   machineGroupRef:
     kind: NutanixMachineConfig
     name: mgmt-cp-machine
 datacenterRef:
   kind: NutanixDatacenterConfig
   name: nutanix-cluster
 kubernetesVersion: "1.25"
 workerNodeGroupConfigurations:
   - count: 1
     machineGroupRef:
       kind: NutanixMachineConfig
       name: mgmt-machine
     name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixDatacenterConfig
metadata:
 name: nutanix-cluster
 namespace: default
spec:
 endpoint: pc01.cloud.internal
 port: 9440
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 annotations:
   anywhere.eks.amazonaws.com/control-plane: "true"
 name: mgmt-cp-machine
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.25
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: NutanixMachineConfig
metadata:
 name: mgmt-machine
 namespace: default
spec:
 cluster:
   name: nx-cluster-01
   type: name
 image:
   name: eksa-ubuntu-2004-kube-v1.25
   type: name
 memorySize: 4Gi
 osFamily: ubuntu
 subnet:
   name: vm-network
   type: name
 systemDiskSize: 40Gi
 project:
   type: name
   name: my-project
 users:
   - name: eksa
     sshAuthorizedKeys:
       - ssh-rsa AAAA…
 vcpuSockets: 2
 vcpusPerSocket: 1
---

Cluster Fields

name (required)

Name of your cluster (mgmt in this example).

clusterNetwork (required)

Specific network configuration for your Kubernetes cluster.

clusterNetwork.cniConfig (required)

CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode

Optionally, you may specify a policyEnforcementMode of default, always, or never.
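
For example, a minimal sketch of this setting:

clusterNetwork:
  cniConfig:
    cilium:
      policyEnforcementMode: always   # one of default, always, never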

clusterNetwork.pods.cidrBlocks[0] (required)

Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.

clusterNetwork.services.cidrBlocks[0] (required)

Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver load balancing. Suggestions on how to ensure this IP does not cause issues during the cluster creation process are here.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations.count

Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount.

workerNodeGroupConfigurations.machineGroupRef (required)

Refers to the Kubernetes object with Nutanix specific configuration for your nodes. See NutanixMachineConfig fields below.

workerNodeGroupConfigurations.name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations.autoscalingConfiguration.minCount

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations.autoscalingConfiguration.maxCount

Maximum number of nodes for this node group’s autoscaling configuration.

datacenterRef

Refers to the Kubernetes object with Nutanix environment specific configuration. See NutanixDatacenterConfig fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.25, 1.24, 1.23, 1.22, 1.21

NutanixDatacenterConfig Fields

endpoint (required)

The Prism Central server fully qualified domain name or IP address. If the server IP is used, the PC SSL certificate must have an IP SAN configured.

port (required)

The Prism Central server port. (Default: 9440)

insecure (optional)

Set insecure to true if the Prism Central server does not have a valid certificate. This is not recommended for production use cases. (Default: false)
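
For example, a sketch of a NutanixDatacenterConfig spec that skips certificate verification (the endpoint shown is the example value from the template above):

spec:
  endpoint: pc01.cloud.internal   # example Prism Central endpoint
  port: 9440
  insecure: true                  # skip certificate verification (not recommended for production)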

additionalTrustBundle (optional; required if using a self-signed PC SSL certificate)

The PEM encoded CA trust bundle.

The additionalTrustBundle needs to be populated with the PEM-encoded x509 certificate of the Root CA that issued the certificate for Prism Central. Suggestions on how to obtain this certificate are here.

Example:

 additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    <certificate string>
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    <certificate string>
    -----END CERTIFICATE-----    

NutanixMachineConfig Fields

cluster

Reference to the Prism Element cluster.

cluster.type

Type to identify the Prism Element cluster. (Permitted values: name or uuid)

cluster.name

Name of the Prism Element cluster.

cluster.uuid

UUID of the Prism Element cluster.
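
For example, a sketch referencing the Prism Element cluster by UUID rather than name (the UUID shown is a placeholder):

cluster:
  type: uuid
  uuid: "00000000-0000-0000-0000-000000000000"   # placeholder Prism Element cluster UUID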

image

Reference to the OS image used for the system disk.

image.type

Type to identify the OS image. (Permitted values: name or uuid)

image.name (name or UUID required)

Name of the image

image.uuid (name or UUID required)

UUID of the image

memorySize

Size of RAM on virtual machines (Default: 4Gi)

osFamily (optional)

Operating System on virtual machines. (Permitted values: ubuntu)

subnet

Reference to the subnet to be assigned to the VMs.

subnet.name (name or UUID required)

Name of the subnet.

subnet.type

Type to identify the subnet. (Permitted values: name or uuid)

subnet.uuid (name or UUID required)

UUID of the subnet.

systemDiskSize

Amount of storage assigned to the system disk. (Default: 40Gi)

vcpuSockets

Amount of vCPU sockets. (Default: 2)

vcpusPerSocket

Amount of vCPUs per socket. (Default: 1)

project (optional)

Reference to an existing project used for the virtual machines.

project.type

Type to identify the project. (Permitted values: name or uuid)

project.name (name or UUID required)

Name of the project

project.uuid (name or UUID required)

UUID of the project

users (optional)

The users you want to configure to access your virtual machines. Only one is permitted at this time.

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh.

The default is eksa if osFamily=ubuntu

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If no value is specified, a key is generated in your $(pwd)/<cluster-name> folder by default.

3 - Snow configuration

Full EKS Anywhere configuration reference for an AWS Snow cluster.

This is a generic template with detailed descriptions below for reference. Additional optional configuration can also be included.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 10.1.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: ""
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
  datacenterRef:
    kind: SnowDatacenterConfig
    name: my-cluster-datacenter
  externalEtcdConfiguration:
    count: 3
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
  kubernetesVersion: "1.25"
  workerNodeGroupConfigurations:
  - count: 1
    machineGroupRef:
      kind: SnowMachineConfig
      name: my-cluster-machines
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowDatacenterConfig
metadata:
  name: my-cluster-datacenter
spec: {}

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowMachineConfig
metadata:
  name: my-cluster-machines
spec:
  amiID: ""
  instanceType: sbe-c.large
  sshKeyName: ""
  osFamily: ubuntu
  devices:
  - ""
  containersVolume:
    size: 25
  network:
    directNetworkInterfaces:
    - index: 1
      primary: true
      ipPoolRef:
        kind: SnowIPPool
        name: ip-pool-1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: SnowIPPool
metadata:
  name: ip-pool-1
spec:
  pools:
  - ipStart: 192.168.1.2
    ipEnd: 192.168.1.14
    subnet: 192.168.1.0/24
    gateway: 192.168.1.1
  - ipStart: 192.168.1.55
    ipEnd: 192.168.1.250
    subnet: 192.168.1.0/24
    gateway: 192.168.1.1

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Specific network configuration for your Kubernetes cluster.

clusterNetwork.cniConfig (required)

CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode

Optionally, you may specify a policyEnforcementMode of default, always, or never.

clusterNetwork.pods.cidrBlocks[0] (required)

Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the devices.

clusterNetwork.services.cidrBlocks[0] (required)

Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the devices.

clusterNetwork.dns.resolvConf.path (optional)

Path to the file with a custom DNS resolver configuration.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other devices.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver load balancing.

controlPlaneConfiguration.taints

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations.count

Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount.

workerNodeGroupConfigurations.machineGroupRef (required)

Refers to the Kubernetes object with Snow specific configuration for your nodes. See SnowMachineConfig Fields below.

workerNodeGroupConfigurations.name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations.autoscalingConfiguration.minCount

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations.autoscalingConfiguration.maxCount

Maximum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations.taints

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations.labels

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

externalEtcdConfiguration.count

Number of etcd members.

externalEtcdConfiguration.machineGroupRef

Refers to the Kubernetes object with Snow specific configuration for your etcd members. See SnowMachineConfig Fields below.

datacenterRef

Refers to the Kubernetes object with Snow environment specific configuration. See SnowDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.25, 1.24, 1.23, 1.22, 1.21

SnowDatacenterConfig Fields

identityRef

Refers to the Kubernetes secret object with Snow devices credentials used to reconcile the cluster.
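
As a sketch, assuming the credentials are stored in a Kubernetes secret named my-snow-credentials (a hypothetical name), the SnowDatacenterConfig spec would look like:

spec:
  identityRef:
    kind: Secret
    name: my-snow-credentials   # hypothetical secret holding the Snow devices credentials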

SnowMachineConfig Fields

amiID (optional)

AMI ID from which to create the machine instance. The Snow provider offers AMI lookup logic that selects a suitable AMI based on the Kubernetes version and osFamily if this field is empty.

instanceType (optional)

Type of the Snow EC2 machine instance. See Quotas for Compute Instances on a Snowball Edge Device for supported instance types on Snow (Default: sbe-c.large).

osFamily

Operating System on instance machines. Permitted value: ubuntu.

physicalNetworkConnector (optional)

Type of snow physical network connector to use for creating direct network interfaces. Permitted values: SFP_PLUS, QSFP, RJ45 (Default: SFP_PLUS).

sshKeyName (optional)

Name of the AWS Snow SSH key pair you want to configure to access your machine instances.

The default is eksa-default-{cluster-name}-{uuid}.

devices

A device IP list from which to bootstrap and provision machine instances.
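
For example, a sketch with two placeholder device IPs:

devices:
- "192.168.1.31"   # placeholder Snowball Edge device IP
- "192.168.1.32"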

network

Custom network settings for the machine instances. DHCP and static IP configurations are supported.

network.directNetworkInterfaces[0].index (optional)

Index number of a direct network interface (DNI) used to clarify the position in the list. Must be no smaller than 1 and no greater than 8.

network.directNetworkInterfaces[0].primary (optional)

Whether the DNI is primary or not. One and only one primary DNI is required in the directNetworkInterfaces list.

network.directNetworkInterfaces[0].vlanID (optional)

VLAN ID to use for the DNI.

network.directNetworkInterfaces[0].dhcp (optional)

Whether DHCP is used to assign an IP address to the DNI.

network.directNetworkInterfaces[0].ipPoolRef (optional)

Refers to a SnowIPPool object that provides a range of IP addresses. When specified, an IP address selected from the pool will be allocated to the DNI.
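
As an alternative to the static IP pool shown in the template above, the following sketch uses a DHCP-assigned DNI on a VLAN (the VLAN ID is a placeholder):

network:
  directNetworkInterfaces:
  - index: 1
    primary: true
    vlanID: 100    # placeholder VLAN ID
    dhcp: true     # assign the DNI address via DHCP instead of an ipPoolRef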

containersVolume (optional)

Configuration option for customizing containers data storage volume.

containersVolume.size

Size of the storage for containerd runtime in Gi.

This field is optional for Ubuntu. If specified, the size must be no smaller than 8 Gi.

containersVolume.deviceName (optional)

Containers volume device name.

containersVolume.type (optional)

Type of the containers volume. Permitted values: sbp1, sbg1. (Default: sbp1)

sbp1 stands for capacity-optimized HDD. sbg1 is performance-optimized SSD.

nonRootVolumes (optional)

Configuration options for the non root storage volumes.

nonRootVolumes[0].deviceName

Non root volume device name. Must be specified and cannot have prefix “/dev/sda” as it is reserved for root volume and containers volume.

nonRootVolumes[0].size

Size of the storage device for the non root volume. Must be no smaller than 8 Gi.

nonRootVolumes[0].type (optional)

Type of the non root volume. Permitted values: sbp1, sbg1. (Default: sbp1)

sbp1 stands for capacity-optimized HDD. sbg1 is performance-optimized SSD.
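
A combined sketch of the volume settings described above (the device name and sizes are placeholders):

containersVolume:
  size: 25                 # Gi; must be no smaller than 8 Gi
  type: sbp1               # capacity-optimized HDD
nonRootVolumes:
- deviceName: /dev/sdc     # placeholder; must not use the /dev/sda prefix
  size: 25                 # Gi; must be no smaller than 8 Gi
  type: sbg1               # performance-optimized SSD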

SnowIPPool Fields

pools[0].ipStart

Start address of an IP range.

pools[0].ipEnd

End address of an IP range.

pools[0].subnet

The IP subnet, in CIDR notation, used to determine whether an address in the pool belongs to this subnet.

pools[0].gateway

Gateway of the subnet, used for routing purposes.

4 - vSphere configuration

Full EKS Anywhere configuration reference for a VMware vSphere cluster.

This is a generic template with detailed descriptions below for reference.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name             # Name of the cluster (required)
spec:
   clusterNetwork:                   # Cluster network configuration (required)
      cniConfig:                     # Cluster CNI plugin - default: cilium (required)
         cilium: {}
      pods:
         cidrBlocks:                 # Subnet CIDR notation for pods (required)
            - 192.168.0.0/16
      services:
         cidrBlocks:                 # Subnet CIDR notation for services (required)
            - 10.96.0.0/12
   controlPlaneConfiguration:        # Specific cluster control plane config (required)
      count: 2                       # Number of control plane nodes (required)
      endpoint:                      # IP for control plane endpoint (required)
         host: "192.168.0.10"
      machineGroupRef:               # vSphere-specific Kubernetes node config (required)
        kind: VSphereMachineConfig
        name: my-cluster-machines
      taints:                        # Taints applied to control plane nodes 
      - key: "key1"
        value: "value1"
        effect: "NoSchedule"
      labels:                        # Labels applied to control plane nodes 
        "key1": "value1"
        "key2": "value2" 
   datacenterRef:                    # Kubernetes object with vSphere-specific config 
      kind: VSphereDatacenterConfig  
      name: my-cluster-datacenter
   externalEtcdConfiguration:
     count: 3                        # Number of etcd members 
     machineGroupRef:                # vSphere-specific Kubernetes etcd config
        kind: VSphereMachineConfig
        name: my-cluster-machines
   kubernetesVersion: "1.25"         # Kubernetes version to use for the cluster (required)
   workerNodeGroupConfigurations:    # List of node groups you can define for workers (required) 
   - count: 2                        # Number of worker nodes 
     machineGroupRef:                # vSphere-specific Kubernetes node objects (required) 
       kind: VSphereMachineConfig
       name: my-cluster-machines
     name: md-0                      # Name of the worker nodegroup (required) 
     taints:                         # Taints to apply to worker node group nodes 
     - key: "key1"                       
       value: "value1"
       effect: "NoSchedule"
     labels:                         # Labels to apply to worker node group nodes 
       "key1": "value1"
       "key2": "value2" 
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereDatacenterConfig
metadata:
   name: my-cluster-datacenter
spec:
  datacenter: "datacenter1"          # vSphere datacenter name on which to deploy EKS Anywhere (required) 
  disableCSI: false                  # Set to true to not have EKS Anywhere install and manage vSphere CSI driver
  server: "myvsphere.local"          # FQDN or IP address of vCenter server (required) 
  network: "network1"                # Path to the VM network on which to deploy EKS Anywhere (required) 
  insecure: false                    # Set to true if vCenter does not have a valid certificate 
  thumbprint: "1E:3B:A1:4C:B2:..."   # SHA1 thumprint of vCenter server certificate (required if insecure=false)

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: VSphereMachineConfig
metadata:
   name: my-cluster-machines
spec:
  diskGiB:  25                       # Size of disk on VMs, if no snapshots
  datastore: "datastore1"            # Path to vSphere datastore to deploy EKS Anywhere on (required)
  folder: "folder1"                  # Path to VM folder for EKS Anywhere cluster VMs (required)
  numCPUs: 2                         # Number of CPUs on virtual machines
  memoryMiB: 8192                    # Size of RAM on VMs
  osFamily: "bottlerocket"           # Operating system on VMs
  resourcePool: "resourcePool1"      # vSphere resource pool for EKS Anywhere VMs (required)
  storagePolicyName: "storagePolicy1"    # Storage policy name associated with VMs
  template: "bottlerocket-kube-v1-25"    # VM template for EKS Anywhere (required for RHEL/Ubuntu-based OVAs)
  users:                             # Add users to access VMs via SSH
  - name: "ec2-user"                 # Name of each user set to access VMs
    sshAuthorizedKeys:               # SSH keys for user needed to access VMs
    - "ssh-rsa AAAAB3NzaC1yc2E..."
  tags:                              # List of tags to attach to cluster VMs, in URN format
  - "urn:vmomi:InventoryServiceTag:5b3e951f-4e1d-4511-95b1-5ba1ea97245c:GLOBAL"
  - "urn:vmomi:InventoryServiceTag:cfee03d0-0189-4f27-8c65-fe75086a86cd:GLOBAL"

Additional optional configuration can also be included.

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Specific network configuration for your Kubernetes cluster.

clusterNetwork.cniConfig (required)

CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode

Optionally, you may specify a policyEnforcementMode of default, always, or never.

clusterNetwork.pods.cidrBlocks[0] (required)

Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.

clusterNetwork.services.cidrBlocks[0] (required)

Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.

clusterNetwork.dns.resolvConf.path (optional)

Path to the file with a custom DNS resolver configuration.
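
A minimal sketch, assuming a custom resolver file at /etc/my-resolv.conf (the path is a placeholder):

    clusterNetwork:
      dns:
        resolvConf:
          path: /etc/my-resolv.conf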

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver load balancing. Suggestions on how to ensure this IP does not cause issues during the cluster creation process are here

controlPlaneConfiguration.taints

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint. For k8s versions prior to 1.24, it replaces node-role.kubernetes.io/master. For k8s versions 1.24+, it replaces node-role.kubernetes.io/control-plane. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.
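
For example, a minimal sketch of taints and labels on the control plane configuration (the keys, values, and count are placeholders):

    controlPlaneConfiguration:
      count: 3
      endpoint:
        host: "<Control Plane Endpoint IP>"
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name-cp
      taints:
      - key: "key1"
        value: "value1"
        effect: "NoSchedule"
      labels:
        "key1": "value1"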

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations.count

Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount.

workerNodeGroupConfigurations.machineGroupRef (required)

Refers to the Kubernetes object with vsphere specific configuration for your nodes. See VSphereMachineConfig Fields below.

workerNodeGroupConfigurations.name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations.autoscalingConfiguration.minCount

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations.autoscalingConfiguration.maxCount

Maximum number of nodes for this node group’s autoscaling configuration.
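
For instance, a minimal sketch of a worker node group that the Cluster Autoscaler can scale between 1 and 5 nodes (the bounds are placeholders); see the Autoscaling configuration section later in this document for the full workflow:

    workerNodeGroupConfigurations:
    - name: md-0
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name
      autoscalingConfiguration:
        minCount: 1
        maxCount: 5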

workerNodeGroupConfigurations.taints

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.
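
For example, a sketch with one untainted group that stays schedulable for regular workloads and one tainted group (the taint key and value are placeholders):

    workerNodeGroupConfigurations:
    - name: md-0                      # no taints; remains schedulable
      count: 1
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name
    - name: md-1                      # dedicated, tainted group
      count: 1
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name
      taints:
      - key: "dedicated"
        value: "special-workload"
        effect: "NoSchedule"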

workerNodeGroupConfigurations.labels

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

externalEtcdConfiguration.count

Number of etcd members

externalEtcdConfiguration.machineGroupRef

Refers to the Kubernetes object with vsphere specific configuration for your etcd members. See VSphereMachineConfig Fields below.

datacenterRef

Refers to the Kubernetes object with vsphere environment specific configuration. See VSphereDatacenterConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.25, 1.24, 1.23, 1.22, 1.21

VSphereDatacenterConfig Fields

datacenter (required)

The name of the vSphere datacenter to deploy the EKS Anywhere cluster on. For example SDDC-Datacenter.

network (required)

The path to the VM network to deploy your EKS Anywhere cluster on. For example, /<DATACENTER>/network/<NETWORK_NAME>. Use govc find -type n to see a list of networks.

server (required)

The vCenter server fully qualified domain name or IP address. If the server IP is used, the thumbprint must be set or insecure must be set to true.

insecure (optional)

Set insecure to true if the vCenter server does not have a valid certificate. (Default: false)

thumbprint (required if insecure=false)

The SHA1 thumbprint of the vCenter server certificate, which is only required if you have a self-signed certificate.

There are several ways to obtain your vCenter thumbprint. The easiest way is if you have govc installed, you can run:

govc about.cert -thumbprint -k

Another way is from the vCenter web UI: go to Administration/Certificate Management and click view details of the machine certificate. The format of this thumbprint does not exactly match the required format, though; you will need to add : to separate each hexadecimal value.

Another way to get the thumbprint is to use this command with your server’s certificate in a file named ca.crt:

openssl x509 -sha1 -fingerprint -in ca.crt -noout

If you specify the wrong thumbprint, an error message will be printed with the expected thumbprint. If no valid certificate is being used, insecure must be set to true.

disableCSI (optional)

Set disableCSI to true if you don’t want to have EKS Anywhere install and manage the vSphere CSI driver for you. More details on the driver are here

NOTE: If you upgrade a cluster and disable the vSphere CSI driver after it has already been installed by EKS Anywhere, you will need to remove the resources manually from the cluster. Delete the DaemonSet and Deployment first, as they rely on the other resources. This should be done after setting disableCSI to true and running upgrade cluster.

These are the resources you would need to delete:

  • vsphere-csi-controller-role (kind: ClusterRole)
  • vsphere-csi-controller-binding (kind: ClusterRoleBinding)
  • csi.vsphere.vmware.com (kind: CSIDriver)

These are the resources you would need to delete in the kube-system namespace:

  • vsphere-csi-controller (kind: ServiceAccount)
  • csi-vsphere-config (kind: Secret)
  • vsphere-csi-node (kind: DaemonSet)
  • vsphere-csi-controller (kind: Deployment)

These are the resources you would need to delete in the eksa-system namespace from the management cluster.

  • <cluster-name>-csi (kind: ClusterResourceSet)

Note: If your cluster is self-managed, you would delete <cluster-name>-csi (kind: ClusterResourceSet) from the same cluster.

VSphereMachineConfig Fields

memoryMiB (optional)

Size of RAM on virtual machines (Default: 8192)

numCPUs (optional)

Number of CPUs on virtual machines (Default: 2)

osFamily (optional)

Operating System on virtual machines. Permitted values: bottlerocket, ubuntu, redhat (Default: bottlerocket)

diskGiB (optional)

Size of disk on virtual machines if snapshots aren’t included (Default: 25)

users (optional)

The users you want to configure to access your virtual machines. Only one is permitted at this time.

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh.

The default is ec2-user if osFamily=bottlerocket and capv if osFamily=ubuntu.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder.

template (optional)

The VM template to use for your EKS Anywhere cluster. This template was created when you imported the OVA file into vSphere. This is a required field if you are using Ubuntu-based or RHEL-based OVAs.

datastore (required)

The path to the vSphere datastore to deploy your EKS Anywhere cluster on, for example /<DATACENTER>/datastore/<DATASTORE_NAME>. Use govc find -type s to get a list of datastores.

folder (required)

The path to a VM folder for your EKS Anywhere cluster VMs. This allows you to organize your VMs. If the folder does not exist, it will be created for you. If the folder is blank, the VMs will go in the root folder. For example /<DATACENTER>/vm/<FOLDER_NAME>/.... Use govc find -type f to get a list of existing folders.

resourcePool (required)

The vSphere resource pool for your VMs in the EKS Anywhere cluster. Examples of resource pool values include:

  • If there is no resource pool: /<datacenter>/host/<cluster-name>/Resources
  • If there is a resource pool: /<datacenter>/host/<cluster-name>/Resources/<resource-pool-name>
  • The wildcard option */Resources also often works.

Use govc find -type p to get a list of available resource pools.

storagePolicyName (optional)

The storage policy name associated with your VMs. Generally this can be left blank. Use govc storage.policy.ls to get a list of available storage policies.

tags (optional)

Optional list of tags to attach to your cluster VMs in the URN format.

Example:

  tags:
  - urn:vmomi:InventoryServiceTag:8e0ce079-0675-47d6-8665-16ada4e6dabd:GLOBAL

Optional VSphere Credentials

Use the following environment variables to configure Cloud Provider and CSI Driver with different credentials.

EKSA_VSPHERE_CP_USERNAME

Username for Cloud Provider (Default: $EKSA_VSPHERE_USERNAME).

EKSA_VSPHERE_CP_PASSWORD

Password for Cloud Provider (Default: $EKSA_VSPHERE_PASSWORD).

EKSA_VSPHERE_CSI_USERNAME

Username for CSI Driver (Default: $EKSA_VSPHERE_USERNAME).

EKSA_VSPHERE_CSI_PASSWORD

Password for CSI Driver (Default: $EKSA_VSPHERE_PASSWORD).

5 - CloudStack configuration

Full EKS Anywhere configuration reference for a CloudStack cluster.

This is a generic template with detailed descriptions below for reference. The following additional optional configuration can also be included:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
    endpoint:
      host: ""
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name-cp
    taints:
    - key: ""
      value: ""
      effect: ""
    labels:
      "<key1>": ""
      "<key2>": ""
  datacenterRef:
    kind: CloudStackDatacenterConfig
    name: my-cluster-name-datacenter
  externalEtcdConfiguration:
    count: 3
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name-etcd
  kubernetesVersion: "1.23"
  managementCluster:
    name: my-cluster-name
  workerNodeGroupConfigurations:
  - count: 2
    machineGroupRef:
      kind: CloudStackMachineConfig
      name: my-cluster-name
    name: md-0
    taints:
    - key: ""
      value: ""
      effect: ""
    labels:
      "<key1>": ""
      "<key2>": ""
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackDatacenterConfig
metadata:
  name: my-cluster-name-datacenter
spec:
  availabilityZones:
  - account: admin
    credentialsRef: global
    domain: domain1
    managementApiEndpoint: ""
    name: az-1
    zone:
      name: zone1
      network:
        name: "net1"
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-cp
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: capc
    sshAuthorizedKeys:
    - ssh-rsa AAAA...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/kubernetes: /data-small/var/log/kubernetes
  affinityGroupIds:
  - control-plane-anti-affinity
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: capc
    sshAuthorizedKeys:
    - ssh-rsa AAAA...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/log/pods: /data-small/var/log/pods
    /var/log/containers: /data-small/var/log/containers
  affinityGroupIds:
  - worker-affinity
  userCustomDetails:
    foo: bar
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: CloudStackMachineConfig
metadata:
  name: my-cluster-name-etcd
spec:
  computeOffering:
    name: "m4-large"
  users:
  - name: "capc"
    sshAuthorizedKeys: 
    - "ssh-rsa AAAAB3N...
  template:
    name: "rhel8-k8s-118"
  diskOffering:
    name: "Small"
    mountPath: "/data-small"
    device: "/dev/vdb"
    filesystem: "ext4"
    label: "data_disk"
  symlinks:
    /var/lib: /data-small/var/lib
  affinityGroupIds:
  - etcd-affinity
---

Cluster Fields

name (required)

Name of your cluster (my-cluster-name in this example).

clusterNetwork (required)

Specific network configuration for your Kubernetes cluster.

clusterNetwork.cniConfig (required)

CNI plugin configuration to be used in the cluster. The only supported configuration at the moment is cilium.

clusterNetwork.cniConfig.cilium.policyEnforcementMode

Optionally, you may specify a policyEnforcementMode of default, always, never.

clusterNetwork.pods.cidrBlocks[0] (required)

Subnet used by pods in CIDR notation. Please note that only 1 custom pods CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.

clusterNetwork.services.cidrBlocks[0] (required)

Subnet used by services in CIDR notation. Please note that only 1 custom services CIDR block specification is permitted. This CIDR block should not conflict with the network subnet range selected for the VMs.

controlPlaneConfiguration (required)

Specific control plane configuration for your Kubernetes cluster.

controlPlaneConfiguration.count (required)

Number of control plane nodes

controlPlaneConfiguration.endpoint.host (required)

A unique IP you want to use for the control plane VM in your EKS Anywhere cluster. Choose an IP in your network range that does not conflict with other VMs.

NOTE: This IP should be outside the network DHCP range as it is a floating IP that gets assigned to one of the control plane nodes for kube-apiserver load balancing. Suggestions on how to ensure this IP does not cause issues during the cluster creation process are here

controlPlaneConfiguration.machineGroupRef (required)

Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields below.

controlPlaneConfiguration.taints

A list of taints to apply to the control plane nodes of the cluster.

Replaces the default control plane taint, node-role.kubernetes.io/master. The default control plane components will tolerate the provided taints.

Modifying the taints associated with the control plane configuration will cause new nodes to be rolled-out, replacing the existing nodes.

NOTE: The taints provided will be used instead of the default control plane taint node-role.kubernetes.io/master. Any pods that you run on the control plane nodes must tolerate the taints you provide in the control plane configuration.

controlPlaneConfiguration.labels

A list of labels to apply to the control plane nodes of the cluster. This is in addition to the labels that EKS Anywhere will add by default.

A special label value is supported by the CAPC provider:

    labels:
      cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain

The ds.meta_data.failuredomain value will be replaced with a failuredomain name where the node is deployed, such as az-1.

Modifying the labels associated with the control plane configuration will cause new nodes to be rolled out, replacing the existing nodes.

datacenterRef

Refers to the Kubernetes object with CloudStack environment specific configuration. See CloudStackDatacenterConfig Fields below.

externalEtcdConfiguration.count

Number of etcd members

externalEtcdConfiguration.machineGroupRef

Refers to the Kubernetes object with CloudStack specific configuration for your etcd members. See CloudStackMachineConfig Fields below.

kubernetesVersion (required)

The Kubernetes version you want to use for your cluster. Supported values: 1.24, 1.23, 1.22, 1.21

managementCluster (required)

Identifies the name of the management cluster. If this is a standalone cluster or if it is serving as the management cluster for other workload clusters, this will be the same as the cluster name.

workerNodeGroupConfigurations (required)

This takes in a list of node groups that you can define for your workers. You may define one or more worker node groups.

workerNodeGroupConfigurations.count

Number of worker nodes. Optional if autoscalingConfiguration is used, in which case count will default to autoscalingConfiguration.minCount.

workerNodeGroupConfigurations.machineGroupRef (required)

Refers to the Kubernetes object with CloudStack specific configuration for your nodes. See CloudStackMachineConfig Fields below.

workerNodeGroupConfigurations.name (required)

Name of the worker node group (default: md-0)

workerNodeGroupConfigurations.autoscalingConfiguration.minCount

Minimum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations.autoscalingConfiguration.maxCount

Maximum number of nodes for this node group’s autoscaling configuration.

workerNodeGroupConfigurations.taints

A list of taints to apply to the nodes in the worker node group.

Modifying the taints associated with a worker node group configuration will cause new nodes to be rolled-out, replacing the existing nodes associated with the configuration.

At least one node group must not have NoSchedule or NoExecute taints applied to it.

workerNodeGroupConfigurations.labels

A list of labels to apply to the nodes in the worker node group. This is in addition to the labels that EKS Anywhere will add by default. A special label value is supported by the CAPC provider:

    labels:
      cluster.x-k8s.io/failure-domain: ds.meta_data.failuredomain

The ds.meta_data.failuredomain value will be replaced with a failuredomain name where the node is deployed, such as az-1.

Modifying the labels associated with a worker node group configuration will cause new nodes to be rolled out, replacing the existing nodes associated with the configuration.

CloudStackDatacenterConfig

availabilityZones.account (optional)

Account used to access CloudStack. As long as you pass valid credentials through availabilityZones.credentialsRef, this value is not required.

availabilityZones.credentialsRef (required)

If you passed credentials through the environment variable EKSA_CLOUDSTACK_B64ENCODED_SECRET noted in Create CloudStack production cluster, you can identify those credentials here. For that example, you would use the profile name global. You can instead use a previously created secret on the Kubernetes cluster in the eksa-system namespace.

availabilityZones.domain (optional)

CloudStack domain to deploy the cluster. The default is ROOT.

availabilityZones.managementApiEndpoint (required)

Location of the CloudStack API management endpoint. For example, http://10.11.0.2:8080/client/api.

availabilityZones.zone.{id,name} (required)

Name or ID of the CloudStack zone on which to deploy the cluster.

availabilityZones.zone.network.{id,name} (required)

CloudStack network name or ID to use with the cluster.

CloudStackMachineConfig

In the example above, there are separate CloudStackMachineConfig sections for the control plane (my-cluster-name-cp), worker (my-cluster-name) and etcd (my-cluster-name-etcd) nodes.

computeOffering.{id,name} (required)

Name or ID of the CloudStack compute offering.

users[0].name (optional)

The name of the user you want to configure to access your virtual machines through ssh. You can add as many users objects as you want.

The default is capc.

users[0].sshAuthorizedKeys (optional)

The SSH public keys you want to configure to access your virtual machines through ssh (as described below). Only 1 is supported at this time.

users[0].sshAuthorizedKeys[0] (optional)

This is the SSH public key that will be placed in authorized_keys on all EKS Anywhere cluster VMs so you can ssh into them. The user will be what is defined under name above. For example:

ssh -i <private-key-file> <user>@<VM-IP>

If you do not specify a value, a key is generated in your $(pwd)/<cluster-name> folder.

template.{id,name} (required)

The VM template to use for your EKS Anywhere cluster. Currently, a VM based on RHEL 8.6 is required. This can be a name or ID. See the Artifacts page for instructions for building RHEL-based images.

diskOffering (optional)

Name representing a disk you want to mount into nodes for this CloudStackMachineConfig.

diskOffering.mountPath (optional)

Mount point on which to mount the disk.

diskOffering.device (optional)

Device name of the disk partition to mount.

diskOffering.filesystem (optional)

File system type used to format the filesystem on the disk.

diskOffering.label (optional)

Label to apply to the disk partition.

symlinks (optional)

Symbolic link of a directory or file you want to mount from the host filesystem to the mounted filesystem.

userCustomDetails (optional)

Add key/value pairs to nodes in a CloudStackMachineConfig. These can be used for things like identifying sets of nodes that you want to add to a security group that opens selected ports.

affinityGroupIDs (optional)

Group ID to attach to the set of host systems to indicate how affinity is done for services on those systems.

affinity (optional)

Allows you to set pro and anti affinity for the CloudStackMachineConfig. This can be used in a mutually exclusive fashion with the affinityGroupIDs field.
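
A minimal sketch, assuming the string values "pro" and "anti" implied by the description above (treat the exact accepted values as an assumption and confirm them against the CloudStack provider documentation before use):

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: CloudStackMachineConfig
    metadata:
      name: my-cluster-name-cp
    spec:
      computeOffering:
        name: "m4-large"
      affinity: "anti"   # assumed value; use instead of affinityGroupIds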

6 - Optional configuration

Config reference to optional features for EKS Anywhere clusters

6.1 - Autoscaling configuration

EKS Anywhere cluster yaml autoscaling configuration specification reference

Cluster Autoscaling (Optional)

Cluster Autoscaler configuration in EKS Anywhere cluster spec

EKS Anywhere supports autoscaling worker node groups using the Kubernetes Cluster Autoscaler’s clusterapi cloudProvider.

  • Configure a worker node group to be picked up by a cluster autoscaler deployment by adding an autoscalingConfiguration block to the workerNodeGroupConfiguration:
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      workerNodeGroupConfigurations:
        - autoscalingConfiguration:
            minCount: 1
            maxCount: 5
          machineGroupRef:
            kind: VSphereMachineConfig
            name: worker-machine-a
          name: md-0
        - count: 1
          autoscalingConfiguration:
            minCount: 1
            maxCount: 3
          machineGroupRef:
            kind: VSphereMachineConfig
            name: worker-machine-b
          name: md-1
    

Note that if no count is specified it will default to the minCount value.

EKS Anywhere will automatically apply the following annotations to your MachineDeployment objects:

cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: <minCount>
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: <maxCount>

After deploying the Kubernetes Cluster Autoscaler from upstream or as a curated package, the deployment will pick up your MachineDeployment and scale the nodes as per your min and max count values.

Cluster Autoscaler Deployment Topologies

The Kubernetes Cluster Autoscaler can only scale a single cluster per deployment.

This means that each cluster you want to scale will need its own cluster autoscaler deployment.

We support three deployment topologies:

  1. Cluster Autoscaler deployed in the management cluster to autoscale the management cluster itself
  2. Cluster Autoscaler deployed in the management cluster to autoscale a remote workload cluster
  3. Cluster Autoscaler deployed in the workload cluster to autoscale the workload cluster itself

If your cluster architecture supports management clusters with resources to run additional workloads, you may want to consider using deployment topologies (1) and (2). Instructions for using this topology can be found here.

If your deployment topology runs small management clusters though, you may want to follow deployment topology (3) and deploy the cluster autoscaler to run in a workload cluster.

6.2 - CNI plugin configuration

EKS Anywhere cluster yaml cni plugin specification reference

Specifying CNI Plugin in EKS Anywhere cluster spec

EKS Anywhere currently supports two CNI plugins: Cilium and Kindnet. Only one of them can be selected for a cluster, and the plugin cannot be changed once the cluster is created. Up until the 0.7.x releases, the plugin had to be specified using the cni field on cluster spec. Starting with release 0.8, the plugin should be specified using the new cniConfig field as follows:

  • For selecting Cilium as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          cilium: {}
    

    EKS Anywhere selects this as the default plugin when generating a cluster config.

  • Or for selecting Kindnetd as the CNI plugin:

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 192.168.0.0/16
        services:
          cidrBlocks:
          - 10.96.0.0/12
        cniConfig:
          kindnetd: {}
    

NOTE: EKS Anywhere allows specifying only 1 plugin for a cluster and does not allow switching the plugins after the cluster is created.

Policy Configuration options for Cilium plugin

Cilium accepts a policy enforcement mode from the user to determine the allowed traffic between pods. The allowed values for this mode are: default, always and never. Please refer to the official Cilium documentation for more details on how each mode affects communication within the cluster, and choose a mode accordingly. If you do not set this field, Cilium is launched with the default mode. Starting with release 0.8, Cilium’s policy enforcement mode can be set through the cluster spec as follows:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium: 
        policyEnforcementMode: "always"

Please note that if the always mode is selected, all communication between pods is blocked unless NetworkPolicy objects allowing communication are created. In order to ensure that the cluster gets created successfully, EKS Anywhere will create the required NetworkPolicy objects for all its core components. But it is up to the user to create the NetworkPolicy objects needed for the user workloads once the cluster is created.

Network policies created by EKS Anywhere for “always” mode

As mentioned above, if Cilium is configured with policyEnforcementMode set to always, EKS Anywhere creates NetworkPolicy objects to enable communication between its core components. EKS Anywhere will create NetworkPolicy resources in the following namespaces allowing all ingress/egress traffic by default:

  • kube-system
  • eksa-system
  • All core Cluster API namespaces:
    • capi-system
    • capi-kubeadm-bootstrap-system
    • capi-kubeadm-control-plane-system
    • etcdadm-bootstrap-provider-system
    • etcdadm-controller-system
    • cert-manager
  • Infrastructure provider’s namespace (for instance, capd-system OR capv-system)
  • If Gitops is enabled, then the gitops namespace (flux-system by default)

This is the NetworkPolicy that will be created in these namespaces for the cluster:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress-egress
  namespace: test
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress

Switching the Cilium policy enforcement mode

The policy enforcement mode for Cilium can be changed as a part of cluster upgrade through the cli upgrade command.

  1. Switching to always mode: When switching from default/never to always mode, EKS Anywhere will create the required NetworkPolicy objects for its core components (listed above). This will ensure that the cluster gets upgraded successfully. But it is up to the user to create the NetworkPolicy objects required for the user workloads.

  2. Switching from always mode: When switching from always to default mode, EKS Anywhere will not delete any of the existing NetworkPolicy objects, including the ones required for EKS Anywhere components (listed above). The user must delete NetworkPolicy objects as needed.

Node IPs configuration option

Starting with release v0.10, the node-cidr-mask-size flag for the Kubernetes controller manager (kube-controller-manager) is configurable via the EKS Anywhere cluster spec. Because clusterNetwork.nodes is an optional field, it is not generated in the EKS Anywhere spec by the generate clusterconfig command. This nodes block needs to be added manually to the cluster spec under the clusterNetwork section:

  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
    cniConfig:
      cilium: {}
    nodes:
      cidrMaskSize: 24

If the user does not specify the clusterNetwork.nodes field in the cluster yaml spec, the value for this flag defaults to 24 for IPv4. Please note that this mask size needs to be greater than the pods CIDR mask size. In the above spec, the pod CIDR mask size is 16 and the node CIDR mask size is 24. This gives the cluster 256 blocks of /24 node networks. For example, node1 will get 192.168.0.0/24, node2 will get 192.168.1.0/24, node3 will get 192.168.2.0/24 and so on.

To support more than 256 nodes, the cluster CIDR block needs to be large enough, and the node CIDR mask size small enough, to provide that many IPs. For instance, to support 1024 nodes, a user can do either of the following:

  • Set the pods cidr blocks to 192.168.0.0/16 and node cidr mask size to 26
  • Set the pods cidr blocks to 192.168.0.0/15 and node cidr mask size to 25

Please note that the node CIDR block (controlled by node-cidr-mask-size) needs to be large enough to accommodate the number of pods you want to run on each node. A mask size of 24 gives enough IP addresses for about 250 pods per node, whereas a size of 26 only gives about 60 IPs. This is an immutable field, and the value can’t be updated once the cluster has been created.
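
For instance, a minimal sketch of the first option above: a /16 pods CIDR combined with a /26 node mask yields 1024 node blocks of roughly 60 pod IPs each:

    clusterNetwork:
      pods:
        cidrBlocks:
        - 192.168.0.0/16
      services:
        cidrBlocks:
        - 10.96.0.0/12
      cniConfig:
        cilium: {}
      nodes:
        cidrMaskSize: 26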

6.3 - IAM Roles for Service Accounts configuration

EKS Anywhere cluster spec for Pod IAM (IRSA)

IAM Role for Service Account on EKS Anywhere clusters with self-hosted signing keys

IAM Roles for Service Account (IRSA) enables applications running in clusters to authenticate with AWS services using IAM roles. The current solution for leveraging this in EKS Anywhere involves creating your own OIDC provider for the cluster, and hosting your cluster’s public service account signing key. The public keys along with the OIDC discovery document should be hosted somewhere that AWS STS can discover them. The steps below assume the keys will be hosted on a publicly accessible S3 bucket. Refer to this doc to ensure that the S3 bucket is publicly accessible.

The steps below are based on the guide for configuring IRSA for DIY Kubernetes, with modifications specific to EKS Anywhere’s cluster provisioning workflow. The main modification is the process of generating the keys.json document. As per the original guide, the user has to create the service account signing keys, and then use that to create the keys.json document prior to cluster creation. This order is reversed for EKS Anywhere clusters, so you will create the cluster first, and then retrieve the service account signing key generated by the cluster, and use it to create the keys.json document. The sections below show how to do this in detail.

Create an OIDC provider and make its discovery document publicly accessible

  1. Create an s3 bucket to host the public signing keys and OIDC discovery document for your cluster as per this section. Ensure you follow all the steps and save the $HOSTNAME and $ISSUER_HOSTPATH.

  2. Create the OIDC discovery document as follows:

    cat <<EOF > discovery.json
    {
        "issuer": "https://$ISSUER_HOSTPATH",
        "jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
        "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
        "response_types_supported": [
            "id_token"
        ],
        "subject_types_supported": [
            "public"
        ],
        "id_token_signing_alg_values_supported": [
            "RS256"
        ],
        "claims_supported": [
            "sub",
            "iss"
        ]
    }
    EOF
    
  3. Upload it to the publicly accessible S3 bucket:

    aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
    
  4. Create an OIDC provider for your cluster. Set the Provider URL to https://$ISSUER_HOSTPATH, and audience to sts.amazonaws.com.

  5. Note down the Provider field of OIDC provider after it is created.

  6. Assign an IAM role to this OIDC provider.

    1. From the IAM console, select and click on the OIDC provider created from above, and click on Assign role at the top right.

    2. Select Create a new role.

    3. In the Select type of trusted entity section, choose Web identity.

    4. In the Choose a web identity provider section:

      • For Identity provider, choose the auto selected Identity Provider URL for your cluster.
      • For Audience, choose sts.amazonaws.com.
    5. Choose Next: Permissions.

    6. In the Attach Policy section, select the IAM policy that has the permissions that you want your applications running in the pods to use.

    7. Continue with the next sections of adding tags if desired and a suitable name for this role and create the role.

    8. Below is a sample trust policy of IAM role for your pods. Remember to replace Account ID and ISSUER_HOSTPATH with required values.

      {
       "Version": "2012-10-17",
       "Statement": [
        {
         "Effect": "Allow",
         "Principal": {
          "Federated": "arn:aws:iam::111122223333:oidc-provider/ISSUER_HOSTPATH"
         },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
          "__doc_comment": "scope the role to the service account (optional)",
          "StringEquals": {
           "ISSUER_HOSTPATH:sub": "system:serviceaccount:default:my-serviceaccount"
          },
          "__doc_comment": "OR scope the role to a namespace (optional)",
          "StringLike": {
           "ISSUER_HOSTPATH/CLUSTER_ID:sub": ["system:serviceaccount:default:*","system:serviceaccount:observability:*"]
          }
         }
        }
       ]
      }
      
    9. After the role is created, note down the name of this IAM Role as OIDC_IAM_ROLE.

    10. Once the cluster is created, you can create service accounts and grant them this role by editing the trust relationship of this role. You can use StringLike condition to add required service accounts, as mentioned in the above sample. Please also refer to section Configure the trust relationship for the OIDC provider’s IAM Role .

Create the EKS Anywhere cluster

  1. When creating the EKS Anywhere cluster, you need to configure the kube-apiserver’s service-account-issuer flag so it can issue and mount projected service account tokens in pods. For this, use the value obtained in the first section for $ISSUER_HOSTPATH as the service-account-issuer. Configure the kube-apiserver by setting this value through the EKS Anywhere cluster spec as follows:
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: Cluster
    metadata:
      name: my-cluster-name
    spec:
      podIamConfig:
        serviceAccountIssuer: https://$ISSUER_HOSTPATH
    

Set the remaining fields in cluster spec as required and create the cluster using the eksctl anywhere create cluster command.

Generate keys.json and make it publicly accessible

  1. The cluster provisioning workflow generates a pair of service account signing keys. Retrieve the public signing key generated and used by the cluster, and create a keys.json document containing the public signing key.

    git clone https://github.com/aws/amazon-eks-pod-identity-webhook
    cd amazon-eks-pod-identity-webhook
    kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub    
    go run ./hack/self-hosted/main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json
    
  2. Upload the keys.json document to the s3 bucket.

    aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json
    

Deploy pod identity webhook

  1. After hosting the service account public signing key and OIDC discovery documents, the applications running in pods can start accessing the desired AWS resources, as long as the pod is mounted with the right service account tokens. This part of configuring the pods with the right service account tokens and env vars is automated by the amazon pod identity webhook. Once the webhook is deployed, it mutates any pods launched using service accounts annotated with eks.amazonaws.com/role-arn.

  2. Clone amazon-eks-pod-identity-webhook if not done already.

  3. Set the $KUBECONFIG env var to the path of the EKS Anywhere cluster.

  4. Create my-service-account.yaml with OIDC_IAM_ROLE and other annotations as mentioned in sample below.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-serviceaccount
      namespace: default
      annotations:
        # set this with value of OIDC_IAM_ROLE      
        eks.amazonaws.com/role-arn: "arn:aws:iam::111122223333:role/s3-reader"
        # optional: Defaults to "sts.amazonaws.com" if not set
        eks.amazonaws.com/audience: "sts.amazonaws.com"
        # optional: When set to "true", adds AWS_STS_REGIONAL_ENDPOINTS env var
        #   to containers
        eks.amazonaws.com/sts-regional-endpoints: "true"
        # optional: Defaults to 86400 for expirationSeconds if not set
        #   Note: This value can be overwritten if specified in the pod 
        #         annotation as shown in the next step.
        eks.amazonaws.com/token-expiration: "86400"
    
  5. Run the following command to apply the manifests for the amazon-eks-pod-identity-webhook. The image used here will be pulled from docker.io. Optionally, the image can be imported into (or proxied through) your private registry. Change the IMAGE= argument here to your private registry if needed.

    make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest
    
  6. Finally, apply the my-service-account.yaml file to create your service account.

    kubectl apply -f my-service-account.yaml
    
  7. You can validate IRSA by using the test steps mentioned here. Ensure the awscli pod is deployed in the same namespace as the ServiceAccount pod-identity-webhook.

Configure the trust relationship for the OIDC provider’s IAM Role

In order to grant certain service accounts access to the desired AWS resources, edit the trust relationship for the OIDC provider’s IAM Role (OIDC_IAM_ROLE) created in the first section, and add in the desired service accounts.

  1. Choose the role in the console to open it for editing.

  2. Choose the Trust relationships tab, and then choose Edit trust relationship.

  3. Find the line that looks similar to the following:

    "$ISSUER_HOSTPATH:aud": "sts.amazonaws.com"
    
  4. Change the line to look like the following line. Replace aud with sub and replace KUBERNETES_SERVICE_ACCOUNT_NAMESPACE and KUBERNETES_SERVICE_ACCOUNT_NAME with the name of your Kubernetes service account and the Kubernetes namespace that the account exists in.

    "$ISSUER_HOSTPATH:sub": "system:serviceaccount:KUBERNETES_SERVICE_ACCOUNT_NAMESPACE:KUBERNETES_SERVICE_ACCOUNT_NAME"
    
  5. Refer to this doc for different ways of configuring one or multiple service accounts through the condition operators in the trust relationship.

  6. Choose Update Trust Policy to finish.

6.4 - etcd configuration

EKS Anywhere cluster yaml etcd specification reference

NOTE: Currently, the Unstacked etcd topology is not supported with the Amazon EKS Anywhere Bare Metal and Nutanix deployment options.

There are two types of etcd topologies for configuring a Kubernetes cluster:

  • Stacked: The etcd members and control plane components are colocated (run on the same node/machines)
  • Unstacked/External: With the unstacked or external etcd topology, etcd members have dedicated machines and are not colocated with control plane components

The unstacked etcd topology is recommended for an HA cluster for the following reasons:

  • External etcd topology decouples the control plane components and etcd members. So if a control plane-only node fails, or if there is a memory leak in a component like kube-apiserver, it won’t directly impact an etcd member.
  • Etcd is resource intensive, so it is safer to have dedicated nodes for etcd, since it could use more disk space or higher bandwidth. Having a separate etcd cluster for these reasons could ensure a more resilient HA setup.

EKS Anywhere supports both topologies. In order to configure a cluster with the unstacked/external etcd topology, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   clusterNetwork:
      pods:
         cidrBlocks:
            - 192.168.0.0/16
      services:
         cidrBlocks:
            - 10.96.0.0/12
      cniConfig:
         cilium: {}
   controlPlaneConfiguration:
      count: 1
      endpoint:
         host: ""
      machineGroupRef:
         kind: VSphereMachineConfig
         name: my-cluster-name-cp
   datacenterRef:
      kind: VSphereDatacenterConfig
      name: my-cluster-name
   # etcd configuration
   externalEtcdConfiguration:
      count: 3
      machineGroupRef:
        kind: VSphereMachineConfig
        name: my-cluster-name-etcd
   kubernetesVersion: "1.19"
   workerNodeGroupConfigurations:
      - count: 1
        machineGroupRef:
           kind: VSphereMachineConfig
           name: my-cluster-name
        name: md-0

externalEtcdConfiguration (under Cluster)

This field accepts any configuration parameters for running external etcd.

count (required)

This determines the number of etcd members in the cluster. The recommended number is 3.

machineGroupRef (required)

Refers to the Kubernetes object with provider-specific configuration for your etcd members (VSphereMachineConfig in the example above).

6.5 - AWS IAM Authenticator configuration

EKS Anywhere cluster yaml specification AWS IAM Authenticator reference

AWS IAM Authenticator support (optional)

EKS Anywhere can create clusters that support AWS IAM Authenticator-based api server authentication. In order to add IAM Authenticator support, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # IAM Authenticator support
   identityProviderRefs:
      - kind: AWSIamConfig
        name: aws-iam-auth-config
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
   name: aws-iam-auth-config
spec:
    awsRegion: ""
    backendMode:
        - ""
    mapRoles:
        - roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
          username: myKubernetesUsername
          groups:
          - ""
    mapUsers:
        - userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
          username: myKubernetesUsername
          groups:
          - ""
    partition: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the AWSIamConfig object with the configuration below.

awsRegion (required)

  • Description: awsRegion can be any region in the aws partition that the IAM roles exist in.
  • Type: string

backendMode (required)

  • Description: backendMode configures the IAM authenticator server’s backend mode (i.e. where to source mappings from). We support the EKSConfigMap and CRD modes supported by AWS IAM Authenticator, for more details refer to backendMode
  • Type: string

mapRoles, mapUsers (optional)

  • Description: When using EKSConfigMap backendMode, we recommend providing either mapRoles or mapUsers to set the IAM role mappings at the time of creation. This input is added to an EKS style ConfigMap. For more details refer to EKS IAM
  • Type: list object

    roleARN, userARN (required)

    • Description: IAM ARN to authenticate to the cluster. roleARN specifies an IAM role and userARN specifies an IAM user.
    • Type: string

    username (required)

    • Description: The Kubernetes username the IAM ARN is mapped to in the cluster. The ARN gets mapped to the Kubernetes cluster permissions associated with the username.
    • Type: string

    groups

    • Description: List of kubernetes user groups that the mapped IAM ARN is given permissions to.
    • Type: list string
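
    For example, a minimal sketch of a single role mapping (the ARN and username are placeholders; system:masters is shown only to illustrate granting a built-in Kubernetes group):

      mapRoles:
      - roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
        username: myKubernetesUsername
        groups:
        - system:masters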

partition

  • Description: This field is used to set the aws partition that the IAM roles are present in. Default value is aws.
  • Type: string

6.6 - OIDC configuration

EKS Anywhere cluster yaml specification OIDC reference

OIDC support (optional)

EKS Anywhere can create clusters that support api server OIDC authentication.

In order to add OIDC support, you need to configure your cluster by updating the configuration file to include the details below. The OIDC configuration can be added at cluster creation time, or introduced via a cluster upgrade in VMware and CloudStack.

This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   # OIDC support
   identityProviderRefs:
      - kind: OIDCConfig
        name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
   name: my-cluster-name
spec:
    clientId: ""
    groupsClaim: ""
    groupsPrefix: ""
    issuerUrl: "https://x"
    requiredClaims:
      - claim: ""
        value: ""
    usernameClaim: ""
    usernamePrefix: ""

identityProviderRefs (Under Cluster)

List of identity providers you want configured for the Cluster. This would include a reference to the OIDCConfig object with the configuration below.

clientId (required)

  • Description: ClientId defines the client ID for the OpenID Connect client
  • Type: string

groupsClaim (optional)

  • Description: GroupsClaim defines the name of a custom OpenID Connect claim for specifying user groups
  • Type: string

groupsPrefix (optional)

  • Description: GroupsPrefix defines a string to be prefixed to all groups to prevent conflicts with other authentication strategies
  • Type: string

issuerUrl (required)

  • Description: IssuerUrl defines the URL of the OpenID issuer, only HTTPS scheme will be accepted
  • Type: string

requiredClaims (optional)

List of RequiredClaim objects listed below. Only one is supported at this time.

requiredClaims[0] (optional)

  • Description: RequiredClaim defines a key=value pair that describes a required claim in the ID Token
    • claim
      • type: string
    • value
      • type: string
  • Type: object

usernameClaim (optional)

  • Description: UsernameClaim defines the OpenID claim to use as the user name. Note that claims other than the default (‘sub’) are not guaranteed to be unique and immutable
  • Type: string

usernamePrefix (optional)

  • Description: UsernamePrefix defines a string to be prefixed to all usernames. If not provided, username claims other than ‘email’ are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value ‘-’.
  • Type: string
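
As an illustration, a sketch of an OIDCConfig with the placeholder values filled in (every value here is an assumption; substitute the settings from your own identity provider):

    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    kind: OIDCConfig
    metadata:
      name: my-cluster-name
    spec:
      clientId: "my-client-id"
      groupsClaim: "groups"
      groupsPrefix: "oidc:"
      issuerUrl: "https://my-oidc-provider.example.com"
      usernameClaim: "email"
      usernamePrefix: "oidc:"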

6.7 - GitOpsConfig configuration

Configuration reference for GitOps cluster management.

GitOps Support (Optional)

EKS Anywhere can create clusters that support GitOps configuration management with Flux. In order to add GitOps support, you need to configure your cluster by specifying the configuration file with the gitOpsRef field when creating or upgrading the cluster. We currently support two types of configurations: FluxConfig and GitOpsConfig.

Flux Configuration

The flux configuration spec has three optional fields, regardless of the chosen git provider.

Flux Configuration Spec Details

systemNamespace (optional)

  • Description: Namespace in which to install the gitops components in your cluster. Defaults to flux-system
  • Type: string

clusterConfigPath (optional)

  • Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files. Defaults to the cluster name
  • Type: string

branch (optional)

  • Description: The branch to use when committing the configuration. Defaults to main
  • Type: string

EKS Anywhere currently supports two git providers for FluxConfig: Github and Git.

Github provider

Please note that for the Flux config to work successfully with the Github provider, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-github-flux-provider
    kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-github-flux-provider
  namespace: default
spec:
  systemNamespace: "my-alternative-flux-system-namespace"
  clusterConfigPath: "path-to-my-clusters-config"
  branch: "main"
  github:
    personal: true
    repository: myClusterGitopsRepo
    owner: myGithubUsername

---

github Configuration Spec Details

repository (required)

  • Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
  • Type: string

owner (required)

  • Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
  • Type: string

personal (optional)

  • Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (i.e. personal is false), the owner field will be used as the organization when authenticating to github.com
  • Default: true
  • Type: boolean

Git provider

Before you create a cluster using the Git provider, you will need to set and export the EKSA_GIT_KNOWN_HOSTS and EKSA_GIT_PRIVATE_KEY environment variables.

EKSA_GIT_KNOWN_HOSTS

EKS Anywhere uses the provided known hosts file to verify the identity of the git provider when connecting to it with SSH. The EKSA_GIT_KNOWN_HOSTS environment variable should be a path to a known hosts file containing entries for the git server to which you’ll be connecting.

For example, if you wanted to provide a known hosts file which allows you to connect to and verify the identity of github.com using a private key based on the key algorithm ecdsa, you can use the OpenSSH utility ssh-keyscan to obtain the known host entry used by github.com for the ecdsa key type. EKS Anywhere supports ecdsa, rsa, and ed25519 key types, which can be specified via the sshKeyAlgorithm field of the git provider config.

ssh-keyscan -t ecdsa github.com >> my_eksa_known_hosts

This will produce a file which contains known-hosts entries for the ecdsa key type supported by github.com, mapping the host to the key-type and public key.

github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=

EKS Anywhere will use the content of the file at the path EKSA_GIT_KNOWN_HOSTS to verify the identity of the remote git server, and the provided known hosts file must contain an entry for the remote host and key type.

EKSA_GIT_PRIVATE_KEY

The EKSA_GIT_PRIVATE_KEY environment variable should be a path to the private key file associated with a valid SSH public key registered with your Git provider. This key must have permission to both read from and write to your repository. The key can use the key algorithms rsa, ecdsa, and ed25519.

This key file must have restricted file permissions, allowing only the owner to read and write, such as octal permissions 600.

If your private key file is passphrase protected, you must also set EKSA_GIT_SSH_KEY_PASSPHRASE with that value.

This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-git-flux-provider
    kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: my-git-flux-provider
  namespace: default
spec:
  systemNamespace: "my-alternative-flux-system-namespace"
  clusterConfigPath: "path-to-my-clusters-config"
  branch: "main"
  git:
    repositoryUrl: ssh://git@github.com/myAccount/myClusterGitopsRepo.git
    sshKeyAlgorithm: ecdsa
---

git Configuration Spec Details

repositoryUrl (required)

NOTE: The repositoryUrl value for private SSH repositories is of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git. This may differ from the default SSH URL given by your provider. For example, the github.com user interface provides an SSH URL containing a : before the repository owner, rather than a /. Make sure to replace this : with a /, if present.

  • Description: The URL of an existing repository where EKS Anywhere will store your cluster configuration and sync it to the cluster. For private repositories, the SSH URL will be of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git
  • Type: string

sshKeyAlgorithm (optional)

  • Description: The SSH key algorithm of the private key specified via EKSA_GIT_PRIVATE_KEY. Defaults to ecdsa
  • Type: string

Supported SSH key algorithm types are ecdsa, rsa, and ed25519.

Be sure that this SSH key algorithm matches the private key file provided by EKSA_GIT_PRIVATE_KEY and that the known hosts entry for the key type is present in EKSA_GIT_KNOWN_HOSTS.

GitOps Configuration

Please note that for the GitOps config to work successfully the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT. This is a generic template with detailed descriptions below for reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
  namespace: default
spec:
  ...
  #GitOps Support
  gitOpsRef:
    name: my-gitops
    kind: GitOpsConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: my-gitops
  namespace: default
spec:
  flux:
    github:
      personal: true
      repository: myClusterGitopsRepo
      owner: myGithubUsername
      fluxSystemNamespace: ""
      clusterConfigPath: ""
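
For example, before running EKS Anywhere commands that use this GitOps config, export the token (the value shown is a placeholder):

export EKSA_GITHUB_TOKEN=ghp_MyGithubPersonalAccessToken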

GitOps Configuration Spec Details

flux (required)

  • Description: Flux is our supported GitOps provider; flux is the only supported value.
  • Type: object

Flux Configuration Spec Details

github (required)

  • Description: github is the only currently supported git provider. This defines your github configuration to be used by EKS Anywhere and flux.
  • Type: object

github Configuration Spec Details

repository (required)

  • Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
  • Type: string

owner (required)

  • Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
  • Type: string

personal (optional)

  • Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organization repository (i.e. personal is false), the owner field will be used as the organization when authenticating to github.com.
  • Default: true
  • Type: boolean

clusterConfigPath (optional)

  • Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files.
  • Default: clusters/$MANAGEMENT_CLUSTER_NAME
  • Type: string

fluxSystemNamespace (optional)

  • Description: Namespace in which to install the gitops components in your cluster.
  • Default: flux-system.
  • Type: string

branch (optional)

  • Description: The branch to use when committing the configuration.
  • Default: main
  • Type: string

6.8 - Proxy configuration

EKS Anywhere cluster yaml specification proxy configuration reference

Proxy support (optional)

You can configure EKS Anywhere to use a proxy to connect to the Internet. This is the generic template with proxy configuration for your reference:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
   ...
   proxyConfiguration:
      httpProxy: http-proxy-ip:port
      httpsProxy: https-proxy-ip:port
      noProxy:
      - list of no proxy endpoints

Configuring Docker daemon

Given the above configuration, EKS Anywhere will route its traffic through the proxy. However, to successfully use EKS Anywhere, you will also need to ensure your Docker daemon is configured to use the proxy.

This generally means updating your daemon to launch with the HTTPS_PROXY, HTTP_PROXY, and NO_PROXY environment variables.

For an example of how to do this with systemd, please see Docker’s documentation on configuring the daemon to use a proxy.
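
A minimal sketch of such a systemd drop-in file, assuming the same proxy endpoints used in the template above:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://http-proxy-ip:port"
Environment="HTTPS_PROXY=http://https-proxy-ip:port"
Environment="NO_PROXY=localhost,127.0.0.1,no-proxy-domain.com"

After creating or editing the file, reload systemd and restart the Docker daemon:

sudo systemctl daemon-reload
sudo systemctl restart docker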

Configuring EKS Anywhere proxy without config file

For commands using a cluster config file, EKS Anywhere will derive its proxy config from the cluster configuration file.

However, for commands that do not utilize a cluster config file, you can set the following environment variables:

export HTTPS_PROXY=https-proxy-ip:port
export HTTP_PROXY=http-proxy-ip:port
export NO_PROXY=no-proxy-domain.com,another-domain.com,localhost

Proxy Configuration Spec Details

proxyConfiguration (required)

  • Description: top level key; required to use a proxy.
  • Type: object

httpProxy (required)

  • Description: HTTP proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpProxy: 192.168.0.1:3218

httpsProxy (required)

  • Description: HTTPS proxy to use to connect to the internet; must be in the format IP:port
  • Type: string
  • Example: httpsProxy: 192.168.0.1:3218

noProxy (optional)

  • Description: list of endpoints that should not be routed through the proxy; can be an IP, CIDR block, or a domain name
  • Type: list of strings
  • Example
  noProxy:
   - localhost
   - 192.168.0.1
   - 192.168.0.0/16
   - .example.com

6.9 - Registry Mirror configuration

EKS Anywhere cluster yaml specification for registry mirror configuration

Registry Mirror Support (optional)

You can configure EKS Anywhere to use a private registry as a mirror for pulling the required images.

The following cluster spec shows an example of how to configure registry mirror:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
  ...
  registryMirrorConfiguration:
    endpoint: <private registry IP or hostname>
    port: <private registry port>
    ociNamespaces:
      - registry: <upstream registry IP or hostname>
        namespace: <namespace in private registry>
      ...
    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----        

Registry Mirror Configuration Spec Details

registryMirrorConfiguration (required)

  • Description: top level key; required to use a private registry.
  • Type: object

endpoint (required)

  • Description: IP address or hostname of the private registry for pulling images
  • Type: string
  • Example: endpoint: 192.168.0.1

port (optional)

  • Description: port for the private registry. This is an optional field. If a port is not specified, the default HTTPS port 443 is used
  • Type: string
  • Example: port: 443

ociNamespaces (optional)

  • Description: mapping from upstream registries to locations in the private registry. When specified, artifacts pulled from an upstream registry will be stored in its corresponding location/namespace in the private registry. The target location/namespace must already exist.
  • Type: array
  • Example:
    ociNamespaces:
      - registry: "public.ecr.aws"
        namespace: "eks-anywhere"
      - registry: "783794618700.dkr.ecr.us-west-2.amazonaws.com"
        namespace: "curated-packages"
    

caCertContent (optional)

  • Description: Certificate Authority (CA) certificate for the private registry. When using self-signed certificates, it is necessary to pass this parameter in the cluster spec. This must be the individual public CA certificate used to sign the registry certificate. It will be added to the cluster nodes so that they are able to pull images from the private registry.

    It is also possible to configure caCertContent by exporting an environment variable:
    export EKSA_REGISTRY_MIRROR_CA="/path/to/certificate-file"

  • Type: string

  • Example:

    caCertContent: |
      -----BEGIN CERTIFICATE-----
      MIIF1DCCA...
      ...
      es6RXmsCj...
      -----END CERTIFICATE-----  
    

authenticate (optional)

  • Description: optional field to authenticate with a private registry. When using private registries that require authentication, it is necessary to set this parameter to true in the cluster spec.
  • Type: boolean
  • Example: authenticate: true

To use an authenticated private registry, please also set the following environment variables:

export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

insecureSkipVerify (optional)

  • Description: optional field to skip the registry certificate verification. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment. Currently only supported for Ubuntu OS.
  • Type: boolean

Import images into a private registry

You can use the download images and import images commands to pull images from public.ecr.aws and push them to your private registry. The copy packages command must be used if you want to copy EKS Anywhere Curated Packages to your registry mirror. The download images command also pulls the cilium chart from public.ecr.aws and pushes it to the registry mirror. It requires the registry credentials to perform a login. Set the following environment variables for the login:

export REGISTRY_ENDPOINT=<registry_endpoint>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>

Download the EKS Anywhere artifacts to get the EKS Anywhere bundle:

eksctl anywhere download artifacts
tar -xzf eks-anywhere-downloads.tar.gz

Download and import EKS Anywhere images:

eksctl anywhere download images -o eks-anywhere-images.tar
docker login https://${REGISTRY_ENDPOINT}
...
eksctl anywhere import images -i eks-anywhere-images.tar --bundles eks-anywhere-downloads/bundle-release.yaml --registry ${REGISTRY_ENDPOINT}

Use the EKS Anywhere bundle to copy packages:

eksctl anywhere copy packages --bundle ./eks-anywhere-downloads/bundle-release.yaml --dst-cert rootCA.pem ${REGISTRY_ENDPOINT}

Docker configurations

If your registry uses self-signed certificates, it is necessary to add the private registry’s CA certificate to the list of CA certificates on the admin machine.

For Linux, you can place your certificate here: /etc/docker/certs.d/<private-registry-endpoint>/ca.crt
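
For example, assuming a registry reachable at private-registry.local (the hypothetical endpoint also used in the Registry configurations section below) and its CA certificate saved locally as ca.crt:

sudo mkdir -p /etc/docker/certs.d/private-registry.local
sudo cp ca.crt /etc/docker/certs.d/private-registry.local/ca.crt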

For Mac, you can follow this guide to add the certificate to your keychain: https://docs.docker.com/desktop/mac/#add-tls-certificates

Registry configurations

Depending on what registry you decide to use, you will need to create the following projects:

bottlerocket
eks-anywhere
eks-distro
isovalent
cilium-chart

For example, if a registry is available at private-registry.local, then the following projects will have to be created:

https://private-registry.local/bottlerocket
https://private-registry.local/eks-anywhere
https://private-registry.local/eks-distro
https://private-registry.local/isovalent
https://private-registry.local/cilium-chart

6.10 - Package controller configuration

EKS Anywhere cluster yaml specification for package controller configuration

Package Controller Configuration (optional)

You can configure the EKS Anywhere package controller.

The following cluster spec shows an example of how to configure the package controller:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
   name: my-cluster-name
spec:
  ...
  packages:
    disable: false
    controller:
      resources:
        requests:
          cpu: 100m
          memory: 50Mi
        limits:
          cpu: 750m
          memory: 450Mi


Package Controller Configuration Spec Details

packages (optional)

  • Description: Top level key; required to configure the package controller.
  • Type: object

packages.disable (optional)

  • Description: Disable the package controller.
  • Type: bool
  • Example: disable: true

packages.controller (optional)

  • Description: Configuration for the package controller.
  • Type: object

packages.controller.resources (optional)

  • Description: Resources for the package controller.
  • Type: object

packages.controller.resources.limits (optional)

  • Description: Resource limits.
  • Type: object

packages.controller.resources.limits.cpu (optional)

  • Description: CPU limit.
  • Type: string

packages.controller.resources.limits.memory (optional)

  • Description: Memory limit.
  • Type: string

packages.controller.resources.requests (optional)

  • Description: Requested resources.
  • Type: object

packages.controller.resources.requests.cpu (optional)

  • Description: Requested CPU.
  • Type: string

packages.controller.resources.requests.memory (optional)

  • Description: Requested memory.
  • Type: string

packages.cronjob (optional)

  • Description: Configuration for the package controller cron job.
  • Type: object

packages.cronjob.disable (optional)

  • Description: Disable the cron job.
  • Type: bool
  • Example: disable: true

packages.cronjob.resources (optional)

  • Description: Resources for the cron job.
  • Type: object

packages.cronjob.resources.limits (optional)

  • Description: Resource limits.
  • Type: object

packages.cronjob.resources.limits.cpu (optional)

  • Description: CPU limit.
  • Type: string

packages.cronjob.resources.limits.memory (optional)

  • Description: Memory limit.
  • Type: string

packages.cronjob.resources.requests (optional)

  • Description: Requested resources.
  • Type: object

packages.cronjob.resources.requests.cpu (optional)

  • Description: Requested CPU.
  • Type: string

packages.cronjob.resources.requests.memory (optional)

  • Description: Requested memory.
  • Type: string