Optional configuration
- 1: Autoscaling configuration
- 2: CNI plugin configuration
- 3: IAM Roles for Service Accounts configuration
- 4: etcd configuration
- 5: AWS IAM Authenticator configuration
- 6: OIDC configuration
- 7: GitOpsConfig configuration
- 8: Proxy configuration
- 9: Registry Mirror configuration
- 10: Package controller configuration
1 - Autoscaling configuration
Cluster Autoscaling (Optional)
Cluster Autoscaler configuration in EKS Anywhere cluster spec
EKS Anywhere supports autoscaling worker node groups using the Kubernetes Cluster Autoscaler's clusterapi cloudProvider.
- Configure a worker node group to be picked up by a cluster autoscaler deployment by adding an autoscalingConfiguration block to the workerNodeGroupConfiguration:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  workerNodeGroupConfigurations:
    - autoscalingConfiguration:
        minCount: 1
        maxCount: 5
      machineGroupRef:
        kind: VSphereMachineConfig
        name: worker-machine-a
      name: md-0
    - count: 1
      autoscalingConfiguration:
        minCount: 1
        maxCount: 3
      machineGroupRef:
        kind: VSphereMachineConfig
        name: worker-machine-b
      name: md-1
Note that if no count is specified, it will default to the minCount value.
EKS Anywhere will automatically apply the following annotations to your MachineDeployment objects:
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: <minCount>
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: <maxCount>
After deploying the Kubernetes Cluster Autoscaler from upstream or as a curated package , the deployment will pick up your MachineDeployment and scale the nodes as per your min and max count values.
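To confirm that a worker node group was picked up, you can check for these annotations on the corresponding MachineDeployment objects on the management cluster; this is a sketch assuming the default eksa-system namespace:
kubectl get machinedeployments -n eksa-system -o yaml | grep cluster-api-autoscaler-node-group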
Cluster Autoscaler Deployment Topologies
The Kubernetes Cluster Autoscaler can only scale a single cluster per deployment.
This means that each cluster you want to scale will need its own cluster autoscaler deployment.
We support three deployment topologies:
1. Cluster Autoscaler deployed in the management cluster to autoscale the management cluster itself
2. Cluster Autoscaler deployed in the management cluster to autoscale a remote workload cluster
3. Cluster Autoscaler deployed in the workload cluster to autoscale the workload cluster itself
If your cluster architecture supports management clusters with resources to run additional workloads, you may want to consider using deployment topologies (1) and (2). Instructions for using these topologies can be found here .
If your management clusters are small, though, you may want to follow deployment topology (3) and deploy the cluster autoscaler to run in a workload cluster .
2 - CNI plugin configuration
Specifying CNI Plugin in EKS Anywhere cluster spec
EKS Anywhere currently supports two CNI plugins: Cilium and Kindnet. Only one of them can be selected
for a cluster, and the plugin cannot be changed once the cluster is created.
Up until the 0.7.x releases, the plugin had to be specified using the cni field on the cluster spec.
Starting with release 0.8, the plugin should be specified using the new cniConfig field as follows:
- For selecting Cilium as the CNI plugin:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
    cniConfig:
      cilium: {}
EKS Anywhere selects this as the default plugin when generating a cluster config.
- Or for selecting Kindnetd as the CNI plugin:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
    cniConfig:
      kindnetd: {}
NOTE: EKS Anywhere allows specifying only 1 plugin for a cluster and does not allow switching the plugins after the cluster is created.
Policy Configuration options for Cilium plugin
Cilium accepts policy enforcement modes from the users to determine the allowed traffic between pods. The allowed values for this mode are: default, always and never.
Please refer to the official Cilium documentation for more details on how each mode affects the communication within the cluster, and choose a mode accordingly.
You can choose to not set this field so that Cilium will be launched with the default mode.
Starting with release 0.8, Cilium's policy enforcement mode can be set through the cluster spec as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
cilium:
policyEnforcementMode: "always"
Please note that if the always mode is selected, all communication between pods is blocked unless NetworkPolicy objects allowing communication are created.
In order to ensure that the cluster gets created successfully, EKS Anywhere will create the required NetworkPolicy objects for all its core components. But it is up to the user to create the NetworkPolicy objects needed for the user workloads once the cluster is created.
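For example, a minimal NetworkPolicy that allows pods in a workload namespace to receive traffic only from pods in the same namespace might look like the following; the my-app namespace is hypothetical and the selectors should be adjusted to your workloads:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: my-app   # hypothetical workload namespace
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
  policyTypes:
    - Ingress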
Network policies created by EKS Anywhere for “always” mode
As mentioned above, if Cilium is configured with policyEnforcementMode set to always, EKS Anywhere creates NetworkPolicy objects to enable communication between its core components. EKS Anywhere will create NetworkPolicy resources in the following namespaces allowing all ingress/egress traffic by default:
- kube-system
- eksa-system
- All core Cluster API namespaces:
- capi-system
- capi-kubeadm-bootstrap-system
- capi-kubeadm-control-plane-system
- etcdadm-bootstrap-provider-system
- etcdadm-controller-system
- cert-manager
- Infrastructure provider’s namespace (for instance, capd-system OR capv-system)
- If Gitops is enabled, then the gitops namespace (flux-system by default)
This is the NetworkPolicy that will be created in these namespaces for the cluster:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-ingress-egress
namespace: test
spec:
podSelector: {}
ingress:
- {}
egress:
- {}
policyTypes:
- Ingress
- Egress
Switching the Cilium policy enforcement mode
The policy enforcement mode for Cilium can be changed as a part of cluster upgrade through the cli upgrade command.
- Switching to always mode: When switching from default/never to always mode, EKS Anywhere will create the required NetworkPolicy objects for its core components (listed above). This will ensure that the cluster gets upgraded successfully. But it is up to the user to create the NetworkPolicy objects required for the user workloads.
- Switching from always mode: When switching from always to default mode, EKS Anywhere will not delete any of the existing NetworkPolicy objects, including the ones required for EKS Anywhere components (listed above). The user must delete NetworkPolicy objects as needed.
Node IPs configuration option
Starting with release v0.10, the node-cidr-mask-size flag for the Kubernetes controller manager (kube-controller-manager) is configurable via the EKS Anywhere cluster spec. Because clusterNetwork.nodes is an optional field, it is not generated by the generate clusterconfig command. This nodes block will need to be manually added to the cluster spec under the clusterNetwork section:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
cilium: {}
nodes:
cidrMaskSize: 24
If the user does not specify the clusterNetwork.nodes field in the cluster yaml spec, the value for this flag defaults to 24 for IPv4.
Please note that this mask size needs to be greater than the pods CIDR mask size. In the above spec, the pod CIDR mask size is 16 and the node CIDR mask size is 24. This gives the cluster 256 blocks of /24 networks. For example, node1 will get 192.168.0.0/24, node2 will get 192.168.1.0/24, node3 will get 192.168.2.0/24 and so on.
To support more than 256 nodes, the cluster CIDR block needs to be large enough, and the node CIDR mask size small enough, to provide that many node subnets. For instance, to support 1024 nodes, a user can do either of the following:
- Set the pods cidr blocks to 192.168.0.0/16 and the node cidr mask size to 26
- Set the pods cidr blocks to 192.168.0.0/15 and the node cidr mask size to 25
Please note that the node-cidr-mask-size needs to leave each node a subnet large enough to accommodate the number of pods you want to run on it.
A mask size of 24 will give enough IP addresses for about 250 pods per node, whereas a mask size of 26 will only give you about 60 IPs.
This is an immutable field, and the value can’t be updated once the cluster has been created.
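For example, the second option above could be expressed in the cluster spec as follows; this is only a sketch and the CIDR blocks shown are illustrative:
clusterNetwork:
  pods:
    cidrBlocks:
      - 192.168.0.0/15
  services:
    cidrBlocks:
      - 10.96.0.0/12
  cniConfig:
    cilium: {}
  nodes:
    cidrMaskSize: 25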
3 - IAM Roles for Service Accounts configuration
IAM Role for Service Account on EKS Anywhere clusters with self-hosted signing keys
IAM Roles for Service Accounts (IRSA) enables applications running in clusters to authenticate with AWS services using IAM roles. The current solution for leveraging this in EKS Anywhere involves creating your own OIDC provider for the cluster, and hosting your cluster's public service account signing key. The public keys along with the OIDC discovery document should be hosted somewhere that AWS STS can discover them. The steps below assume the keys will be hosted on a publicly accessible S3 bucket. Refer to this doc to ensure that the S3 bucket is publicly accessible.
The steps below are based on the guide for configuring IRSA for DIY Kubernetes, with modifications specific to EKS Anywhere’s cluster provisioning workflow. The main modification is the process of generating the keys.json document. As per the original guide, the user has to create the service account signing keys, and then use that to create the keys.json document prior to cluster creation. This order is reversed for EKS Anywhere clusters, so you will create the cluster first, and then retrieve the service account signing key generated by the cluster, and use it to create the keys.json document. The sections below show how to do this in detail.
Create an OIDC provider and make its discovery document publicly accessible
- Create an S3 bucket to host the public signing keys and OIDC discovery document for your cluster as per this section . Ensure you follow all the steps and save the $HOSTNAME and $ISSUER_HOSTPATH.
- Create the OIDC discovery document as follows:
cat <<EOF > discovery.json
{
    "issuer": "https://$ISSUER_HOSTPATH",
    "jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
    "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
    "response_types_supported": [
        "id_token"
    ],
    "subject_types_supported": [
        "public"
    ],
    "id_token_signing_alg_values_supported": [
        "RS256"
    ],
    "claims_supported": [
        "sub",
        "iss"
    ]
}
EOF
- Upload it to the publicly accessible S3 bucket:
aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
- Create an OIDC provider for your cluster. Set the Provider URL to https://$ISSUER_HOSTPATH, and the audience to sts.amazonaws.com.
- Note down the Provider field of the OIDC provider after it is created.
- Assign an IAM role to this OIDC provider.
  - From the IAM console, select and click on the OIDC provider created from above, and click on Assign role at the top right.
  - Select Create a new role.
  - In the Select type of trusted entity section, choose Web identity.
  - In the Choose a web identity provider section:
    - For Identity provider, choose the auto selected Identity Provider URL for your cluster.
    - For Audience, choose sts.amazonaws.com.
  - Choose Next: Permissions.
  - In the Attach Policy section, select the IAM policy that has the permissions that you want your applications running in the pods to use.
  - Continue with the next sections of adding tags if desired and a suitable name for this role, and create the role.
- Below is a sample trust policy of the IAM role for your pods. Remember to replace Account ID and ISSUER_HOSTPATH with the required values.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:oidc-provider/ISSUER_HOSTPATH"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "__doc_comment": "scope the role to the service account (optional)",
                "StringEquals": {
                    "ISSUER_HOSTPATH:sub": "system:serviceaccount:default:my-serviceaccount"
                },
                "__doc_comment": "OR scope the role to a namespace (optional)",
                "StringLike": {
                    "ISSUER_HOSTPATH/CLUSTER_ID:sub": ["system:serviceaccount:default:*","system:serviceaccount:observability:*"]
                }
            }
        }
    ]
}
- After the role is created, note down the name of this IAM Role as OIDC_IAM_ROLE.
- Once the cluster is created, you can create service accounts and grant them this role by editing the trust relationship of this role. You can use the StringLike condition to add the required service accounts, as mentioned in the sample above. Please also refer to the section Configure the trust relationship for the OIDC provider's IAM Role .
Create the EKS Anywhere cluster
- When creating the EKS Anywhere cluster, you need to configure the kube-apiserver's service-account-issuer flag so it can issue and mount projected service account tokens in pods. For this, use the value obtained in the first section for $ISSUER_HOSTPATH as the service-account-issuer. Configure the kube-apiserver by setting this value through the EKS Anywhere cluster spec as follows:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: my-cluster-name
spec:
  podIamConfig:
    serviceAccountIssuer: https://$ISSUER_HOSTPATH
Set the remaining fields in the cluster spec as required and create the cluster using the eksctl anywhere create cluster command.
Generate keys.json and make it publicly accessible
- The cluster provisioning workflow generates a pair of service account signing keys. Retrieve the public signing key generated and used by the cluster, and create a keys.json document containing the public signing key.
git clone https://github.com/aws/amazon-eks-pod-identity-webhook
cd amazon-eks-pod-identity-webhook
kubectl get secret ${CLUSTER_NAME}-sa -n eksa-system -o jsonpath={.data.tls\\.crt} | base64 --decode > ${CLUSTER_NAME}-sa.pub
go run ./hack/self-hosted/main.go -key ${CLUSTER_NAME}-sa.pub | jq '.keys += [.keys[0]] | .keys[1].kid = ""' > keys.json
- Upload the keys.json document to the S3 bucket.
aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json
Deploy pod identity webhook
- After hosting the service account public signing key and OIDC discovery documents, the applications running in pods can start accessing the desired AWS resources, as long as the pod is mounted with the right service account tokens. This part of configuring the pods with the right service account tokens and env vars is automated by the amazon pod identity webhook . Once the webhook is deployed, it mutates any pods launched using service accounts annotated with eks.amazonaws.com/role-arn.
- Clone amazon-eks-pod-identity-webhook if not done already.
- Set the $KUBECONFIG env var to the path of the EKS Anywhere cluster's kubeconfig file.
- Create my-service-account.yaml with OIDC_IAM_ROLE and other annotations as mentioned in the sample below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-serviceaccount
  namespace: default
  annotations:
    # set this with value of OIDC_IAM_ROLE
    eks.amazonaws.com/role-arn: "arn:aws:iam::111122223333:role/s3-reader"
    # optional: Defaults to "sts.amazonaws.com" if not set
    eks.amazonaws.com/audience: "sts.amazonaws.com"
    # optional: When set to "true", adds AWS_STS_REGIONAL_ENDPOINTS env var
    # to containers
    eks.amazonaws.com/sts-regional-endpoints: "true"
    # optional: Defaults to 86400 for expirationSeconds if not set
    # Note: This value can be overwritten if specified in the pod
    # annotation as shown in the next step.
    eks.amazonaws.com/token-expiration: "86400"
- Run the following command to apply the manifests for the amazon-eks-pod-identity-webhook. The image used here will be pulled from docker.io. Optionally, the image can be imported into (or proxied through) your private registry. Change the IMAGE= argument here to your private registry if needed.
make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest
- Finally, apply the my-service-account.yaml file to create your service account.
kubectl apply -f my-service-account.yaml
- You can validate IRSA by using the test steps mentioned here . Ensure the awscli test pod is deployed in the same namespace as the ServiceAccount annotated for the pod identity webhook.
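For instance, a minimal test pod that exercises the service account created above might look like the following sketch; the pod name and image tag are only illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: awscli-irsa-test
  namespace: default   # same namespace as my-serviceaccount
spec:
  serviceAccountName: my-serviceaccount
  restartPolicy: Never
  containers:
    - name: awscli
      image: amazon/aws-cli:latest
      # prints the assumed IAM role identity if IRSA is working
      command: ["aws", "sts", "get-caller-identity"]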
Configure the trust relationship for the OIDC provider’s IAM Role
In order to grant certain service accounts access to the desired AWS resources, edit the trust relationship for the OIDC provider’s IAM Role (OIDC_IAM_ROLE
) created in the first section, and add in the desired service accounts.
- Choose the role in the console to open it for editing.
- Choose the Trust relationships tab, and then choose Edit trust relationship.
- Find the line that looks similar to the following:
"$ISSUER_HOSTPATH:aud": "sts.amazonaws.com"
- Change the line to look like the following line. Replace aud with sub, and replace KUBERNETES_SERVICE_ACCOUNT_NAMESPACE and KUBERNETES_SERVICE_ACCOUNT_NAME with the Kubernetes namespace that the account exists in and the name of your Kubernetes service account.
"$ISSUER_HOSTPATH:sub": "system:serviceaccount:KUBERNETES_SERVICE_ACCOUNT_NAMESPACE:KUBERNETES_SERVICE_ACCOUNT_NAME"
- Refer to this doc for different ways of configuring one or multiple service accounts through the condition operators in the trust relationship.
- Choose Update Trust Policy to finish.
4 - etcd configuration
NOTE: Currently, the Unstacked etcd topology is not supported with the Amazon EKS Anywhere Bare Metal and Nutanix deployment options.
Unstacked etcd topology (recommended)
There are two types of etcd topologies for configuring a Kubernetes cluster:
- Stacked: The etcd members and control plane components are colocated (run on the same node/machines)
- Unstacked/External: With the unstacked or external etcd topology, etcd members have dedicated machines and are not colocated with control plane components
The unstacked etcd topology is recommended for a HA cluster for the following reasons:
- External etcd topology decouples the control plane components and etcd member. So if a control plane-only node fails, or if there is a memory leak in a component like kube-apiserver, it won’t directly impact an etcd member.
- Etcd is resource intensive, so it is safer to have dedicated nodes for etcd, since it could use more disk space or higher bandwidth. Having a separate etcd cluster for these reasons could ensure a more resilient HA setup.
EKS Anywhere supports both topologies. In order to configure a cluster with the unstacked/external etcd topology, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
services:
cidrBlocks:
- 10.96.0.0/12
cniConfig:
cilium: {}
controlPlaneConfiguration:
count: 1
endpoint:
host: ""
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-name-cp
datacenterRef:
kind: VSphereDatacenterConfig
name: my-cluster-name
# etcd configuration
externalEtcdConfiguration:
count: 3
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-name-etcd
kubernetesVersion: "1.19"
workerNodeGroupConfigurations:
- count: 1
machineGroupRef:
kind: VSphereMachineConfig
name: my-cluster-name
name: md-0
externalEtcdConfiguration (under Cluster)
This field accepts any configuration parameters for running external etcd.
count (required)
This determines the number of etcd members in the cluster. The recommended number is 3.
machineGroupRef (required)
- Description: Refers to the Kubernetes object with provider specific configuration for the etcd machines.
5 - AWS IAM Authenticator configuration
AWS IAM Authenticator support (optional)
EKS Anywhere can create clusters that support AWS IAM Authenticator-based api server authentication. In order to add IAM Authenticator support, you need to configure your cluster by updating the configuration file before creating the cluster. This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
# IAM Authenticator support
identityProviderRefs:
- kind: AWSIamConfig
name: aws-iam-auth-config
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
name: aws-iam-auth-config
spec:
awsRegion: ""
backendMode:
- ""
mapRoles:
- roleARN: arn:aws:iam::XXXXXXXXXXXX:role/myRole
username: myKubernetesUsername
groups:
- ""
mapUsers:
- userARN: arn:aws:iam::XXXXXXXXXXXX:user/myUser
username: myKubernetesUsername
groups:
- ""
partition: ""
identityProviderRefs (Under Cluster)
List of identity providers you want configured for the Cluster.
This would include a reference to the AWSIamConfig
object with the configuration below.
awsRegion (required)
- Description: awsRegion can be any region in the aws partition that the IAM roles exist in.
- Type: string
backendMode (required)
- Description: backendMode configures the IAM authenticator server’s backend mode (i.e. where to source mappings from). We support EKSConfigMap and CRD modes supported by AWS IAM Authenticator, for more details refer to backendMode
- Type: string
mapRoles, mapUsers (recommended for EKSConfigMap backend)
- Description: When using EKSConfigMap backendMode, we recommend providing either mapRoles or mapUsers to set the IAM role mappings at the time of creation. This input is added to an EKS style ConfigMap. For more details refer to EKS IAM
- Type: list object
roleARN, userARN (required)
- Description: IAM ARN to authenticate to the cluster. roleARN specifies an IAM role and userARN specifies an IAM user.
- Type: string
username (required)
- Description: The Kubernetes username the IAM ARN is mapped to in the cluster. The ARN gets mapped to the Kubernetes cluster permissions associated with the username.
- Type: string
groups
- Description: List of kubernetes user groups that the mapped IAM ARN is given permissions to.
- Type: list string
partition
- Description: This field is used to set the aws partition that the IAM roles are present in. Default value is aws.
- Type: string
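Putting the fields together, a filled-in AWSIamConfig using the EKSConfigMap backend might look like the following sketch; the region, account ID, role name, username, and group are placeholders:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: AWSIamConfig
metadata:
  name: aws-iam-auth-config
spec:
  awsRegion: us-west-2
  backendMode:
    - EKSConfigMap
  mapRoles:
    - roleARN: arn:aws:iam::111122223333:role/my-admin-role   # placeholder role
      username: my-admin
      groups:
        - system:masters
  partition: aws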
6 - OIDC configuration
OIDC support (optional)
EKS Anywhere can create clusters that support api server OIDC authentication.
In order to add OIDC support, you need to configure your cluster by updating the configuration file to include the details below. The OIDC configuration can be added at cluster creation time, or introduced via a cluster upgrade in VMware and CloudStack.
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
# OIDC support
identityProviderRefs:
- kind: OIDCConfig
name: my-cluster-name
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: OIDCConfig
metadata:
name: my-cluster-name
spec:
clientId: ""
groupsClaim: ""
groupsPrefix: ""
issuerUrl: "https://x"
requiredClaims:
- claim: ""
value: ""
usernameClaim: ""
usernamePrefix: ""
identityProviderRefs (Under Cluster)
List of identity providers you want configured for the Cluster.
This would include a reference to the OIDCConfig
object with the configuration below.
clientId (required)
- Description: ClientId defines the client ID for the OpenID Connect client
- Type: string
groupsClaim (optional)
- Description: GroupsClaim defines the name of a custom OpenID Connect claim for specifying user groups
- Type: string
groupsPrefix (optional)
- Description: GroupsPrefix defines a string to be prefixed to all groups to prevent conflicts with other authentication strategies
- Type: string
issuerUrl (required)
- Description: IssuerUrl defines the URL of the OpenID issuer, only HTTPS scheme will be accepted
- Type: string
requiredClaims (optional)
List of RequiredClaim objects listed below. Only one is supported at this time.
requiredClaims[0] (optional)
- Description: RequiredClaim defines a key=value pair that describes a required claim in the ID Token
  - claim
    - type: string
  - value
    - type: string
- Type: object
usernameClaim (optional)
- Description: UsernameClaim defines the OpenID claim to use as the user name. Note that claims other than the default (‘sub’) are not guaranteed to be unique and immutable
- Type: string
usernamePrefix (optional)
- Description: UsernamePrefix defines a string to be prefixed to all usernames. If not provided, username claims other than ‘email’ are prefixed by the issuer URL to avoid clashes. To skip any prefixing, provide the value ‘-’.
- Type: string
7 - GitOpsConfig configuration
GitOps Support (Optional)
EKS Anywhere can create clusters that support GitOps configuration management with Flux.
In order to add GitOps support, you need to configure your cluster by specifying the configuration file with the gitOpsRef field when creating or upgrading the cluster.
We currently support two types of configurations: FluxConfig and GitOpsConfig.
Flux Configuration
The flux configuration spec has three optional fields, regardless of the chosen git provider.
Flux Configuration Spec Details
systemNamespace (optional)
- Description: Namespace in which to install the gitops components in your cluster. Defaults to
flux-system
- Type: string
clusterConfigPath (optional)
- Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files. Defaults to the cluster name
- Type: string
branch (optional)
- Description: The branch to use when committing the configuration. Defaults to
main
- Type: string
EKS Anywhere currently supports two git providers for FluxConfig: Github and Git.
Github provider
Please note that for the Flux config to work successfully with the Github provider, the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT .
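For example (the token value shown is only a placeholder):
export EKSA_GITHUB_TOKEN=ghp_MyGithubPersonalAccessToken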
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
namespace: default
spec:
...
#GitOps Support
gitOpsRef:
name: my-github-flux-provider
kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
name: my-github-flux-provider
namespace: default
spec:
systemNamespace: "my-alternative-flux-system-namespace"
clusterConfigPath: "path-to-my-clusters-config"
branch: "main"
github:
personal: true
repository: myClusterGitopsRepo
owner: myGithubUsername
---
github Configuration Spec Details
repository (required)
- Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
- Type: string
owner (required)
- Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
- Type: string
personal (optional)
- Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (e.g. personal is false) the owner field will be used as the organization when authenticating to github.com
- Default: true
- Type: boolean
Git provider
Before you create a cluster using the Git provider, you will need to set and export the EKSA_GIT_KNOWN_HOSTS
and EKSA_GIT_PRIVATE_KEY
environment variables.
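For example (the paths shown are placeholders for files in your environment):
export EKSA_GIT_KNOWN_HOSTS=/home/myUser/.ssh/my_eksa_known_hosts
export EKSA_GIT_PRIVATE_KEY=/home/myUser/.ssh/id_ecdsa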
EKSA_GIT_KNOWN_HOSTS
EKS Anywhere uses the provided known hosts file to verify the identity of the git provider when connecting to it with SSH.
The EKSA_GIT_KNOWN_HOSTS
environment variable should be a path to a known hosts file containing entries for the git server to which you’ll be connecting.
For example, if you wanted to provide a known hosts file which allows you to connect to and verify the identity of github.com using a private key based on the key algorithm ecdsa, you can use the OpenSSH utility ssh-keyscan
to obtain the known host entry used by github.com for the ecdsa
key type.
EKS Anywhere supports ecdsa
, rsa
, and ed25519
key types, which can be specified via the sshKeyAlgorithm
field of the git provider config.
ssh-keyscan -t ecdsa github.com >> my_eksa_known_hosts
This will produce a file which contains known-hosts entries for the ecdsa
key type supported by github.com, mapping the host to the key-type and public key.
github.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmKSENjQEezOmxkZMy7opKgwFB9nkt5YRrYMjNuG5N87uRgg6CLrbo5wAdT/y6v0mKV0U2w0WZ2YB/++Tpockg=
EKS Anywhere will use the content of the file at the path EKSA_GIT_KNOWN_HOSTS
to verify the identity of the remote git server, and the provided known hosts file must contain an entry for the remote host and key type.
EKSA_GIT_PRIVATE_KEY
The EKSA_GIT_PRIVATE_KEY
environment variable should be a path to the private key file associated with a valid SSH public key registered with your Git provider.
This key must have permission to both read from and write to your repository.
The key can use the key algorithms rsa
, ecdsa
, and ed25519
.
This key file must have restricted file permissions, allowing only the owner to read and write, such as octal permissions 600
.
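For example, assuming the placeholder key path used above:
chmod 600 /home/myUser/.ssh/id_ecdsa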
If your private key file is passphrase protected, you must also set EKSA_GIT_SSH_KEY_PASSPHRASE
with that value.
This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
namespace: default
spec:
...
#GitOps Support
gitOpsRef:
name: my-git-flux-provider
kind: FluxConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
name: my-git-flux-provider
namespace: default
spec:
systemNamespace: "my-alternative-flux-system-namespace"
clusterConfigPath: "path-to-my-clusters-config"
branch: "main"
git:
repositoryUrl: ssh://git@github.com/myAccount/myClusterGitopsRepo.git
sshKeyAlgorithm: ecdsa
---
git Configuration Spec Details
repositoryUrl (required)
NOTE: The repositoryUrl value for private SSH repositories is of the format ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git . This may differ from the default SSH URL given by your provider. For example, the github.com user interface provides an SSH URL containing a : before the repository owner, rather than a / . Make sure to replace this : with a / , if present.
- Description: The URL of an existing repository where EKS Anywhere will store your cluster configuration and sync it to the cluster. For private repositories, the SSH URL will be of the format
ssh://git@provider.com/$REPO_OWNER/$REPO_NAME.git
- Type: string
sshKeyAlgorithm (optional)
- Description: The SSH key algorithm of the private key specified via EKSA_GIT_PRIVATE_KEY . Defaults to ecdsa
- Type: string
Supported SSH key algorithm types are ecdsa , rsa , and ed25519 .
Be sure that this SSH key algorithm matches the private key file provided by EKSA_GIT_PRIVATE_KEY and that the known hosts entry for the key type is present in EKSA_GIT_KNOWN_HOSTS .
GitOps Configuration
Warning
GitOps Config will be deprecated in v0.11.0 in lieu of using the Flux Config described above.
Please note that for the GitOps config to work successfully the environment variable EKSA_GITHUB_TOKEN needs to be set with a valid GitHub PAT . This is a generic template with detailed descriptions below for reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
namespace: default
spec:
...
#GitOps Support
gitOpsRef:
name: my-gitops
kind: GitOpsConfig
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
name: my-gitops
namespace: default
spec:
flux:
github:
personal: true
repository: myClusterGitopsRepo
owner: myGithubUsername
fluxSystemNamespace: ""
clusterConfigPath: ""
GitOps Configuration Spec Details
flux (required)
- Description: our supported gitops provider is flux. This is the only supported value.
- Type: object
Flux Configuration Spec Details
github (required)
- Description: github is the only currently supported git provider. This defines your github configuration to be used by EKS Anywhere and flux.
- Type: object
github Configuration Spec Details
repository (required)
- Description: The name of the repository where EKS Anywhere will store your cluster configuration, and sync it to the cluster. If the repository exists, we will clone it from the git provider; if it does not exist, we will create it for you.
- Type: string
owner (required)
- Description: The owner of the Github repository; either a Github username or Github organization name. The Personal Access Token used must belong to the owner if this is a personal repository, or have permissions over the organization if this is not a personal repository.
- Type: string
personal (optional)
- Description: Is the repository a personal or organization repository? If personal, this value is true; otherwise, false. If using an organizational repository (e.g. personal is false) the owner field will be used as the organization when authenticating to github.com
- Default: true
- Type: boolean
clusterConfigPath (optional)
- Description: The path relative to the root of the git repository where EKS Anywhere will store the cluster configuration files.
- Default:
clusters/$MANAGEMENT_CLUSTER_NAME
- Type: string
fluxSystemNamespace (optional)
- Description: Namespace in which to install the gitops components in your cluster.
- Default:
flux-system
. - Type: string
branch (optional)
- Description: The branch to use when committing the configuration.
- Default:
main
- Type: string
8 - Proxy configuration
Proxy support (optional)
You can configure EKS Anywhere to use a proxy to connect to the Internet. This is the generic template with proxy configuration for your reference:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
proxyConfiguration:
httpProxy: http-proxy-ip:port
httpsProxy: https-proxy-ip:port
noProxy:
- list of no proxy endpoints
Configuring Docker daemon
EKS Anywhere will proxy for you given the above configuration file. However, to successfully use EKS Anywhere you will also need to ensure your Docker daemon is configured to use the proxy.
This generally means updating your daemon to launch with the HTTPS_PROXY, HTTP_PROXY, and NO_PROXY environment variables.
For an example of how to do this with systemd, please see Docker’s documentation here .
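As a minimal sketch of such a systemd drop-in (the proxy address 192.168.0.1:3218 and the no-proxy list are placeholders), you could create /etc/systemd/system/docker.service.d/http-proxy.conf with contents like:
[Service]
Environment="HTTP_PROXY=http://192.168.0.1:3218"
Environment="HTTPS_PROXY=http://192.168.0.1:3218"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"
Then reload systemd and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker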
Configuring EKS Anywhere proxy without config file
For commands using a cluster config file, EKS Anywhere will derive its proxy config from the cluster configuration file.
However, for commands that do not utilize a cluster config file, you can set the following environment variables:
export HTTPS_PROXY=https-proxy-ip:port
export HTTP_PROXY=http-proxy-ip:port
export NO_PROXY=no-proxy-domain.com,another-domain.com,localhost
Proxy Configuration Spec Details
proxyConfiguration (required)
- Description: top level key; required to use proxy.
- Type: object
httpProxy (required)
- Description: HTTP proxy to use to connect to the internet; must be in the format IP:port
- Type: string
- Example:
httpProxy: 192.168.0.1:3218
httpsProxy (required)
- Description: HTTPS proxy to use to connect to the internet; must be in the format IP:port
- Type: string
- Example:
httpsProxy: 192.168.0.1:3218
noProxy (optional)
- Description: list of endpoints that should not be routed through the proxy; can be an IP, CIDR block, or a domain name
- Type: list of strings
- Example
noProxy:
- localhost
- 192.168.0.1
- 192.168.0.0/16
- .example.com
9 - Registry Mirror configuration
Registry Mirror Support (optional)
You can configure EKS Anywhere to use a private registry as a mirror for pulling the required images.
The following cluster spec shows an example of how to configure registry mirror:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
registryMirrorConfiguration:
endpoint: <private registry IP or hostname>
port: <private registry port>
ociNamespaces:
- registry: <upstream registry IP or hostname>
namespace: <namespace in private registry>
...
caCertContent: |
-----BEGIN CERTIFICATE-----
MIIF1DCCA...
...
es6RXmsCj...
-----END CERTIFICATE-----
Registry Mirror Configuration Spec Details
registryMirrorConfiguration (required)
- Description: top level key; required to use a private registry.
- Type: object
endpoint (required)
- Description: IP address or hostname of the private registry for pulling images
- Type: string
- Example:
endpoint: 192.168.0.1
port (optional)
- Description: port for the private registry. This is an optional field. If a port is not specified, the default HTTPS port 443 is used
- Type: string
- Example: port: 443
ociNamespaces (optional)
- Description: mapping from upstream registries to the locations of the private registry. When specified, the artifacts pulled from an upstream registry will be put in its corresponding location/namespace in the private registry. The target location/namespace must already exist.
- Type: array
- Example:
  ociNamespaces:
    - registry: "public.ecr.aws"
      namespace: "eks-anywhere"
    - registry: "783794618700.dkr.ecr.us-west-2.amazonaws.com"
      namespace: "curated-packages"
caCertContent (optional)
- Description: Certificate Authority (CA) Certificate for the private registry. When using self-signed certificates it is necessary to pass this parameter in the cluster spec. This must be the individual public CA cert used to sign the registry certificate. This will be added to the cluster nodes so that they are able to pull images from the private registry.
  It is also possible to configure caCertContent by exporting an environment variable:
  export EKSA_REGISTRY_MIRROR_CA="/path/to/certificate-file"
- Type: string
- Example:
  caCertContent: |
    -----BEGIN CERTIFICATE-----
    MIIF1DCCA...
    ...
    es6RXmsCj...
    -----END CERTIFICATE-----
authenticate (optional)
- Description: optional field to authenticate with a private registry. When using private registries that require authentication, it is necessary to set this parameter to true in the cluster spec.
- Type: boolean
- Example: authenticate: true
To use an authenticated private registry, please also set the following environment variables:
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
insecureSkipVerify (optional)
- Description: optional field to skip the registry certificate verification. Only use this solution for isolated testing or in a tightly controlled, air-gapped environment. Currently only supported for Ubuntu OS.
- Type: boolean
Import images into a private registry
You can use the download images and import images commands to pull images from public.ecr.aws and push them to your private registry.
The copy packages command must be used if you want to copy EKS Anywhere Curated Packages to your registry mirror.
The download images command also pulls the cilium chart from public.ecr.aws and pushes it to the registry mirror. It requires the registry credentials for performing a login. Set the following environment variables for the login:
export REGISTRY_ENDPOINT=<registry_endpoint>
export REGISTRY_USERNAME=<username>
export REGISTRY_PASSWORD=<password>
Download the EKS Anywhere artifacts to get the EKS Anywhere bundle:
eksctl anywhere download artifacts
tar -xzf eks-anywhere-downloads.tar.gz
Download and import EKS Anywhere images:
eksctl anywhere download images -o eks-anywhere-images.tar
docker login https://${REGISTRY_ENDPOINT}
...
eksctl anywhere import images -i eks-anywhere-images.tar --bundles eks-anywhere-downloads/bundle-release.yaml --registry ${REGISTRY_ENDPOINT}
Use the EKS Anywhere bundle to copy packages:
eksctl anywhere copy packages --bundle ./eks-anywhere-downloads/bundle-release.yaml --dst-cert rootCA.pem ${REGISTRY_ENDPOINT}
Docker configurations
It is necessary to add the private registry’s CA Certificate to the list of CA certificates on the admin machine if your registry uses self-signed certificates.
For Linux, you can place your certificate here: /etc/docker/certs.d/<private-registry-endpoint>/ca.crt (see the sketch below).
For Mac, you can follow this guide to add the certificate to your keychain: https://docs.docker.com/desktop/mac/#add-tls-certificates
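A minimal sketch for Linux, assuming the registry endpoint private-registry.local and a certificate file named ca.crt in the current directory (append :<port> to the directory name if your registry uses a non-default port):
# create the per-registry certificate directory Docker expects
sudo mkdir -p /etc/docker/certs.d/private-registry.local
# copy the registry CA certificate into place
sudo cp ./ca.crt /etc/docker/certs.d/private-registry.local/ca.crt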
Note
You may need to restart Docker after adding the certificates.
Registry configurations
Depending on what registry you decide to use, you will need to create the following projects:
bottlerocket
eks-anywhere
eks-distro
isovalent
cilium-chart
For example, if a registry is available at private-registry.local
, then the following
projects will have to be created:
https://private-registry.local/bottlerocket
https://private-registry.local/eks-anywhere
https://private-registry.local/eks-distro
https://private-registry.local/isovalent
https://private-registry.local/cilium-chart
10 - Package controller configuration
Package Controller Configuration (optional)
You can configure the EKS Anywhere package controller in the cluster spec.
The following cluster spec shows an example of how to configure the package controller:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
name: my-cluster-name
spec:
...
packages:
disable: false
controller:
resources:
requests:
cpu: 100m
memory: 50Mi
limits:
cpu: 750m
memory: 450Mi
Package Controller Configuration Spec Details
packages (optional)
- Description: Top level key; required to configure the package controller.
- Type: object
packages.disable (optional)
- Description: Disable the package controller.
- Type: bool
- Example:
disable: true
packages.controller (optional)
- Description: Configuration for the package controller.
- Type: object
packages.controller.resources (optional)
- Description: Resources for the package controller.
- Type: object
packages.controller.resources.limits (optional)
- Description: Resource limits.
- Type: object
packages.controller.resources.limits.cpu (optional)
- Description: CPU limit.
- Type: string
packages.controller.resources.limits.memory (optional)
- Description: Memory limit.
- Type: string
packages.controller.resources.requests (optional)
- Description: Requested resources.
- Type: object
packages.controller.resources.requests.cpu (optional)
- Description: Requested cpu.
- Type: string
packages.controller.resources.requests.memory (optional)
- Description: Requested memory.
- Type: string
packages.cronjob (optional)
- Description: Configuration for the package controller cron job.
- Type: object
packages.cronjob.disable (optional)
- Description: Disable the cron job.
- Type: bool
- Example:
disable: true
packages.cronjob.resources (optional)
- Description: Resources for the cron job.
- Type: object
packages.cronjob.resources.limits (optional)
- Description: Resource limits.
- Type: object
packages.cronjob.resources.limits.cpu (optional)
- Description: CPU limit.
- Type: string
packages.cronjob.resources.limits.memory (optional)
- Description: Memory limit.
- Type: string
packages.cronjob.resources.requests (optional)
- Description: Requested resources.
- Type: object
packages.cronjob.resources.requests.cpu (optional)
- Description: Requested cpu.
- Type: string
packages.cronjob.resources.requests.memory (optional)
- Description: Requested memory.
- Type: string