EKS Anywhere curated package management
Common tasks for managing curated packages.
The main goal of EKS Anywhere curated packages is to make it easy to install, configure, and maintain operational components in an EKS Anywhere cluster. EKS Anywhere curated packages let you run secure and tested operational components on EKS Anywhere clusters. Please check out EKS Anywhere curated packages concepts and EKS Anywhere curated packages configurations for more details.
For proper curated package support, make sure the cluster Kubernetes version is v1.21 or above and the eksctl anywhere version is v0.11.0 or above (can be checked with the eksctl anywhere version command). Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here.
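For example, you can confirm both requirements from your workstation before proceeding:
eksctl anywhere version
kubectl version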
Set up authentication to use curated packages
When you have been notified that your account has been given access to curated packages, create an IAM user in your account with a policy that only allows ECR read access to the Curated Packages repository; similar to this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ECRRead",
      "Effect": "Allow",
      "Action": [
        "ecr:DescribeImageScanFindings",
        "ecr:GetDownloadUrlForLayer",
        "ecr:DescribeRegistry",
        "ecr:DescribePullThroughCacheRules",
        "ecr:DescribeImageReplicationStatus",
        "ecr:ListTagsForResource",
        "ecr:ListImages",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "arn:aws:ecr:*:783794618700:repository/*"
    },
    {
      "Sid": "ECRLogin",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken"
      ],
      "Resource": "*"
    }
  ]
}
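If you prefer the AWS CLI, a minimal sketch of creating such a user follows; the user and policy names are hypothetical, and the policy document is assumed to be saved as curated-packages-policy.json:
# Hypothetical user and policy names; adjust to your own conventions.
aws iam create-user --user-name eksa-curated-packages
aws iam create-policy --policy-name eksa-curated-packages-ecr-read --policy-document file://curated-packages-policy.json
aws iam attach-user-policy --user-name eksa-curated-packages --policy-arn arn:aws:iam::<your-account-id>:policy/eksa-curated-packages-ecr-read
# Generate the access key pair used for the environment variables below.
aws iam create-access-key --user-name eksa-curated-packages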
Note: Curated Packages now supports pulling images from the following regions. Set the corresponding EKSA_AWS_REGION prior to cluster creation to choose which region to pull from; if not set, it will default to pulling from us-west-2.
"us-east-2",
"us-east-1",
"us-west-1",
"us-west-2",
"ap-northeast-3",
"ap-northeast-2",
"ap-southeast-1",
"ap-southeast-2",
"ap-northeast-1",
"ca-central-1",
"eu-central-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"eu-north-1",
"sa-east-1"
Create credentials for this user and set and export the following environment variables:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
Make sure you are authenticated with the AWS CLI
export AWS_ACCESS_KEY_ID="your*access*id"
export AWS_SECRET_ACCESS_KEY="your*secret*key"
aws sts get-caller-identity
Log in to Docker
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 783794618700.dkr.ecr.us-west-2.amazonaws.com
Verify you can pull an image
docker pull 783794618700.dkr.ecr.us-west-2.amazonaws.com/emissary-ingress/emissary:v3.0.0-9ded128b4606165b41aca52271abe7fa44fa7109
If the image downloads successfully, it worked!
Discover curated packages
You can get a list of the available packages from the command line:
export CLUSTER_NAME=nameofyourcluster
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
eksctl anywhere list packages --kube-version 1.23
Example command output:
Package Version(s)
------- ----------
hello-eks-anywhere 0.1.2-a6847010915747a9fc8a412b233a2b1ee608ae76
adot 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b
cert-manager 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f
cluster-autoscaler 9.21.0-1.23-5516c0368ff74d14c328d61fe374da9787ecf437
harbor 2.5.1-ee7e5a6898b6c35668a1c5789aa0d654fad6c913
metallb 0.13.7-758df43f8c5a3c2ac693365d06e7b0feba87efd5
metallb-crds 0.13.7-758df43f8c5a3c2ac693365d06e7b0feba87efd5
metrics-server 0.6.1-eks-1-23-6-c94ed410f56421659f554f13b4af7a877da72bc1
emissary 3.3.0-cbf71de34d8bb5a72083f497d599da63e8b3837b
emissary-crds 3.3.0-cbf71de34d8bb5a72083f497d599da63e8b3837b
prometheus 2.41.0-b53c8be243a6cc3ac2553de24ab9f726d9b851ca
Generate a curated-packages config
The example shows how to install the harbor package from the curated package list.
export CLUSTER_NAME=nameofyourcluster
eksctl anywhere generate package harbor --cluster ${CLUSTER_NAME} --kube-version 1.23 > packages.yaml
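After editing packages.yaml to your needs, the package can be installed with the create packages command, as the per-package guides below illustrate:
eksctl anywhere create packages -f packages.yaml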
Available curated packages and troubleshooting guides are listed below.
Install package controller after installation
If you created a cluster without the package controller or if the package controller was not properly configured, you may need to take the following steps to enable it.
Make sure you are authenticated with the AWS CLI. Use the credentials you set up for packages. These credentials should have limited capabilities:
export AWS_ACCESS_KEY_ID="your*access*id"
export AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
Verify your credentials are working:
aws sts get-caller-identity
Log in to Docker
aws ecr get-login-password | docker login --username AWS --password-stdin 783794618700.dkr.ecr.us-west-2.amazonaws.com
Verify you can pull an image
docker pull 783794618700.dkr.ecr.us-west-2.amazonaws.com/emissary-ingress/emissary:v3.0.0-9ded128b4606165b41aca52271abe7fa44fa7109
If the image downloads successfully, it worked!
If you do not have the package controller installed (it is installed by default), install it now:
eksctl anywhere install packagecontroller -f cluster.yaml
If you had the package controller disabled, you may need to modify your cluster.yaml
to enable it.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: billy
spec:
  packages:
    disable: false
You may need to create or update your credentials which you can do with a command like this. Set the environment variables to the proper values before running the command.
kubectl delete secret -n eksa-packages aws-secret
kubectl create secret -n eksa-packages generic aws-secret \
--from-literal=AWS_ACCESS_KEY_ID=${EKSA_AWS_ACCESS_KEY_ID} \
--from-literal=AWS_SECRET_ACCESS_KEY=${EKSA_AWS_SECRET_ACCESS_KEY} \
--from-literal=REGION=${EKSA_AWS_REGION}
If you recreate secrets, you can manually re-enable the cronjob and run the job to update the image pull secrets:
kubectl get cronjob -n eksa-packages cron-ecr-renew -o yaml | yq e '.spec.suspend |= false' - | kubectl apply -f -
kubectl create job -n eksa-packages --from=cronjob/cron-ecr-renew run-it-now
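You can then confirm that the one-off renewal job completed; the job name matches the one created above:
kubectl get jobs -n eksa-packages
kubectl logs job/run-it-now -n eksa-packages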
Upgrade the packages controller
Starting with EKS-A v0.15.0 (packages controller v0.3.9+), the package controller upgrades automatically according to the selected bundle. For any version prior to v0.3.X, the following manual steps must be executed to upgrade.
- Ensure the namespace will be kept
kubectl annotate namespaces eksa-packages helm.sh/resource-policy=keep
- Uninstall the eks-anywhere-packages helm release
helm uninstall eks-anywhere-packages
- Remove the secret called aws-secret (we will need credentials when installing the new version)
kubectl delete secret -n eksa-packages aws-secret
- Install the new version using the latest eksctl-anywhere binary
eksctl anywhere install packagecontroller -f ${CLUSTER_NAME}.yaml
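Afterwards, confirm that the reinstalled controller is running; the deployment name matches the one used elsewhere in this guide:
kubectl get deployment eks-anywhere-packages -n eksa-packages
kubectl get pods -n eksa-packages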
2 - Curated Packages Troubleshooting
Troubleshooting specific to curated packages
General debugging
The major component of Curated Packages is the package controller. If the container is not running or not running correctly, packages will not be installed. Generally it should be debugged like any other Kubernetes application. The first step is to check that the pod is running.
kubectl get pods -n eksa-packages
You should see at least two pods: the controller pod in Running state and one or more refresher pods in Completed state.
NAME READY STATUS RESTARTS AGE
eks-anywhere-packages-69d7bb9dd9-9d47l 1/1 Running 0 14s
eksa-auth-refresher-w82nm 0/1 Completed 0 10s
The describe command might help to get more detail on why there is a problem:
kubectl describe pods -n eksa-packages
Logs of the controller can be seen in a normal Kubernetes fashion:
kubectl logs deploy/eks-anywhere-packages -n eksa-packages controller
To get the general state of the package controller, run the following command:
kubectl get packages,packagebundles,packagebundlecontrollers -A
You should see an active packagebundlecontroller and an available bundle. The packagebundlecontroller should indicate the active bundle. It may take a few minutes to download and activate the latest bundle. The state of the package in this example is installing and there is an error downloading the chart.
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-sammy package.packages.eks.amazonaws.com/my-hello hello-eks-anywhere 42h installed 0.1.1-bc7dc6bb874632972cd92a2bca429a846f7aa785 0.1.1-bc7dc6bb874632972cd92a2bca429a846f7aa785 (latest)
eksa-packages-tlhowe package.packages.eks.amazonaws.com/my-hello hello-eks-anywhere 44h installed 0.1.1-083e68edbbc62ca0228a5669e89e4d3da99ff73b 0.1.1-083e68edbbc62ca0228a5669e89e4d3da99ff73b (latest)
NAMESPACE NAME STATE
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-21-83 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-70 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-81 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-82 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-83 available
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/sammy v1-23-70 upgrade available v1-23-83 available
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/tlhowe v1-21-83 active active
Package controller not running
If you do not see the pod or the related resources for the package controller, it may not be installed.
No resources found in eksa-packages namespace.
Most likely the cluster was created with an older version of the EKS Anywhere CLI. Curated packages became generally available with v0.11.0. Use the eksctl anywhere version command to verify you are running a new enough release, and use the eksctl anywhere install packagecontroller command to install the package controller on a cluster created with an older release.
Error: this command is currently not supported
Error: this command is currently not supported
Curated packages became generally available with version v0.11.0. Use the eksctl anywhere version command to make sure you are running version v0.11.0 or later.
Error: cert-manager is not present in the cluster
Error: curated packages cannot be installed as cert-manager is not present in the cluster
This is most likely caused by an attempt to install curated packages on a workload cluster with an eksctl anywhere version older than v0.12.0. In order to use packages on workload clusters, please upgrade eksctl anywhere to version v0.12.0 or later. The package manager will remotely manage packages on the workload cluster from the management cluster.
Package registry authentication
Error: ImagePullBackOff on Package
If a package fails to start with ImagePullBackOff:
NAME READY STATUS RESTARTS AGE
generated-harbor-jobservice-564d6fdc87 0/1 ImagePullBackOff 0 2d23h
If a package pod cannot pull images, you may not have your AWS credentials set up properly. Verify that your credentials are working properly.
Make sure you are authenticated with the AWS CLI. Use the credentials you set up for packages. These credentials should have limited capabilities:
export AWS_ACCESS_KEY_ID="your*access*id"
export AWS_SECRET_ACCESS_KEY="your*secret*key"
aws sts get-caller-identity
Log in to Docker
aws ecr get-login-password | docker login --username AWS --password-stdin 783794618700.dkr.ecr.us-west-2.amazonaws.com
Verify you can pull an image
docker pull 783794618700.dkr.ecr.us-west-2.amazonaws.com/emissary-ingress/emissary:v3.0.0-9ded128b4606165b41aca52271abe7fa44fa7109
If the image downloads successfully, it worked!
You may need to create or update your credentials which you can do with a command like this. Set the environment variables to the proper values before running the command.
kubectl delete secret -n eksa-packages aws-secret
kubectl create secret -n eksa-packages generic aws-secret --from-literal=AWS_ACCESS_KEY_ID=${EKSA_AWS_ACCESS_KEY_ID} --from-literal=AWS_SECRET_ACCESS_KEY=${EKSA_AWS_SECRET_ACCESS_KEY} --from-literal=REGION=${EKSA_AWS_REGION}
If you recreate secrets, you can manually re-enable the cronjob and run the job to update the image pull secrets:
kubectl get cronjob -n eksa-packages cron-ecr-renew -o yaml | yq e '.spec.suspend |= false' - | kubectl apply -f -
kubectl create job -n eksa-packages --from=cronjob/cron-ecr-renew run-it-now
Warning: not able to trigger cron job
secret/aws-secret created
Warning: not able to trigger cron job, please be aware this will prevent the package controller from installing curated packages.
This is most likely caused by an attempt to install curated packages on a cluster running Kubernetes v1.20 or below. Note that curated packages only support Kubernetes v1.21 and above.
Package on workload clusters
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster. When interacting with package resources for a workload cluster using the following commands, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster.
Package manager is not managing packages on workload cluster
If the package manager is not managing packages on a workload cluster, make sure the management cluster has various resources for the workload cluster:
kubectl get packages,packagebundles,packagebundlecontrollers -A
You should see a PackageBundleController for the workload cluster named with the name of the workload cluster and the status should be set. There should be a namespace for the workload cluster as well:
kubectl get ns | grep eksa-packages
Create a PackageBundleController for the workload cluster if it does not exist (where billy here is the cluster name):
cat <<! | kubectl apply -f -
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: PackageBundleController
metadata:
  name: billy
  namespace: eksa-packages
!
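You can then watch the new PackageBundleController initialize and report its state:
kubectl get packagebundlecontroller -n eksa-packages billy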
Workload cluster is disconnected
Cluster is disconnected:
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/billy disconnected initializing target client: getting kubeconfig for cluster "billy": Secret "billy-kubeconfig" not found
In the example above, the secret does not exist. This may mean the management cluster is not managing the workload cluster, the PackageBundleController name is wrong, or the secret was deleted.
This also may happen if the management cluster cannot communicate with the workload cluster or the workload cluster was deleted, although the detail would be different.
Error: the server doesn’t have a resource type “packages”
All packages are remotely managed by the management cluster, and packages, packagebundles, and packagebundlecontrollers resources are all deployed on the management cluster. Please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster while interacting with package-related resources.
This happens when a package command is run against a cluster that is not managed by the management cluster. To get a list of the clusters managed by the management cluster, run the following command:
eksctl anywhere get packagebundlecontroller
NAME ACTIVEBUNDLE STATE DETAIL
billy v1-21-87 active
There will be one packagebundlecontroller for each cluster that is being managed. The only valid cluster name in the above example is billy.
3 - Cert-Manager
Install/update/upgrade/uninstall Cert-Manager
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install on workload cluster
NOTE: The cert-manager package can only be installed on a workload cluster
- Generate the package configuration
eksctl anywhere generate package cert-manager --cluster <cluster-name> > cert-manager.yaml
- Add the desired configuration to cert-manager.yaml
Please see complete configuration options for all configuration options and their default values.
Example package file configuring a cert-manager package to run on a workload cluster.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-cert-manager
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: cert-manager
  targetNamespace: <namespace-to-install-component>
- Install Cert-Manager
eksctl anywhere create packages -f cert-manager.yaml
- Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-cert-manager cert-manager 15s installed 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f (latest)
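To smoke-test the installation on the workload cluster, you could create a self-signed Issuer and Certificate; this is a minimal sketch using upstream cert-manager APIs, and the resource names are hypothetical:
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-test
  namespace: default
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert-test
  namespace: default
spec:
  secretName: selfsigned-cert-test-tls
  dnsNames:
    - example.local
  issuerRef:
    name: selfsigned-test
    kind: Issuer
EOF
# The certificate should report READY=True once cert-manager has issued it.
kubectl get certificate -n default selfsigned-cert-test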
Update
To update package configuration, update the cert-manager.yaml file and run the following command:
eksctl anywhere apply package -f cert-manager.yaml
Upgrade
Cert-Manager will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall cert-manager, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> cert-manager
4 - Cluster Autoscaler
Install/upgrade/uninstall Cluster Autoscaler
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Choose a Deployment Approach
Each Cluster Autoscaler instance can target one cluster for autoscaling.
There are three ways to deploy a Cluster Autoscaler instance:
- Cluster Autoscaler deployed in the management cluster to autoscale the management cluster itself
- Cluster Autoscaler deployed in the management cluster to autoscale a remote workload cluster
- Cluster Autoscaler deployed in the workload cluster to autoscale the workload cluster itself
To read more about the tradeoffs of these different approaches, see here.
Install Cluster Autoscaler in management cluster
- Ensure you have configured at least one WorkerNodeGroup in your cluster to support autoscaling, as outlined here; a minimal excerpt is shown below.
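For reference, an autoscaling-enabled worker node group in the EKS Anywhere Cluster spec looks roughly like the following excerpt; the group name, counts, and machine config reference are illustrative:
workerNodeGroupConfigurations:
  - name: md-0
    autoscalingConfiguration:
      minCount: 1
      maxCount: 5
    machineGroupRef:
      kind: VSphereMachineConfig
      name: <cluster-name>-worker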
- Generate the package configuration
eksctl anywhere generate package cluster-autoscaler --cluster <cluster-name> > cluster-autoscaler.yaml
- Add the desired configuration to cluster-autoscaler.yaml
Please see complete configuration options for all configuration options and their default values.
Example package file configuring a cluster autoscaler package to run in the management cluster.
Note: Here, the <cluster-name> value represents the name of the management or workload cluster you would like to autoscale.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: cluster-autoscaler
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: cluster-autoscaler
  targetNamespace: <namespace-to-install-component>
  config: |-
    cloudProvider: "clusterapi"
    autoDiscovery:
      clusterName: "<cluster-name>"
- Install Cluster Autoscaler
eksctl anywhere create packages -f cluster-autoscaler.yaml
- Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-mgmt-v-vmc cluster-autoscaler cluster-autoscaler 18h installed 9.21.0-1.21-147e2a701f6ab625452fe311d5c94a167270f365 9.21.0-1.21-147e2a701f6ab625452fe311d5c94a167270f365 (latest)
Update
To update package configuration, update the cluster-autoscaler.yaml file and run the following command:
eksctl anywhere apply package -f cluster-autoscaler.yaml
Upgrade
Cluster Autoscaler will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Cluster Autoscaler, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> cluster-autoscaler
Install Cluster Autoscaler in workload cluster
A few extra steps are required to install cluster autoscaler in a workload cluster instead of the management cluster.
First, retrieve the management cluster’s kubeconfig secret:
kubectl -n eksa-system get secrets <management-cluster-name>-kubeconfig -o yaml > mgmt-secret.yaml
Update the secret’s namespace to the namespace in the workload cluster that you would like to deploy the cluster autoscaler to.
Then, apply the secret to the workload cluster.
kubectl --kubeconfig /path/to/workload/kubeconfig apply -f mgmt-secret.yaml
Now apply this package configuration to the management cluster:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: workload-cluster-autoscaler
  namespace: eksa-packages-<workload-cluster-name>
spec:
  packageName: cluster-autoscaler
  targetNamespace: <workload-cluster-namespace-to-install-components>
  config: |-
    cloudProvider: "clusterapi"
    autoDiscovery:
      clusterName: "<workload-cluster-name>"
    clusterAPIMode: "incluster-kubeconfig"
    clusterAPICloudConfigPath: "/etc/kubernetes/value"
    extraVolumeSecrets:
      cluster-autoscaler-cloud-config:
        mountPath: "/etc/kubernetes"
        name: "<management-cluster-name>-kubeconfig"
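Assuming the manifest above is saved to a file, it can be applied to the management cluster with kubectl; the file name here is hypothetical:
kubectl apply -f workload-cluster-autoscaler.yaml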
5 - Metrics Server
Install/upgrade/uninstall Metrics Server
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
- Generate the package configuration
eksctl anywhere generate package metrics-server --cluster <cluster-name> > metrics-server.yaml
- Add the desired configuration to metrics-server.yaml
Please see complete configuration options for all configuration options and their default values.
Example package file configuring a metrics-server package to run on a management cluster.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: metrics-server
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: metrics-server
  targetNamespace: <namespace-to-install-component>
  config: |-
    args:
      - "--kubelet-insecure-tls"
- Install Metrics Server
eksctl anywhere create packages -f metrics-server.yaml
- Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
metrics-server metrics-server 8h installed 0.6.1-eks-1-23-6-b4c2524fabb3dd4c5f9b9070a418d740d3e1a8a2 0.6.1-eks-1-23-6-b4c2524fabb3dd4c5f9b9070a418d740d3e1a8a2 (latest)
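Once the package is installed, you can confirm metrics are being collected on the target cluster; metrics typically appear a minute or so after the pods start:
kubectl top nodes
kubectl top pods -A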
Update
To update package configuration, update the metrics-server.yaml file and run the following command:
eksctl anywhere apply package -f metrics-server.yaml
Upgrade
Metrics Server will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Metrics Server, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> metrics-server
6 - AWS Distro for OpenTelemetry (ADOT)
Install/upgrade/uninstall ADOT
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
- Generate the package configuration
eksctl anywhere generate package adot --cluster <cluster-name> > adot.yaml
- Add the desired configuration to adot.yaml
Please see complete configuration options for all configuration options and their default values.
Example package file with daemonSet mode and default configuration:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-adot
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: adot
  targetNamespace: observability
  config: |
    mode: daemonset
Example package file with deployment mode and customized collector components to scrape the ADOT collector's own metrics:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-adot
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: adot
  targetNamespace: observability
  config: |
    mode: deployment
    replicaCount: 2
    config:
      receivers:
        prometheus:
          config:
            scrape_configs:
              - job_name: opentelemetry-collector
                scrape_interval: 10s
                static_configs:
                  - targets:
                      - ${MY_POD_IP}:8888
      processors:
        batch: {}
        memory_limiter: null
      exporters:
        logging:
          loglevel: debug
        prometheusremotewrite:
          endpoint: "<prometheus-remote-write-end-point>"
      extensions:
        health_check: {}
        memory_ballast: {}
      service:
        pipelines:
          metrics:
            receivers: [prometheus]
            processors: [batch]
            exporters: [logging, prometheusremotewrite]
        telemetry:
          metrics:
            address: 0.0.0.0:8888
- Create the namespace (if overriding targetNamespace, change observability to the value of targetNamespace)
kubectl create namespace observability
- Install adot
eksctl anywhere create packages -f adot.yaml
- Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-adot adot 19h installed 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b (latest)
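You can also confirm the collector pods are running in the target namespace (observability in the examples above), against the cluster where ADOT was installed:
kubectl get pods -n observability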
Update
To update package configuration, update the adot.yaml file and run the following command:
eksctl anywhere apply package -f adot.yaml
Upgrade
ADOT will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall ADOT, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> my-adot
7 - Prometheus
Install/upgrade/uninstall Prometheus
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
- Generate the package configuration
eksctl anywhere generate package prometheus --cluster <cluster-name> > prometheus.yaml
- Add the desired configuration to prometheus.yaml
Please see complete configuration options for all configuration options and their default values.
Example package file with default configuration, which enables prometheus-server and node-exporter:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: generated-prometheus
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: prometheus
Example package file with prometheus-server (or node-exporter) disabled:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: generated-prometheus
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: prometheus
  config: |
    # disable prometheus-server
    server:
      enabled: false
    # or disable node-exporter
    # nodeExporter:
    #   enabled: false
Example package file with prometheus-server deployed as a statefulSet with replicaCount 2, and set scrape config to collect Prometheus-server’s own metrics only:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: generated-prometheus
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: prometheus
  targetNamespace: observability
  config: |
    server:
      replicaCount: 2
      statefulSet:
        enabled: true
    serverFiles:
      prometheus.yml:
        scrape_configs:
          - job_name: prometheus
            static_configs:
              - targets:
                  - localhost:9090
- Create the namespace (if overriding targetNamespace, change observability to the value of targetNamespace)
kubectl create namespace observability
- Install prometheus
eksctl anywhere create packages -f prometheus.yaml
- Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-<cluster-name> generated-prometheus prometheus 17m installed 2.41.0-b53c8be243a6cc3ac2553de24ab9f726d9b851ca 2.41.0-b53c8be243a6cc3ac2553de24ab9f726d9b851ca (latest)
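To reach the Prometheus UI, you can port-forward the prometheus-server service on the cluster where it was installed; the exact service name depends on the package name, so list the services first:
kubectl get svc -n observability
kubectl port-forward -n observability svc/<prometheus-server-service> 9090:80
# Then browse to http://localhost:9090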
Update
To update package configuration, update the prometheus.yaml file and run the following command:
eksctl anywhere apply package -f prometheus.yaml
Upgrade
Prometheus will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Prometheus, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> generated-prometheus
8 - Emissary Ingress
Install/upgrade/uninstall Emissary Ingress
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
- Generate the package configuration
eksctl anywhere generate package emissary --cluster <cluster-name> > emissary.yaml
- Add the desired configuration to emissary.yaml
Please see complete configuration options for all configuration options and their default values.
Example package file with standard configuration.
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: emissary
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: emissary
- Install Emissary
eksctl anywhere create packages -f emissary.yaml
- Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages emissary emissary 2m57s installed 3.0.0-a507e09c2a92c83d65737835f6bac03b9b341467 3.0.0-a507e09c2a92c83d65737835f6bac03b9b341467 (latest)
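Once Emissary is running, routing is configured with its own CRDs. A minimal sketch, based on the upstream Emissary-ingress 3.x APIs, of a Listener plus a Mapping that routes /backend/ to a hypothetical quote Service looks like this:
cat <<EOF | kubectl apply -f -
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: http-listener
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: quote-backend
spec:
  hostname: "*"
  prefix: /backend/
  service: quote
EOF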
Update
To update package configuration, update the emissary.yaml file and run the following command:
eksctl anywhere apply package -f emissary.yaml
Upgrade
Emissary will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall Emissary, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> emissary
9 - Harbor
Install/upgrade/uninstall Harbor
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
- Generate the package configuration
eksctl anywhere generate package harbor --cluster <cluster-name> > harbor.yaml
- Add the desired configuration to harbor.yaml
Please see complete configuration options for all configuration options and their default values.
Important
- All configuration options are listed in dot notation (e.g., expose.tls.enabled) in the doc, but they have to be transformed into hierarchical structures when specified in the config section in the YAML spec.
- The Harbor web portal is exposed through NodePort by default, and its default port number is 30003 with TLS enabled and 30002 with TLS disabled.
- TLS is enabled by default for connections to the Harbor web portal, and a secret resource named harbor-tls-secret is required for that purpose. It can be provisioned through cert-manager or manually with the following command using a self-signed certificate:
kubectl create secret tls harbor-tls-secret --cert=[path to certificate file] --key=[path to key file] -n eksa-packages
- secretKey has to be set as a string of 16 characters for encryption.
TLS example with auto certificate generation
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-harbor
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: harbor
  config: |-
    secretKey: "use-a-secret-key"
    externalURL: https://harbor.eksa.demo:30003
    expose:
      tls:
        certSource: auto
        auto:
          commonName: "harbor.eksa.demo"
Non-TLS example
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-harbor
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: harbor
  config: |-
    secretKey: "use-a-secret-key"
    externalURL: http://harbor.eksa.demo:30002
    expose:
      tls:
        enabled: false
- Install Harbor
eksctl anywhere create packages -f harbor.yaml
- Check Harbor
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-harbor harbor 5m34s installed v2.5.1 v2.5.1 (latest)
The Harbor web portal is accessible at whatever externalURL is set to. See complete configuration options for all default values.
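As a quick connectivity check, you can hit the portal on any node using the NodePort noted above (30003 with TLS, 30002 without); the node IP below is a placeholder:
kubectl get nodes -o wide
curl -k https://<node-ip>:30003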
Update
To update package configuration, update the harbor.yaml file and run the following command:
eksctl anywhere apply package -f harbor.yaml
Upgrade
Note
- New versions of software packages will be automatically downloaded but not automatically installed. You can always manually run eksctl to check and install updates.
- Verify a new bundle is available
eksctl anywhere get packagebundle
Example command output
NAME VERSION STATE
v1.25-120 1.25 active (upgrade available)
v1.26-120 1.26 inactive
- Upgrade Harbor
eksctl anywhere upgrade packages --bundle-version v1.26-120
- Check Harbor
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-harbor Harbor 14m installed v2.7.1 v2.7.1 (latest)
Uninstall
- Uninstall Harbor
Important
- By default, PVCs created for jobservice and registry are not removed during a package delete operation, which can be changed by leaving persistence.resourcePolicy empty.
eksctl anywhere delete package --cluster <cluster-name> my-harbor
10 - MetalLB
Install/upgrade/uninstall MetalLB
If you have not already done so, make sure your cluster meets the package prerequisites.
Be sure to refer to the troubleshooting guide in the event of a problem.
Important
- Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
- While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
- Generate the package configuration
eksctl anywhere generate package metallb --cluster <cluster-name> > metallb.yaml
- Add the desired configuration to metallb.yaml
Please see complete configuration options for all configuration options and their default values.
Example package file with BGP configuration:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: mylb
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: metallb
  config: |
    IPAddressPools:
      - name: default
        addresses:
          - 10.220.0.93/32
          - 10.220.0.97-10.220.0.120
    BGPAdvertisements:
      - ipAddressPools:
          - default
    BGPPeers:
      - peerAddress: 10.220.0.2
        peerASN: 65000
        myASN: 65002
Example package file with ARP configuration:
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: mylb
  namespace: eksa-packages
spec:
  packageName: metallb
  config: |
    IPAddressPools:
      - name: default
        addresses:
          - 10.220.0.93/32
          - 10.220.0.97-10.220.0.120
    L2Advertisements:
      - ipAddressPools:
          - default
- Create the namespace (if overriding targetNamespace, change metallb-system to the value of targetNamespace)
kubectl create namespace metallb-system
- Install MetalLB
eksctl anywhere create packages -f metallb.yaml
- Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
mylb metallb 22h installed 0.13.5-ce5b5de19014202cebd4ab4c091830a3b6dfea06 0.13.5-ce5b5de19014202cebd4ab4c091830a3b6dfea06 (latest)
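To verify address assignment, you can expose a test Service of type LoadBalancer and check that it receives an external IP from the pool configured above; the deployment and service names here are hypothetical:
kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
# EXTERNAL-IP should be assigned from the configured IPAddressPool.
kubectl get svc lb-test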
Update
To update package configuration, update the metallb.yaml file and run the following command:
eksctl anywhere apply package -f metallb.yaml
Upgrade
MetalLB will automatically be upgraded when a new bundle is activated.
Uninstall
To uninstall MetalLB, simply delete the package
eksctl anywhere delete package --cluster <cluster-name> mylb