Getting started
The Getting started section includes information on starting to set up your own EKS Anywhere local or production environment.
EKS Anywhere can be deployed as a simple, unsupported local environment or as a production-quality environment that can become a supported on-premises Kubernetes platform.
This section lists the different ways to set up and run EKS Anywhere.
When you install EKS Anywhere, choose an installation type based on: ease of maintenance, security, control, available resources, and expertise required to operate and manage a cluster.
Install EKS Anywhere
To create an EKS Anywhere cluster you’ll need to download the command line tool that is used to create and manage a cluster.
You can install it by following the installation guide.
Local environment
If you just want to try out EKS Anywhere, there is a single-system method for installing and running EKS Anywhere using Docker.
See EKS Anywhere local environment.
Production environment
When evaluating a solution for a production environment, consider deploying EKS Anywhere on one of the providers listed on the Create production cluster page.
1 - Install EKS Anywhere
EKS Anywhere will create and manage Kubernetes clusters on multiple providers.
Currently we support creating development clusters locally using Docker and production clusters from providers listed on the Create production cluster
page.
Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you will run Docker and add some binaries.
From there, you create the cluster for your chosen provider.
See Create cluster workflow
for an overview of the cluster creation process.
To create an EKS Anywhere cluster you will need eksctl
and the eksctl-anywhere
plugin.
This will let you create a cluster in multiple providers for local development or production workloads.
Administrative machine prerequisites
Via Homebrew (macOS and Linux)
Warning
EKS Anywhere only works on computers with x86 and amd64 processor architecture.
It currently will not work on computers with Apple Silicon or Arm-based processors.
You can install eksctl and eksctl-anywhere with Homebrew.
This package will also install kubectl and the aws-iam-authenticator, which will be helpful to test EKS Anywhere clusters.
brew install aws/tap/eks-anywhere
Manually (macOS and Linux)
Install the latest release of eksctl.
The EKS Anywhere plugin requires eksctl version 0.66.0 or newer.
curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
--silent --location \
| tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
Install the eksctl-anywhere
plugin.
export EKSA_RELEASE="0.13.1" OS="$(uname -s | tr A-Z a-z)" RELEASE_NUMBER=26
curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/amd64/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \
--silent --location \
| tar xz ./eksctl-anywhere
sudo mv ./eksctl-anywhere /usr/local/bin/
Install the kubectl Kubernetes command line tool.
This can be done by following the instructions here, or you can install the latest kubectl directly with the following.
export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo mv ./kubectl /usr/local/bin
sudo chmod +x /usr/local/bin/kubectl
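To confirm the binary is installed and executable, you can print the client version:
kubectl version --client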
Upgrade eksctl-anywhere
If you installed eksctl-anywhere via Homebrew, you can upgrade the binary with:
brew update
brew upgrade eks-anywhere
If you installed eksctl-anywhere manually, you should follow the installation steps to download the latest release.
You can verify your installed version as shown below.
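The plugin exposes a version subcommand, which also confirms the binary is on your PATH:
eksctl anywhere version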
Deploy a cluster
Once you have the tools installed you can deploy a local cluster or production cluster in the next steps.
2 - Create local cluster
EKS Anywhere docker provider deployments
EKS Anywhere supports a Docker provider for development and testing use cases only.
This allows you to try EKS Anywhere on your local system before deploying to a supported provider to create either:
- A single, standalone cluster or
- Multiple management/workload clusters on the same provider, as described in Cluster topologies
.
The management/workload topology is recommended for production clusters and can be tried out here using both
eksctl
and GitOps
tools.
Create a standalone cluster
Prerequisite Checklist
To install the EKS Anywhere binaries and see system requirements please follow the installation guide
.
Steps
-
Generate a cluster config
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider docker > $CLUSTER_NAME.yaml
The command above creates a file named mgmt.yaml (that is, $CLUSTER_NAME.yaml) with the contents below in the path where it is executed.
The configuration specification is divided into two sections:
- Cluster
- DockerDatacenterConfig
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.24"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
    - count: 1
      name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
spec: {}
- Apart from the base configuration, you can add optional configuration to enable supported features:
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create Cluster:
Note The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. Due to this there might be some warnings in the CLI if proper authentication is not set up.
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
Example command output
Performing setup and validations
✅ validation succeeded {"validation": "docker Provider setup is valid"}
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Deleting bootstrap cluster
🎉 Cluster created!
----------------------------------------------------------------------------------
The Amazon EKS Anywhere Curated Packages are only available to customers with the
Amazon EKS Anywhere Enterprise Subscription
----------------------------------------------------------------------------------
Installing curated packages controller on management cluster
secret/aws-secret created
job.batch/eksa-auth-refresher created
Note: to install curated packages during cluster creation, use the --install-packages packages.yaml flag
-
Use the cluster
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl get ns
Example command output
NAME STATUS AGE
capd-system Active 21m
capi-kubeadm-bootstrap-system Active 21m
capi-kubeadm-control-plane-system Active 21m
capi-system Active 21m
capi-webhook-system Active 21m
cert-manager Active 22m
default Active 23m
eksa-packages Active 23m
eksa-system Active 20m
kube-node-lease Active 23m
kube-public Active 23m
kube-system Active 23m
You can now use the cluster like you would any Kubernetes cluster.
Deploy the test application with:
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
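As a quick check that the manifest was applied, you can look for the sample workload (a sketch assuming the manifest creates a Deployment named hello-eks-a in the default namespace; see the linked section for full verification steps):
kubectl get deployment hello-eks-a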
Create management/workload clusters
To try the recommended EKS Anywhere topology
, you can create a management cluster and one or more workload clusters on the same Docker provider.
Prerequisite Checklist
To install the EKS Anywhere binaries and see system requirements please follow the installation guide
.
Create a management cluster
-
Generate a management cluster config (named mgmt
for this example):
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider docker > eksa-mgmt-cluster.yaml
-
Modify the management cluster config (eksa-mgmt-cluster.yaml). You can use the same configuration described earlier, or modify it to use GitOps, as shown below:
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
  namespace: default
spec:
  bundlesRef:
    apiVersion: anywhere.eks.amazonaws.com/v1alpha1
    name: bundles-1
    namespace: eksa-system
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
        - 192.168.0.0/16
    services:
      cidrBlocks:
        - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  gitOpsRef:
    kind: FluxConfig
    name: mgmt
  kubernetesVersion: "1.24"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
    - count: 1
      name: md-1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
  namespace: default
spec: {}
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: FluxConfig
metadata:
  name: mgmt
  namespace: default
spec:
  branch: main
  clusterConfigPath: clusters/mgmt
  github:
    owner: <your github account, such as example for https://github.com/example>
    personal: true
    repository: <your github repo, such as test for https://github.com/example/test>
  systemNamespace: flux-system
---
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
-
Create cluster
eksctl anywhere create cluster \
   -f eksa-mgmt-cluster.yaml
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
kubernetesVersion: "1.24"
managementCluster:
name: mgmt
workerNodeGroupConfigurations:
...
Create separate workload clusters
Follow these steps to have your management cluster create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider docker > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
NOTE: Ensure workload cluster object names (Cluster
, DockerDatacenterConfig
, DockerMachineConfig
, etc.) are distinct from management cluster object names. Be sure to set the managementCluster
field to identify the name of the management cluster.
-
Create a workload cluster in one of the following ways. See Manage cluster with GitOps for more details on the GitOps approach.
- eksctl CLI: Useful for temporary cluster configurations. To create a workload cluster with
eksctl
, run:
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
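To confirm the management cluster is now tracking the new workload cluster, you can list the EKS Anywhere cluster objects through the management kubeconfig (mirroring the kubectl get clusters usage shown earlier):
kubectl get clusters --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig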
-
To check the workload cluster, get the workload cluster credentials and run a test workload:
-
If your workload cluster was created with eksctl
,
change your credentials to point to the new workload cluster (for example, w01
), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
If your workload cluster was created with GitOps, you can get credentials and run the test application as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
NOTE: For Docker, you must modify the server field of the kubeconfig file by replacing the IP with 127.0.0.1 and the port with the load balancer's published host port.
The port's value can be found by running docker ps and checking the port mapping of the workload cluster's load balancer container.
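For example, a sketch assuming the load balancer container's name contains the cluster name (w01); the mapped host port appears in the PORTS column:
docker ps --filter "name=w01" --format "table {{.Names}}\t{{.Ports}}"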
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
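For example, to start the second cluster's config from the first:
cp eksa-w01-cluster.yaml eksa-w02-cluster.yaml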
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
3 - Create production cluster
EKS Anywhere allows you to provision and manage Amazon EKS on your own infrastructure.
To get started with different production-quality EKS Anywhere providers, choose from the providers below:
3.1 - Create Bare Metal production cluster
Create a production-quality cluster on Bare Metal
EKS Anywhere supports a Bare Metal provider for production grade EKS Anywhere deployments.
EKS Anywhere allows you to provision and manage Kubernetes clusters based on Amazon EKS software on your own infrastructure.
This document walks you through setting up EKS Anywhere on Bare Metal as a standalone, self-managed cluster or combined set of management/workload clusters.
See Cluster topologies
for details.
Prerequisite checklist
EKS Anywhere needs:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or self-managed cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).
-
Set an environment variable for your cluster name
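For example, matching the mgmt name used for this cluster later in this section:
export CLUSTER_NAME=mgmt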
-
Generate a cluster config file for your Bare Metal provider (using tinkerbell as the provider type).
eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider tinkerbell > eksa-mgmt-cluster.yaml
-
Modify the cluster config (eksa-mgmt-cluster.yaml
) by referring to the Bare Metal configuration
reference documentation.
-
Set License Environment Variable
If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
After you have created your eksa-mgmt-cluster.yaml
and set your credential environment variables, you will be ready to create the cluster.
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create the cluster, using the hardware.csv
file you made in Bare Metal preparation
:
eksctl anywhere create cluster \
   --hardware-csv hardware.csv \
   -f eksa-mgmt-cluster.yaml
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane and worker nodes:
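The machines can be listed with kubectl; the -A flag shows all namespaces, matching the NAMESPACE column in the example output below:
kubectl get machines -A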
Example command output:
NAMESPACE NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
eksa-system mgmt-47zj8 mgmt eksa-node01 tinkerbell://eksa-system/eksa-node01 Running 1h v1.23.7-eks-1-23-4
eksa-system mgmt-md-0-7f79df46f-wlp7w mgmt eksa-node02 tinkerbell://eksa-system/eksa-node02 Running 1h v1.23.7-eks-1-23-4
...
-
Check the cluster:
You can now use the cluster as you would any Kubernetes cluster.
To try it out, run the test application with:
export CLUSTER_NAME=mgmt
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in Deploy test workload
.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider tinkerbell > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
Ensure workload cluster object names (Cluster
, TinkerbellDatacenterConfig
, TinkerbellMachineConfig
, etc.) are distinct from management cluster object names. Be sure to set the managementCluster
field to identify the name of the management cluster. Keep the tinkerbellIP of the workload cluster the same as the tinkerbellIP of the management cluster.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster
To create a new workload cluster from your management cluster run this command, identifying:
- The workload cluster YAML file
- The initial cluster’s credentials (this causes the workload cluster to be managed from the management cluster)
With hardware CSV
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --hardware-csv <hardware.csv> \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
Without hardware CSV
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
Change your credentials to point to the new workload cluster (for example, w01), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
3.2 - Create CloudStack production cluster
Create a production-quality cluster on CloudStack
EKS Anywhere supports a CloudStack provider for production grade EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on CloudStack in a way that:
- Deploys an initial cluster on your CloudStack environment. That cluster can be used as a standalone cluster (to run workloads) or a management cluster (to create and manage other clusters)
- Deploys zero or more workload clusters from the management cluster
If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters.
Using a management cluster makes it faster to provision and delete workload clusters.
It also lets you keep CloudStack credentials for a set of clusters in one place: on the management cluster.
The alternative is to simply use your initial cluster to run workloads.
See Cluster topologies
for details.
Important
Creating an EKS Anywhere management cluster is the recommended model.
Separating management features into a separate, persistent management cluster
provides a cleaner model for managing the lifecycle of workload clusters (to create, upgrade, and delete clusters), while workload clusters run user applications.
This approach also reduces provider permissions for workload clusters.
Prerequisite Checklist
EKS Anywhere needs to:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or standalone cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a standalone cluster (for running workloads itself).
-
Generate an initial cluster config (named mgmt
for this example):
export CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider cloudstack > eksa-mgmt-cluster.yaml
-
Create credential file
Create a credential file (for example, cloud-config
) and add the credentials needed to access your CloudStack environment. The file should include:
- api-key: Obtained from CloudStack
- secret-key: Obtained from CloudStack
- api-url: The URL to your CloudStack API endpoint
For example:
[Global]
api-key = -Dk5uB0DE3aWng
secret-key = -0DQLunsaJKxCEEHn44XxP80tv6v_RB0DiDtdgwJ
api-url = http://172.16.0.1:8080/client/api
You can have multiple credential entries.
To match this example, you would enter global
as the credentialsRef in the cluster config file for your CloudStack availability zone. You can configure multiple credentials for multiple availability zones.
-
Modify the initial cluster config (eksa-mgmt-cluster.yaml
) as follows:
- Refer to Cloudstack configuration
for information on configuring this cluster config for a CloudStack provider.
- Add Optional
configuration settings as needed.
- Create at least two control plane nodes, three worker nodes, and three etcd nodes for a production cluster, to provide high availability and rolling upgrades.
-
Set Environment Variables
Convert the credential file into base64 and set the following environment variable to that value:
export EKSA_CLOUDSTACK_B64ENCODED_SECRET=$(base64 -i cloud-config)
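To sanity check the value, you can decode it back and compare it with your credential file (older macOS releases use base64 -D in place of --decode):
echo "$EKSA_CLOUDSTACK_B64ENCODED_SECRET" | base64 --decode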
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Disable Kubevip load balancer
Skip this step if you want to use the Kubevip load balancer with your cluster. If you want to use a different load balancer, you can disable Kubevip as follows:
export CLOUDSTACK_KUBE_VIP_DISABLED=true
-
Create cluster
eksctl anywhere create cluster \
   -f eksa-mgmt-cluster.yaml
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane, etcd, and worker nodes:
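To list the machines, you can run the following (the -A flag shows all namespaces, matching the NAMESPACE column in the example output):
kubectl get machines -A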
Example command output
NAMESPACE NAME PROVIDERID PHASE VERSION
eksa-system mgmt-b2xyz cloudstack:/xxxxx Running v1.23.1-eks-1-21-5
eksa-system mgmt-etcd-r9b42 cloudstack:/xxxxx Running
eksa-system mgmt-md-8-6xr-rnr cloudstack:/xxxxx Running v1.23.1-eks-1-21-5
...
The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
kubernetesVersion: "1.23"
managementCluster:
name: mgmt
workerNodeGroupConfigurations:
...
Note
The initial cluster is now ready to deploy workload clusters.
However, if you just want to use it to run workloads, you can deploy pod workloads directly on the initial cluster without deploying a separate workload cluster and skip the section on running separate workload clusters.
To make sure the cluster is ready to run workloads, run the test application in the
Deploy test workload section.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider cloudstack > eksa-w01-cluster.yaml
-
Modify the workload cluster config (eksa-w01-cluster.yaml
) as follows.
Refer to the initial config described earlier for the required and optional settings. In particular:
- Ensure workload cluster object names (Cluster, CloudStackDatacenterConfig, CloudStackMachineConfig, etc.) are distinct from management cluster object names.
- Be sure to set the managementCluster field to identify the name of the management cluster.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster
To create a new workload cluster from your management cluster run this command, identifying:
- The workload cluster YAML file
- The initial cluster’s credentials (this causes the workload cluster to be managed from the management cluster)
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
Change your credentials to point to the new workload cluster (for example, w01), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
3.3 - Create Nutanix production cluster
Create a production-quality cluster on Nutanix Cloud Infrastructure with AHV
EKS Anywhere supports a Nutanix Cloud Infrastructure (NCI) provider for production grade EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on Nutanix Cloud Infrastructure with AHV in a way that:
- Deploys an initial cluster in your Nutanix environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
- Deploys zero or more workload clusters from the management cluster
If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters.
Using a management cluster makes it faster to provision and delete workload clusters.
It also lets you keep NCI credentials for a set of clusters in one place: on the management cluster.
The alternative is to simply use your initial cluster to run workloads.
See Cluster topologies
for details.
Important
Creating an EKS Anywhere management cluster is the recommended model.
Separating management features into a separate, persistent management cluster
provides a cleaner model for managing the lifecycle of workload clusters (to create, upgrade, and delete clusters), while workload clusters run user applications.
This approach also reduces provider permissions for workload clusters.
Prerequisite Checklist
EKS Anywhere needs to:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or self-managed cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).
-
Generate an initial cluster config (named mgmt
for this example):
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider nutanix > eksa-mgmt-cluster.yaml
-
Modify the initial cluster config (eksa-mgmt-cluster.yaml
) as follows:
- Refer to Nutanix configuration
for information on configuring this cluster config for a Nutanix provider.
- Add Optional
configuration settings as needed.
- Create at least three control plane nodes and three worker nodes for a production cluster, to provide high availability and rolling upgrades.
-
Set Credential Environment Variables
Before you create the initial cluster, you will need to set and export these environment variables for your Nutanix Prism Central user name and password.
Make sure you use single quotes around the values so that your shell does not interpret the values:
export EKSA_NUTANIX_USERNAME='billy'
export EKSA_NUTANIX_PASSWORD='t0p$ecret'
Note
If you have a username in the form of domain_name/user_name
, you must specify it as user_name@domain_name
to
avoid errors in cluster creation. For example, nutanix.local/admin
should be specified as admin@nutanix.local
.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
After you have created your eksa-mgmt-cluster.yaml
and set your credential environment variables, you will be ready to create the cluster.
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create cluster
eksctl anywhere create cluster \
   -f eksa-mgmt-cluster.yaml
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
-
Once the cluster is created, you can access it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster is ready, list the machines to see the control plane, and worker nodes:
kubectl get machines -n eksa-system
Example command output
NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
mgmt-4gtt2 mgmt mgmt-control-plane-1670343878900-2m4ln nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-d42xn mgmt mgmt-control-plane-1670343878900-jbfxt nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-md-0-9868m mgmt mgmt-md-0-1670343878901-lkmxw nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-md-0-njpk2 mgmt mgmt-md-0-1670343878901-9clbz nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-md-0-p4gp2 mgmt mgmt-md-0-1670343878901-mbktx nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
mgmt-zkwrr mgmt mgmt-control-plane-1670343878900-jrdkk nutanix://xxxx Running 11m v1.24.7-eks-1-24-4
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the cluster CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
kubernetesVersion: "1.24"
managementCluster:
name: mgmt
workerNodeGroupConfigurations:
...
Note
The initial cluster is now ready to deploy workload clusters.
However, if you just want to use it to run workloads, you can deploy pod workloads directly on the initial cluster without deploying a separate workload cluster and skip the section on running separate workload clusters.
To make sure the cluster is ready to run workloads, run the test application in the
Deploy test workload section.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider nutanix > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
Ensure workload cluster object names (Cluster
, NutanixDatacenterConfig
, NutanixMachineConfig
, etc.) are distinct from management cluster object names. Be sure to set the managementCluster
field to identify the name of the management cluster.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster
To create a new workload cluster from your management cluster run this command, identifying:
- The workload cluster YAML file
- The initial cluster’s kubeconfig (this causes the workload cluster to be managed from the management cluster)
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
Check the workload cluster:
You can now use the workload cluster as you would any Kubernetes cluster.
Change your kubeconfig to point to the new workload cluster (for example, w01
), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
Verify the test application in the deploy test application section.
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.
3.4 - Create vSphere production cluster
Create a production-quality cluster on VMware vSphere
EKS Anywhere supports a VMware vSphere provider for production grade EKS Anywhere deployments.
This document walks you through setting up EKS Anywhere on vSphere in a way that:
- Deploys an initial cluster on your vSphere environment. That cluster can be used as a self-managed cluster (to run workloads) or a management cluster (to create and manage other clusters)
- Deploys zero or more workload clusters from the management cluster
If your initial cluster is a management cluster, it is intended to stay in place so you can use it later to modify, upgrade, and delete workload clusters.
Using a management cluster makes it faster to provision and delete workload clusters.
It also lets you keep vSphere credentials for a set of clusters in one place: on the management cluster.
The alternative is to simply use your initial cluster to run workloads.
See Cluster topologies
for details.
Important
Creating an EKS Anywhere management cluster is the recommended model.
Separating management features into a separate, persistent management cluster
provides a cleaner model for managing the lifecycle of workload clusters (to create, upgrade, and delete clusters), while workload clusters run user applications.
This approach also reduces provider permissions for workload clusters.
Prerequisite Checklist
EKS Anywhere needs to:
Also, see the Ports and protocols
page for information on ports that need to be accessible from control plane, worker, and Admin machines.
Steps
The following steps are divided into two sections:
- Create an initial cluster (used as a management or self-managed cluster)
- Create zero or more workload clusters from the management cluster
Create an initial cluster
Follow these steps to create an EKS Anywhere cluster that can be used either as a management cluster or as a self-managed cluster (for running workloads itself).
-
Generate an initial cluster config (named mgmt
for this example):
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider vsphere > eksa-mgmt-cluster.yaml
-
Modify the initial cluster config (eksa-mgmt-cluster.yaml
) as follows:
- Refer to vsphere configuration
for information on configuring this cluster config for a vSphere provider.
- Add Optional
configuration settings as needed.
See Github provider
to see how to identify your Git information.
- Create at least two control plane nodes, three worker nodes, and three etcd nodes for a production cluster, to provide high availability and rolling upgrades.
-
Set Credential Environment Variables
Before you create the initial cluster, you will need to set and export these environment variables for your vSphere user name and password.
Make sure you use single quotes around the values so that your shell does not interpret the values:
export EKSA_VSPHERE_USERNAME='billy'
export EKSA_VSPHERE_PASSWORD='t0p$ecret'
Note
If you have a username in the form of domain_name/user_name
, you must specify it as user_name@domain_name
to
avoid errors in cluster creation. For example, vsphere.local/admin
should be specified as admin@vsphere.local
.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
After you have created your eksa-mgmt-cluster.yaml
and set your credential environment variables, you will be ready to create the cluster.
-
Configure Curated Packages
The Amazon EKS Anywhere Curated Packages are only available to customers with the Amazon EKS Anywhere Enterprise Subscription. To request a free trial, talk to your Amazon representative or connect with one here
. Cluster creation will succeed if authentication is not set up, but some warnings may be generated. Detailed package configurations can be found here
.
If you are going to use packages, set up authentication. These credentials should have limited capabilities
:
export EKSA_AWS_ACCESS_KEY_ID="your*access*id"
export EKSA_AWS_SECRET_ACCESS_KEY="your*secret*key"
export EKSA_AWS_REGION="us-west-2"
-
Create cluster
eksctl anywhere create cluster \
   -f eksa-mgmt-cluster.yaml
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
-
Once the cluster is created you can use it with the generated KUBECONFIG
file in your local directory:
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
-
Check the cluster nodes:
To check that the cluster completed, list the machines to see the control plane, etcd, and worker nodes:
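To list the machines, you can run the following (the -A flag shows all namespaces, matching the NAMESPACE column in the example output):
kubectl get machines -A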
Example command output
NAMESPACE NAME PROVIDERID PHASE VERSION
eksa-system mgmt-b2xyz vsphere:/xxxxx Running v1.24.2-eks-1-24-5
eksa-system mgmt-etcd-r9b42 vsphere:/xxxxx Running
eksa-system mgmt-md-8-6xr-rnr vsphere:/xxxxx Running v1.24.2-eks-1-24-5
...
The etcd machine doesn’t show the Kubernetes version because it doesn’t run the kubelet service.
-
Check the initial cluster’s CRD:
To ensure you are looking at the initial cluster, list the CRD to see that the name of its management cluster is itself:
kubectl get clusters mgmt -o yaml
Example command output
...
kubernetesVersion: "1.24"
managementCluster:
name: mgmt
workerNodeGroupConfigurations:
...
Note
The initial cluster is now ready to deploy workload clusters.
However, if you just want to use it to run workloads, you can deploy pod workloads directly on the initial cluster without deploying a separate workload cluster and skip the section on running separate workload clusters.
To make sure the cluster is ready to run workloads, run the test application in the
Deploy test workload section.
Create separate workload clusters
Follow these steps if you want to use your initial cluster to create and manage separate workload clusters.
-
Generate a workload cluster config:
CLUSTER_NAME=w01
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
--provider vsphere > eksa-w01-cluster.yaml
Refer to the initial config described earlier for the required and optional settings.
NOTE: Ensure workload cluster object names (Cluster
, vSphereDatacenterConfig
, vSphereMachineConfig
, etc.) are distinct from management cluster object names. Be sure to set the managementCluster
field to identify the name of the management cluster.
-
Set License Environment Variable
Add a license to any cluster for which you want to receive paid support. If you are creating a licensed cluster, set and export the license variable (see License cluster
if you are licensing an existing cluster):
export EKSA_LICENSE='my-license-here'
-
Create a workload cluster in one of the following ways. See Manage cluster with GitOps for more details on the GitOps approach.
- eksctl CLI: Useful for temporary cluster configurations. To create a workload cluster with
eksctl
, run:
eksctl anywhere create cluster \
   -f eksa-w01-cluster.yaml \
   --kubeconfig mgmt/mgmt-eks-a-cluster.kubeconfig
# Add --install-packages packages.yaml to the command above to install curated packages at cluster creation
As noted earlier, adding the --kubeconfig
option tells eksctl
to use the management cluster identified by that kubeconfig file to create a different workload cluster.
-
To check the workload cluster, get the workload cluster credentials and run a test workload:
-
If your workload cluster was created with eksctl
,
change your credentials to point to the new workload cluster (for example, w01
), then run the test application with:
export CLUSTER_NAME=w01
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
If your workload cluster was created with GitOps, you can get credentials and run the test application as follows:
kubectl get secret -n eksa-system w01-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > w01.kubeconfig
export KUBECONFIG=w01.kubeconfig
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
-
Add more workload clusters:
To add more workload clusters, go through the same steps for creating the initial workload, copying the config file to a new name (such as eksa-w02-cluster.yaml
), modifying resource names, and running the create cluster command again.
Next steps:
-
See the Cluster management
section for more information on common operational tasks like scaling and deleting the cluster.
-
See the Package management
section for more information on post-creation curated packages installation.