Amazon Elastic Kubernetes Service (EKS) is AWS's managed Kubernetes service, the counterpart of Google Kubernetes Engine (GKE) on Google Cloud.
ref:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
https://vinta.ws/code/the-complete-guide-to-google-kubernetes-engine-gke.html
Install CLI Tools
You need to install some command-line tools. You should also keep the Kubernetes versions of your client and server close to each other; otherwise you will get warnings like: WARNING: version difference between client (1.31) and server (1.33) exceeds the supported minor version skew of +/-1.
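You can check both versions at any time:
# prints both client and server versions
kubectl version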
# kubectl (on Apple Silicon, $(arch) resolves to arm64; use amd64 on Intel Macs)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/$(arch)/kubectl"
# or pin a specific version to match your cluster
curl -LO "https://dl.k8s.io/release/v1.32.7/bin/darwin/arm64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin
# eksctl
curl -LO "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_$(arch).tar.gz"
tar -xzf "eksctl_$(uname -s)_$(arch).tar.gz"
sudo install -m 0755 eksctl /usr/local/bin
# awscli
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
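Verify the installations:
kubectl version --client
eksctl version
aws --version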
ref:
https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
https://github.com/eksctl-io/eksctl
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
The following tools are also recommended; they provide fancy terminal UIs for interacting with your Kubernetes clusters.
# k9s
brew install derailed/k9s/k9s
# fubectl
curl -LO https://raw.githubusercontent.com/kubermatic/fubectl/master/fubectl.source
source <path-to>/fubectl.source
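For example, k9s can be launched directly against a specific context and namespace (see k9s --help for all flags):
k9s --context your-cluster --namespace kube-system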
ref:
https://github.com/derailed/k9s
https://github.com/kubermatic/fubectl
Create EKS Cluster
Use ClusterConfig to define the cluster.
# https://eksctl.io/usage/schema/
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: your-cluster
  region: ap-northeast-1
  version: "1.33"

# https://eksctl.io/usage/fargate-support/
fargateProfiles:
  - name: fp-default
    selectors:
      # all workloads in "fargate" Kubernetes namespace will be scheduled onto Fargate
      - namespace: fargate

# https://eksctl.io/usage/nodegroup-managed/
managedNodeGroups:
  - name: mng-m5-xlarge
    instanceType: m5.xlarge
    spot: true
    minSize: 1
    maxSize: 10
    desiredCapacity: 2
    volumeSize: 100
    nodeRepairConfig:
      enabled: true
    iam:
      withAddonPolicies:
        autoScaler: true
        awsLoadBalancerController: true
        cloudWatch: true
        ebs: true
      attachPolicyARNs:
        # default node policies (required when explicitly setting attachPolicyARNs)
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
        # custom policies
        - arn:aws:iam::xxx:policy/your-custom-policy
    labels:
      service-node: "true"

# https://eksctl.io/usage/cloudwatch-cluster-logging/
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"]
    logRetentionInDays: 30

# https://eksctl.io/usage/addons/
addons:
  # networking addons are enabled by default
  # - name: kube-proxy
  # - name: coredns
  # - name: vpc-cni
  - name: amazon-cloudwatch-observability
  - name: aws-ebs-csi-driver
  - name: eks-pod-identity-agent
  - name: metrics-server

addonsConfig:
  autoApplyPodIdentityAssociations: true

# https://eksctl.io/usage/iamserviceaccounts/
iam:
  # not all addons support Pod Identity Association, so you probably still need OIDC
  withOIDC: true

# https://eksctl.io/usage/kms-encryption/
secretsEncryption:
  keyARN: arn:aws:kms:YOUR_ARN
# preview
AWS_PROFILE=perp eksctl create cluster -f cluster-config.yaml --dry-run
# create
eksctl create cluster -f cluster-config.yaml --profile=perp
If a nodegroup includes attachPolicyARNs, it must also include the default node policies: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryPullOnly.
ref:
https://eksctl.io/usage/schema/
https://eksctl.io/usage/iam-policies/
https://github.com/weaveworks/eksctl/tree/main/examples
Managed Nodegroups
You can re-use the ClusterConfig to create more managed nodegroups after cluster creation. However, you can only create, not update: most fields under the managedNodeGroups section cannot be changed once a nodegroup exists. If you need to tweak something, just add a new one and remove the old one (example below).
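For example, once a replacement nodegroup is up, drain and delete the old one (cluster and nodegroup names are the placeholders used above):
eksctl delete nodegroup --region ap-northeast-1 --cluster your-cluster --name mng-m5-xlarge --profile=perp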
Also, if you're not familiar with instance types on AWS, try Instance Selector: you only specify how much CPU, memory and GPU you want, and eksctl picks matching instance types.
managedNodeGroups:
  - name: mng-spot-2c4g
    instanceSelector:
      vCPUs: 2
      memory: 4GiB
      gpus: 0
    spot: true
    minSize: 1
    maxSize: 5
    desiredCapacity: 2
    volumeSize: 100
    nodeRepairConfig:
      enabled: true
eksctl create nodegroup -f cluster-config.yaml --profile=perp
ref:
https://eksctl.io/usage/nodegroup-managed/
https://eksctl.io/usage/instance-selector/
Addons
When a cluster is created, EKS automatically installs vpc-cni, coredns and kube-proxy as self-managed addons. Here are other common addons you probably want to install as well:
- eks-pod-identity-agent: Manages IAM permissions for pods using Pod Identity Associations instead of OIDC
- amazon-cloudwatch-observability: Collects and sends container metrics, logs, and traces to CloudWatch Container Insights (Logs Insights)
- aws-ebs-csi-driver: Enables pods to use EBS volumes for persistent storage through Kubernetes PersistentVolumes (sketch below)
- metrics-server: Provides resource metrics (CPU/memory) for HPA, VPA and kubectl top commands (example below)
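With metrics-server installed, resource usage is available immediately:
kubectl top nodes
kubectl top pods -A
And for the EBS CSI driver, a minimal StorageClass/PersistentVolumeClaim sketch (names and sizes are only illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: your-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi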
If not specified explicitly, addons will be created with a role that has all recommended policies attached.
It's worth noting that EKS Pod Identity Associations is AWS's newer, simpler way for Kubernetes pods to assume IAM roles, replacing the older IRSA (IAM Roles for Service Accounts) method.
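For your own workloads, Pod Identity Associations can be declared in the same ClusterConfig. A minimal sketch following the pod-identity-associations docs - the namespace, service account and policy here are placeholders:
iam:
  podIdentityAssociations:
    - namespace: your-namespace
      serviceAccountName: your-service-account
      createServiceAccount: true
      permissionPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess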
ref:
https://eksctl.io/usage/addons/
https://eksctl.io/usage/pod-identity-associations/
https://eksctl.io/usage/iamserviceaccounts/
IAM Permissions
Pods running on regular nodes assume the nodegroup's NodeInstanceRole; pods running on Fargate nodes use the FargatePodExecutionRole. If you would like to adjust IAM permissions for a nodegroup, use the following command to find out which IAM role it is using:
aws eks describe-nodegroup \
    --region ap-northeast-1 \
    --cluster-name your-cluster \
    --nodegroup-name mng-m5-xlarge \
    --query 'nodegroup.nodeRole' \
    --output text \
    --profile=perp
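The command prints the role ARN; aws iam attach-role-policy takes the role name, which is the last segment of that ARN. For example, to attach an extra policy (role name and policy ARN here are placeholders):
aws iam attach-role-policy \
    --role-name eksctl-your-cluster-NodeInstanceRole-xxx \
    --policy-arn arn:aws:iam::xxx:policy/your-custom-policy \
    --profile=perp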
ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
Access EKS Cluster
Instead of using the old aws-auth ConfigMap, you can use Access Entries to manage users/roles between AWS IAM and Kubernetes. Note that the user who runs eksctl create cluster should leave him/herself out of the access entries: EKS adds that user automatically as the cluster creator.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: your-cluster
  region: ap-northeast-1

accessConfig:
  authenticationMode: API
  accessEntries:
    # - principalARN: arn:aws:iam::xxx:user/user1
    #   accessPolicies:
    #     - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
    #       accessScope:
    #         type: cluster
    - principalARN: arn:aws:iam::xxx:user/user2
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxx:user/user3
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxx:user/production-eks-deploy
      type: STANDARD
# list
eksctl get accessentry --region ap-northeast-1 --cluster your-cluster --profile perp
# create
eksctl create accessentry -f access-entries.yaml --profile perp
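Entries can also be removed later:
# delete
eksctl delete accessentry --region ap-northeast-1 --cluster your-cluster --principal-arn arn:aws:iam::xxx:user/user3 --profile perp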
ref:
https://eksctl.io/usage/access-entries/
https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html
Connect kubectl to your EKS cluster by creating a kubeconfig file.
# setup context for a cluster
aws eks update-kubeconfig --region ap-northeast-1 --name your-cluster --alias your-cluster --profile=perp
# switch cluster context
kubectl config use-context your-cluster
kubectl get nodes
cat ~/.kube/config
ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Upgrade EKS Cluster
After upgrading your EKS cluster on the AWS Management Console, you might find kube-proxy is still using the old image from the previous Kubernetes version - just upgrade it manually. Or if you are going to create an ARM-based nodegroup, you might also need to upgrade the core addons first:
eksctl utils update-aws-node --region ap-northeast-1 --cluster your-cluster --approve --profile=perp
eksctl utils update-coredns --region ap-northeast-1 --cluster your-cluster --approve --profile=perp
eksctl utils update-kube-proxy --region ap-northeast-1 --cluster your-cluster --approve --profile=perp
kubectl get pods -n kube-system
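The control-plane upgrade itself can also be driven by eksctl instead of the Management Console, one minor version at a time:
eksctl upgrade cluster --region ap-northeast-1 --name your-cluster --version 1.33 --approve --profile=perp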
If kube-proxy gets stuck in Error: ImagePullBackOff after the update, try the following commands, replacing the version (and region) with the one listed in "Latest available kube-proxy container image version for each Amazon EKS cluster version":
kubectl describe daemonset kube-proxy -n kube-system | grep Image
kubectl set image daemonset.apps/kube-proxy -n kube-system \
    kube-proxy=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/eks/kube-proxy:v1.25.16-minimal-eksbuild.8
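Then wait for the DaemonSet to finish rolling out the new image:
kubectl rollout status daemonset/kube-proxy -n kube-system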
ref:
https://eksctl.io/usage/cluster-upgrade/
https://github.com/weaveworks/eksctl/issues/1088
Delete EKS Cluster
You might need to manually delete/detach the following resources in order to delete the cluster:
- Detach non-default policies from NodeInstanceRole and FargatePodExecutionRole
- Fargate Profile
- EC2 Network Interfaces
- EC2 Load Balancer
eksctl delete cluster --region ap-northeast-1 --name your-cluster --disable-nodegroup-eviction --profile=perp
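If the deletion hangs, leftover network interfaces are a common culprit. To list ENIs still attached to the cluster's VPC (the VPC ID is a placeholder):
aws ec2 describe-network-interfaces \
    --region ap-northeast-1 \
    --filters Name=vpc-id,Values=vpc-xxxxxxxx \
    --query 'NetworkInterfaces[].NetworkInterfaceId' \
    --output text \
    --profile=perp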