{"id":831,"date":"2021-02-20T16:06:05","date_gmt":"2021-02-20T08:06:05","guid":{"rendered":"https:\/\/vinta.ws\/code\/?p=831"},"modified":"2026-03-17T00:03:34","modified_gmt":"2026-03-16T16:03:34","slug":"amazon-eks-create-a-kubernetes-cluster-via-clusterconfig","status":"publish","type":"post","link":"https:\/\/vinta.ws\/code\/amazon-eks-create-a-kubernetes-cluster-via-clusterconfig.html","title":{"rendered":"Amazon EKS: Manage Kubernetes Cluster with ClusterConfig (2025 Version)"},"content":{"rendered":"<p>Amazon Elastic Kubernetes Service (EKS) is AWS's managed Kubernetes service, the counterpart of Google Kubernetes Engine (GKE) on Google Cloud.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/getting-started.html\">https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/getting-started.html<\/a><br \/>\n<a href=\"https:\/\/vinta.ws\/code\/the-complete-guide-to-google-kubernetes-engine-gke.html\">https:\/\/vinta.ws\/code\/the-complete-guide-to-google-kubernetes-engine-gke.html<\/a><\/p>\n<h2>Install CLI Tools<\/h2>\n<p>You need to install some command-line tools. It's also best to keep the client and server on the same Kubernetes version. 
Otherwise, you will get something like: <code>WARNING: version difference between client (1.31) and server (1.33) exceeds the supported minor version skew of +\/-1<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># kubectl (latest stable)\ncurl -LO \"https:\/\/dl.k8s.io\/release\/$(curl -L -s https:\/\/dl.k8s.io\/release\/stable.txt)\/bin\/darwin\/$(arch)\/kubectl\"\n# or pin a specific version\ncurl -LO \"https:\/\/dl.k8s.io\/release\/v1.32.7\/bin\/darwin\/arm64\/kubectl\"\nsudo install -m 0755 kubectl \/usr\/local\/bin\n\n# eksctl\ncurl -LO \"https:\/\/github.com\/weaveworks\/eksctl\/releases\/latest\/download\/eksctl_$(uname -s)_$(arch).tar.gz\"\ntar -xzf \"eksctl_$(uname -s)_$(arch).tar.gz\"\nsudo install -m 0755 eksctl \/usr\/local\/bin\n\n# awscli\ncurl \"https:\/\/awscli.amazonaws.com\/AWSCLIV2.pkg\" -o \"AWSCLIV2.pkg\"\nsudo installer -pkg AWSCLIV2.pkg -target \/<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl-macos\/\">https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl-macos\/<\/a><br \/>\n<a href=\"https:\/\/github.com\/eksctl-io\/eksctl\">https:\/\/github.com\/eksctl-io\/eksctl<\/a><br \/>\n<a href=\"https:\/\/docs.aws.amazon.com\/cli\/latest\/userguide\/getting-started-install.html\">https:\/\/docs.aws.amazon.com\/cli\/latest\/userguide\/getting-started-install.html<\/a><\/p>\n<p>The following tools are\u00a0also recommended; they provide fancy terminal UIs for interacting with your Kubernetes clusters.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># k9s\nbrew install derailed\/k9s\/k9s\n\n# fubectl\ncurl -LO https:\/\/rawgit.com\/kubermatic\/fubectl\/master\/fubectl.source\nsource &lt;path-to&gt;\/fubectl.source<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/derailed\/k9s\">https:\/\/github.com\/derailed\/k9s<\/a><br \/>\n<a href=\"https:\/\/github.com\/kubermatic\/fubectl\">https:\/\/github.com\/kubermatic\/fubectl<\/a><\/p>\n<h2>Create EKS Cluster<\/h2>\n<p>Use 
<code>ClusterConfig<\/code> to define the cluster.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\"># https:\/\/eksctl.io\/usage\/schema\/\napiVersion: eksctl.io\/v1alpha5\nkind: ClusterConfig\nmetadata:\n  name: your-cluster\n  region: ap-northeast-1\n  version: \"1.33\"\n\n# https:\/\/eksctl.io\/usage\/fargate-support\/\nfargateProfiles:\n  - name: fp-default\n    selectors:\n      # all workloads in \"fargate\" Kubernetes namespace will be scheduled onto Fargate\n      - namespace: fargate\n\n# https:\/\/eksctl.io\/usage\/nodegroup-managed\/\nmanagedNodeGroups:\n  - name: mng-m5-xlarge\n    instanceType: m5.xlarge\n    spot: true\n    minSize: 1\n    maxSize: 10\n    desiredCapacity: 2\n    volumeSize: 100\n    nodeRepairConfig:\n      enabled: true\n    iam:\n      withAddonPolicies:\n        autoScaler: true\n        awsLoadBalancerController: true\n        cloudWatch: true\n        ebs: true\n      attachPolicyARNs:\n        # default node policies (required when explicitly setting attachPolicyARNs)\n        - arn:aws:iam::aws:policy\/AmazonEKSWorkerNodePolicy\n        - arn:aws:iam::aws:policy\/AmazonEKS_CNI_Policy\n        - arn:aws:iam::aws:policy\/AmazonEC2ContainerRegistryPullOnly\n        # custom policies\n        - arn:aws:iam::xxx:policy\/your-custom-policy\n    labels:\n      service-node: \"true\"\n\n# https:\/\/eksctl.io\/usage\/cloudwatch-cluster-logging\/\ncloudWatch:\n  clusterLogging:\n    enableTypes: [\"api\", \"audit\", \"authenticator\"]\n    logRetentionInDays: 30\n\n# https:\/\/eksctl.io\/usage\/addons\/\naddons:\n  # networking addons are enabled by default\n  # - name: kube-proxy\n  # - name: coredns\n  # - name: vpc-cni\n  - name: amazon-cloudwatch-observability\n  - name: aws-ebs-csi-driver\n  - name: eks-pod-identity-agent\n  - name: metrics-server\naddonsConfig:\n  autoApplyPodIdentityAssociations: true\n\n# https:\/\/eksctl.io\/usage\/iamserviceaccounts\/\niam:\n  # not all addons support Pod Identity Association, 
so you probably still need OIDC\n  withOIDC: true\n\n# https:\/\/eksctl.io\/usage\/kms-encryption\/\nsecretsEncryption:\n  keyARN: arn:aws:kms:YOUR_ARN<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># preview\nAWS_PROFILE=perp eksctl create cluster -f cluster-config.yaml --dry-run\n\n# create\neksctl create cluster -f cluster-config.yaml --profile=perp<\/code><\/pre>\n<p>If a nodegroup includes <code>attachPolicyARNs<\/code>, it <strong>must<\/strong> also include the default node policies, like <code>AmazonEKSWorkerNodePolicy<\/code>, <code>AmazonEKS_CNI_Policy<\/code> and <code>AmazonEC2ContainerRegistryPullOnly<\/code>.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/schema\/\">https:\/\/eksctl.io\/usage\/schema\/<\/a><br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/iam-policies\/\">https:\/\/eksctl.io\/usage\/iam-policies\/<\/a><br \/>\n<a href=\"https:\/\/github.com\/weaveworks\/eksctl\/tree\/main\/examples\">https:\/\/github.com\/weaveworks\/eksctl\/tree\/main\/examples<\/a><\/p>\n<h3>Managed Nodegroups<\/h3>\n<p>You could re-use ClusterConfig to create more managed nodegroups after cluster creation. However, you can only create new ones: most fields under the <code>managedNodeGroups<\/code> section cannot be changed once a nodegroup is created. 
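<\/p>\n<p>Since existing nodegroups are immutable, the usual workflow is to roll over: define a replacement nodegroup in the config, create it, then drain and delete the old one. A rough sketch, where the nodegroup names are placeholders:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># create only the new nodegroup from the updated config\neksctl create nodegroup -f cluster-config.yaml --include=mng-m5-xlarge-v2 --profile=perp\n\n# delete the old one; eksctl cordons and drains its nodes by default\neksctl delete nodegroup --region ap-northeast-1 --cluster your-cluster --name mng-m5-xlarge --profile=perp<\/code><\/pre>\n<p>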
If you need to tweak something, just add a new one.<\/p>\n<p>Also, if you're not familiar with instance types on AWS, try <strong>Instance Selector<\/strong> where you only need to specify how much CPU, memory, and GPU you want.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">managedNodeGroups:\n  - name: mng-spot-2c4g\n    instanceSelector:\n      vCPUs: 2\n      memory: 4GiB\n      gpus: 0\n    spot: true\n    minSize: 1\n    maxSize: 5\n    desiredCapacity: 2\n    volumeSize: 100\n    nodeRepairConfig:\n      enabled: true<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-bash\">eksctl create nodegroup -f cluster-config.yaml --profile=perp<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/nodegroup-managed\/\">https:\/\/eksctl.io\/usage\/nodegroup-managed\/<\/a><br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/instance-selector\/\">https:\/\/eksctl.io\/usage\/instance-selector\/<\/a><\/p>\n<h3>Addons<\/h3>\n<p>When a cluster is created, EKS automatically installs <code>vpc-cni<\/code>, <code>coredns<\/code> and <code>kube-proxy<\/code> as self-managed addons. 
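<\/p>\n<p>To see which addons (and versions) a cluster is currently running:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># list installed addons with their versions\neksctl get addon --region ap-northeast-1 --cluster your-cluster --profile=perp\n\n# list versions of an addon compatible with a given cluster version\naws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.33 --profile=perp<\/code><\/pre>\n<p>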
Here are other common addons you probably want to install as well:<\/p>\n<ul>\n<li><code>eks-pod-identity-agent<\/code>: Manages IAM permissions for pods using Pod Identity Associations instead of OIDC<\/li>\n<li><code>amazon-cloudwatch-observability<\/code>: Collects and sends container metrics, logs, and traces to CloudWatch Container Insights (Logs Insights)<\/li>\n<li><code>aws-ebs-csi-driver<\/code>: Enables pods to use EBS volumes for persistent storage through Kubernetes PersistentVolumes<\/li>\n<li><code>metrics-server<\/code>: Provides resource metrics (CPU\/memory) for HPA, VPA and <code>kubectl top<\/code> commands<\/li>\n<\/ul>\n<p>If not specified explicitly, addons will be created with a role that has all recommended policies attached.<\/p>\n<p>It's worth noting that <strong>EKS Pod Identity Associations<\/strong> is AWS's newer, simpler way for Kubernetes pods to assume IAM roles, replacing the older IRSA (IAM Roles for Service Accounts) method.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/addons\/\">https:\/\/eksctl.io\/usage\/addons\/<\/a><br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/pod-identity-associations\/\">https:\/\/eksctl.io\/usage\/pod-identity-associations\/<\/a><br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/iamserviceaccounts\/\">https:\/\/eksctl.io\/usage\/iamserviceaccounts\/<\/a><\/p>\n<h3>IAM Permissions<\/h3>\n<p>Pods running on regular nodes will behave as <code>NodeInstanceRole<\/code>; pods running on Fargate nodes will use <code>FargatePodExecutionRole<\/code>. 
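<\/p>\n<p>If a workload needs its own AWS permissions (rather than inheriting the node's role), you can declare a Pod Identity Association in ClusterConfig. A minimal sketch; the namespace, service account, and policy here are placeholders:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">iam:\n  podIdentityAssociations:\n    # pods running under this service account assume a role with these policies\n    - namespace: your-namespace\n      serviceAccountName: your-service-account\n      permissionPolicyARNs:\n        - arn:aws:iam::aws:policy\/AmazonS3ReadOnlyAccess<\/code><\/pre>\n<p>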
If you would like to adjust IAM permissions for a nodegroup, use the following command to find out which IAM role it's using:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">aws eks describe-nodegroup \\\n    --region ap-northeast-1 \\\n    --cluster-name your-cluster \\\n    --nodegroup-name mng-m5-xlarge \\\n    --query 'nodegroup.nodeRole' \\\n    --output text \\\n    --profile=perp<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/create-node-role.html\">https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/create-node-role.html<\/a><\/p>\n<h2>Access EKS Cluster<\/h2>\n<p>Instead of using the old <code>aws-auth<\/code> ConfigMap, you could use <strong>Access Entries<\/strong> to map AWS IAM users\/roles to Kubernetes permissions. Note that the user who ran <code>eksctl create cluster<\/code> should leave himself\/herself out of the access entries: EKS adds that user automatically.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">apiVersion: eksctl.io\/v1alpha5\nkind: ClusterConfig\nmetadata:\n  name: your-cluster\n  region: ap-northeast-1\n\naccessConfig:\n  authenticationMode: API\n  accessEntries:\n    # - principalARN: arn:aws:iam::xxx:user\/user1\n    #   accessPolicies:\n    #     - policyARN: arn:aws:eks::aws:cluster-access-policy\/AmazonEKSClusterAdminPolicy\n    #       accessScope:\n    #         type: cluster\n    - principalARN: arn:aws:iam::xxx:user\/user2\n      accessPolicies:\n        - policyARN: arn:aws:eks::aws:cluster-access-policy\/AmazonEKSAdminPolicy\n          accessScope:\n            type: cluster\n    - principalARN: arn:aws:iam::xxx:user\/user3\n      accessPolicies:\n        - policyARN: arn:aws:eks::aws:cluster-access-policy\/AmazonEKSViewPolicy\n          accessScope:\n            type: cluster\n    - principalARN: arn:aws:iam::xxx:user\/production-eks-deploy\n      type: STANDARD<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># 
list\neksctl get accessentry --region ap-northeast-1 --cluster your-cluster --profile perp\n\n# create\neksctl create accessentry -f access-entries.yaml --profile perp<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/access-entries\/\">https:\/\/eksctl.io\/usage\/access-entries\/<\/a><br \/>\n<a href=\"https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/access-policies.html\">https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/access-policies.html<\/a><\/p>\n<p>Connect <code>kubectl<\/code> to your EKS cluster by creating a <code>kubeconfig<\/code> file.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># setup context for a cluster\naws eks update-kubeconfig --region ap-northeast-1 --name your-cluster --alias your-cluster --profile=perp\n\n# switch cluster context\nkubectl config use-context your-cluster\n\nkubectl get nodes\n\ncat ~\/.kube\/config<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/create-kubeconfig.html\">https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/create-kubeconfig.html<\/a><\/p>\n<h2>Upgrade EKS Cluster<\/h2>\n<p>After upgrading your EKS cluster on AWS Management Console, you might find <code>kube-proxy<\/code> is still using the old image from the previous Kubernetes version - just upgrade them manually. 
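<\/p>\n<p>To check which image <code>kube-proxy<\/code> is currently running:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">kubectl get daemonset kube-proxy -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'<\/code><\/pre>\n<p>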
Or if you are going to create an ARM-based nodegroup, you might also need to upgrade core addons first:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">eksctl utils update-aws-node --region ap-northeast-1 --cluster your-cluster --approve --profile=perp\neksctl utils update-coredns --region ap-northeast-1 --cluster your-cluster --approve --profile=perp\neksctl utils update-kube-proxy --region ap-northeast-1 --cluster your-cluster --approve --profile=perp\n\nkubectl get pods -n kube-system<\/code><\/pre>\n<p>If you get <code>Error: ImagePullBackOff<\/code>, try the following commands and replace the version (and region) with the version listed in <a href=\"https:\/\/docs.aws.amazon.com\/eks\/latest\/userguide\/managing-kube-proxy.html#kube-proxy-latest-versions-table\">Latest available kube-proxy container image version for each Amazon EKS cluster version<\/a>:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">kubectl describe daemonset kube-proxy -n kube-system | grep Image\nkubectl set image daemonset.apps\/kube-proxy -n kube-system \\\nkube-proxy=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com\/eks\/kube-proxy:v1.25.16-minimal-eksbuild.8<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/eksctl.io\/usage\/cluster-upgrade\/\">https:\/\/eksctl.io\/usage\/cluster-upgrade\/<\/a><br \/>\n<a href=\"https:\/\/github.com\/weaveworks\/eksctl\/issues\/1088\">https:\/\/github.com\/weaveworks\/eksctl\/issues\/1088<\/a><\/p>\n<h2>Delete EKS Cluster<\/h2>\n<p>You might need to manually delete\/detach the following resources in order to delete the cluster:<\/p>\n<ul>\n<li>Detach non-default policies from\u00a0<code>NodeInstanceRole<\/code> and <code>FargatePodExecutionRole<\/code> <\/li>\n<li>Fargate Profile<\/li>\n<li>EC2 Network Interfaces<\/li>\n<li>EC2 Load Balancer<\/li>\n<\/ul>\n<pre class=\"line-numbers\"><code class=\"language-bash\">eksctl delete cluster --region ap-northeast-1 --name your-cluster --disable-nodegroup-eviction 
--profile=perp<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"<p>Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes as a Service on AWS. IMAO, Google Cloud's GKE is still the best choice for managed Kubernetes service if you're not stuck in AWS.<\/p>\n","protected":false},"author":1,"featured_media":832,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[38,116],"tags":[16,136,123],"class_list":["post-831","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-about-devops","category-about-web-development","tag-amazon-web-services","tag-aws-eks","tag-kubernetes"],"_links":{"self":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/831","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/comments?post=831"}],"version-history":[{"count":0,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/831\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media\/832"}],"wp:attachment":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media?parent=831"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/categories?post=831"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/tags?post=831"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}