Amazon Elastic Kubernetes Service (Amazon EKS) is AWS's managed Kubernetes service. IMAO, Google Cloud's GKE is still the best managed Kubernetes offering if you're not stuck with AWS.
ref:
https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
Also see:
https://vinta.ws/code/the-complete-guide-to-google-kubernetes-engine-gke.html
Installation
We need to install some command-line tools: aws, eksctl, and kubectl.
brew tap weaveworks/tap
brew install awscli weaveworks/tap/eksctl kubernetes-cli
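A quick sanity check that the tools are installed and on your PATH (exact version numbers will vary):
aws --version
eksctl version
kubectl version --client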
k9s and fubectl are also recommended; they provide fancy terminal UIs for interacting with your Kubernetes clusters.
brew install k9s
curl -LO https://rawgit.com/kubermatic/fubectl/master/fubectl.source
source <path-to>/fubectl.source
ref:
https://github.com/derailed/k9s
https://github.com/kubermatic/fubectl
Create Cluster
We use a ClusterConfig to define our cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: perp-staging
  region: ap-northeast-1
# All workloads in the "fargate" Kubernetes namespace will be scheduled onto Fargate
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: fargate
# https://eksctl.io/usage/schema/
managedNodeGroups:
  - name: managed-ng-m5-4xlarge
    instanceType: m5.4xlarge
    instancePrefix: m5-4xlarge
    minSize: 1
    maxSize: 5
    desiredCapacity: 3
    volumeSize: 100
    iam:
      withAddonPolicies:
        cloudWatch: true
        albIngress: true
        ebs: true
        efs: true
# Enable envelope encryption for Kubernetes Secrets
secretsEncryption:
  keyARN: "arn:aws:kms:YOUR_KMS_ARN"
# Enable CloudWatch logging for Kubernetes components
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
Create the cluster.
eksctl create cluster --profile=perp -f clusterconfig.yaml
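Cluster creation takes a while. Once eksctl finishes, a quick way to confirm the cluster and its nodes are up (using the same profile, region, and cluster name as above):
eksctl get cluster --profile=perp --region ap-northeast-1
kubectl get nodes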
ref:
https://eksctl.io/usage/creating-and-managing-clusters/
https://github.com/weaveworks/eksctl/tree/master/examples
We can also use the same config file to update our cluster, though not every configuration change is supported yet.
eksctl upgrade cluster --profile=perp -f clusterconfig.yaml
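For instance, after changing managedNodeGroups in the config, you can list the node groups the cluster actually has (same profile and cluster name as above):
eksctl get nodegroup --profile=perp --region ap-northeast-1 --cluster perp-staging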
Access Cluster
aws eks --profile=perp update-kubeconfig \
--region ap-northeast-1 \
--name perp-staging \
--alias vinta@perp-staging
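To confirm the new context was added and works:
kubectl config get-contexts
kubectl --context vinta@perp-staging get nodes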
ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Delete Cluster
# You might need to manually delete/detach the following resources first:
# Detach non-default policies for FargatePodExecutionRole and NodeInstanceRole
# Fargate Profile
# EC2 Network Interfaces
# EC2 ALB
eksctl delete cluster --profile=perp \
--region ap-northeast-1 \
--name perp-staging
Then you can delete the CloudFormation stack in the AWS Management Console.
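As a sketch of the cleanup mentioned above, you can list and detach extra policies from the Fargate pod execution role with the AWS CLI; the role name and policy ARN below are the same placeholders used in the logging section later in this post:
aws iam list-attached-role-policies --profile=perp \
    --role-name eksctl-perp-staging-cluste-FargatePodExecutionRole-XXX
aws iam detach-role-policy --profile=perp \
    --role-name eksctl-perp-staging-cluste-FargatePodExecutionRole-XXX \
    --policy-arn arn:aws:iam::XXX:policy/eks-fargate-logging-policy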
Cluster Authentication
kubectl get configmap aws-auth -n kube-system -o yaml
We must copy mapRoles from the above ConfigMap, and add the mapUsers section:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # NOTE: mapRoles are copied from "kubectl get configmap aws-auth -n kube-system -o yaml"
  mapRoles: |
    - rolearn: YOUR_ARN_FargatePodExecutionRole
      username: system:node:{{SessionName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - system:node-proxier
    - rolearn: YOUR_ARN_NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  # Only IAM users listed here can access this cluster
  mapUsers: |
    - userarn: YOUR_USER_ARN
      username: YOUR_AWS_USERNAME
      groups:
        - system:masters
kubectl apply -f aws-auth.yaml
kubectl describe configmap -n kube-system aws-auth
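To verify that a mapped IAM user really has access, switch to that user's credentials and check what it can do (the profile name here is a placeholder):
aws sts get-caller-identity --profile=YOUR_PROFILE
kubectl auth can-i '*' '*'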
ref:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/
Set Up Container Logging for Fargate Nodes
Create an IAM policy and attach it to the pod execution role specified for your Fargate profile. The --role-name should be the name of FargatePodExecutionRole; you can find it under the "Resources" tab of your EKS cluster's CloudFormation stack.
curl -so permissions.json https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/cloudwatchlogs/permissions.json
aws iam create-policy --profile=perp \
--policy-name eks-fargate-logging-policy \
--policy-document file://permissions.json
aws iam attach-role-policy --profile=perp \
--policy-arn arn:aws:iam::XXX:policy/eks-fargate-logging-policy \
--role-name eksctl-perp-staging-cluste-FargatePodExecutionRole-XXX
Configure Kubernetes to send container logs on Fargate nodes to CloudWatch via Fluent Bit.
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region ap-northeast-1
        log_group_name /aws/eks/perp-staging/containers
        log_stream_prefix fluent-bit-
        auto_create_group On
kubectl apply -f aws-logs-fargate.yaml
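Once some pods are running in the fargate namespace, their logs should appear in the log group configured above. With AWS CLI v2 you can tail the log group directly (the log group name matches the ConfigMap):
aws logs tail /aws/eks/perp-staging/containers --profile=perp --follow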
ref:
https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch
Set Up Container Logging for EC2 Nodes (CloudWatch Container Insights)
Deploy Fluent Bit as a DaemonSet to send container logs to CloudWatch Logs.
ClusterName=perp-staging
RegionName=ap-northeast-1
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
curl -s https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' > aws-logs-ec2.yaml
kubectl apply -f aws-logs-ec2.yaml
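The quickstart manifest installs the CloudWatch agent and Fluent Bit into the amazon-cloudwatch namespace (at least in the version linked above), so you can verify the DaemonSets are running with:
kubectl get daemonsets -n amazon-cloudwatch
kubectl get pods -n amazon-cloudwatch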