Amazon EKS: Setup kubernetes-external-secrets with AWS Secrets Manager

kubernetes-external-secrets allows you to use external secret management systems, such as AWS Secrets Manager, to securely add secrets to Kubernetes, so Pods can consume them as regular Kubernetes Secrets.

ref:
https://github.com/external-secrets/kubernetes-external-secrets

AWS Secrets Manager

For instance, we create a secret named YOUR_SECRET in AWS Secrets Manager, in the same region as our EKS cluster, using DefaultEncryptionKey as the encryption key. The content of the secret looks like:

{
  "KEY_1": "VALUE_1",
  "KEY_2": "VALUE_2",
}
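
We can create this secret from the CLI as well; a minimal sketch, assuming the same perp profile and ap-northeast-1 region used throughout these notes:

aws secretsmanager create-secret --profile=perp \
--region ap-northeast-1 \
--name YOUR_SECRET \
--secret-string '{"KEY_1": "VALUE_1", "KEY_2": "VALUE_2"}'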

We can retrieve the secret value:

aws secretsmanager get-secret-value --profile=perp \
--region ap-northeast-1 \
--secret-id YOUR_SECRET

kubernetes-external-secrets

For kubernetes-external-secrets to work properly, it must be granted access to AWS Secrets Manager. To achieve that, we create an IAM role for the kubernetes-external-secrets service account (IAM Roles for Service Accounts, IRSA).

ref:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

Configure Secrets Backends

Create an IAM OIDC provider for the cluster:

eksctl utils associate-iam-oidc-provider --profile=perp \
--region ap-northeast-1 \
--cluster perp-staging \
--approve

aws iam list-open-id-connect-providers --profile=perp

ref:
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

Create an IAM policy that allows access to all secrets we created in AWS Secrets Manager:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --profile=perp --query "Account" --output text)

cat <<EOF > policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": [
        "arn:aws:secretsmanager:ap-northeast-1:${AWS_ACCOUNT_ID}:secret:*"
      ]
    }
  ]
}
EOF

aws iam create-policy --profile=perp \
--policy-name perp-staging-secrets-policy --policy-document file://policy.json

Create an IAM role whose trust policy allows the service account external-secrets-kubernetes-external-secrets (created later by the Helm chart) to assume it via sts:AssumeRoleWithWebIdentity, then attach the above IAM policy to the role:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --profile=perp --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --profile=perp --name perp-staging --region ap-northeast-1 --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///") 

cat <<EOF > trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com",
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:external-secrets-kubernetes-external-secrets"
        }
      }
    }
  ]
}
EOF

aws iam create-role --profile=perp \
--role-name perp-staging-secrets-role \
--assume-role-policy-document file://trust.json

aws iam attach-role-policy --profile=perp \
--role-name perp-staging-secrets-role \
--policy-arn YOUR_POLICY_ARN
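
To double-check that the policy is attached to the role:

aws iam list-attached-role-policies --profile=perp \
--role-name perp-staging-secrets-role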

ref:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
https://gist.github.com/lukaszbudnik/f1f42bd5a57430e3c25034200ba44c2e

Deploy kubernetes-external-secrets Controller

helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/

helm install external-secrets \
external-secrets/kubernetes-external-secrets \
--skip-crds \
--set env.AWS_REGION=ap-northeast-1 \
--set securityContext.fsGroup=65534 \
--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"='YOUR_ROLE_ARN'

helm list

The chart automatically creates a service account named external-secrets-kubernetes-external-secrets in the default namespace.
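
You can confirm the service account exists and carries the IRSA role annotation:

kubectl describe serviceaccount external-secrets-kubernetes-external-secrets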

ref:
https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets

Deploy ExternalSecret

An ExternalSecret generates a Kubernetes Secret object with the same name. For example, the following ExternalSecret example-secret pulls YOUR_SECRET from AWS Secrets Manager, and the Deployment consumes the generated Secret via envFrom:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: example-secret
spec:
  backendType: secretsManager
  region: ap-northeast-1
  dataFrom:
    - YOUR_SECRET
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: busybox:latest
        envFrom:
        - secretRef:
            name: example-secret

kubectl get secret example-secret -o jsonpath="{.data.KEY_1}" | base64 --decode
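
If the Secret doesn't show up, inspect the ExternalSecret's sync status:

kubectl get externalsecret example-secret
kubectl describe externalsecret example-secret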

ref:
https://gist.github.com/lukaszbudnik/f1f42bd5a57430e3c25034200ba44c2e

Amazon EKS: Create a Kubernetes cluster via ClusterConfig

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service on AWS. IMAO, Google Cloud's GKE is still the best managed Kubernetes service if you're not locked into AWS.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html

Also see:
https://vinta.ws/code/the-complete-guide-to-google-kubernetes-engine-gke.html

Installation

We need to install some command-line tools: aws, eksctl and kubectl.

brew tap weaveworks/tap
brew install awscli weaveworks/tap/eksctl kubernetes-cli

k9s and fubectl are also recommended; they provide fancy terminal UIs for interacting with your Kubernetes clusters.

brew install k9s

curl -LO https://rawgit.com/kubermatic/fubectl/master/fubectl.source
source <path-to>/fubectl.source

ref:
https://github.com/derailed/k9s
https://github.com/kubermatic/fubectl

Create Cluster

We use a ClusterConfig to define our cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: perp-staging
  region: ap-northeast-1
# All workloads in the "fargate" Kubernetes namespace will be scheduled onto Fargate
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: fargate
# https://eksctl.io/usage/schema/
managedNodeGroups:
  - name: managed-ng-m5-4xlarge
    instanceType: m5.4xlarge
    instancePrefix: m5-4xlarge
    minSize: 1
    maxSize: 5
    desiredCapacity: 3
    volumeSize: 100
    iam:
      withAddonPolicies:
        cloudWatch: true
        albIngress: true
        ebs: true
        efs: true
# Enable envelope encryption for Kubernetes Secrets
secretsEncryption:
  keyARN: "arn:aws:kms:YOUR_KMS_ARN"
# Enable CloudWatch logging for Kubernetes components
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

Create the cluster.

eksctl create cluster --profile=perp -f clusterconfig.yaml
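
Once the CloudFormation stacks are done, confirm the cluster and nodes are up (eksctl writes a kubeconfig entry for the new cluster by default):

eksctl get cluster --profile=perp --region ap-northeast-1

kubectl get nodes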

ref:
https://eksctl.io/usage/creating-and-managing-clusters/
https://github.com/weaveworks/eksctl/tree/master/examples

We can also use the same config file to update our cluster, but note that not all configuration changes are currently supported.

eksctl upgrade cluster --profile=perp -f clusterconfig.yaml

Access Cluster

aws eks --profile=perp update-kubeconfig \
--region ap-northeast-1 \
--name perp-staging \
--alias YOUR_ALIAS
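
Then verify the new context works:

kubectl config current-context
kubectl get nodes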

ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Delete Cluster

# You might need to manually delete/detach following resources first:
# Detach non-default policies for FargatePodExecutionRole and NodeInstanceRole
# Fargate Profile
# EC2 Network Interfaces 
# EC2 ALB
eksctl delete cluster --profile=perp \
--region ap-northeast-1 \
--name perp-staging

Then you can delete the remaining CloudFormation stack in the AWS Management Console.
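
Alternatively, a sketch of deleting the leftover stack from the CLI, assuming eksctl's default stack naming (eksctl-<cluster-name>-cluster):

aws cloudformation delete-stack --profile=perp \
--region ap-northeast-1 \
--stack-name eksctl-perp-staging-cluster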

Cluster Authentication

kubectl get configmap aws-auth -n kube-system -o yaml

We must copy mapRoles from the above ConfigMap, and add the mapUsers section:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # NOTE: mapRoles are copied from "kubectl get configmap aws-auth -n kube-system -o yaml"
  mapRoles: |
    - rolearn: YOUR_ARN_FargatePodExecutionRole
      username: system:node:{{SessionName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - system:node-proxier
    - rolearn: YOUR_ARN_NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  # Only IAM users listed here can access this cluster
  mapUsers: |
    - userarn: YOUR_USER_ARN
      username: YOUR_AWS_USERNAME
      groups:
        - system:masters

kubectl apply -f aws-auth.yaml
kubectl describe configmap -n kube-system aws-auth

ref:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/

Setup Container Logging for Fargate Nodes

Create an IAM policy and attach it to the pod execution role specified for your Fargate profile. The --role-name should be the name of the FargatePodExecutionRole, which you can find under the "Resources" tab in the CloudFormation stack of your EKS cluster.
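
Alternatively, you can look it up from the CLI; a sketch assuming eksctl's default stack naming (eksctl-<cluster-name>-cluster):

aws cloudformation describe-stack-resources --profile=perp \
--region ap-northeast-1 \
--stack-name eksctl-perp-staging-cluster \
--query "StackResources[?LogicalResourceId=='FargatePodExecutionRole'].PhysicalResourceId" \
--output text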

curl -so permissions.json https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/cloudwatchlogs/permissions.json

aws iam create-policy --profile=perp \
--policy-name eks-fargate-logging-policy \
--policy-document file://permissions.json

aws iam attach-role-policy --profile=perp \
--policy-arn arn:aws:iam::XXX:policy/eks-fargate-logging-policy \
--role-name eksctl-perp-staging-cluste-FargatePodExecutionRole-XXX

Configure Kubernetes to send container logs on Fargate nodes to CloudWatch via Fluent Bit.

kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region ap-northeast-1
        log_group_name /aws/eks/perp-staging/containers
        log_stream_prefix fluent-bit-
        auto_create_group On

kubectl apply -f aws-logs-fargate.yaml
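
After some Pods have been scheduled onto Fargate, check that log streams are showing up in the log group:

aws logs describe-log-streams --profile=perp \
--region ap-northeast-1 \
--log-group-name /aws/eks/perp-staging/containers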

ref:
https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch

Setup Container Logging for EC2 Nodes (CloudWatch Container Insights)

Deploy Fluent Bit as a DaemonSet to send container logs to CloudWatch Logs.

ClusterName=perp-staging
RegionName=ap-northeast-1
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off' || FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
curl -s https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' > aws-logs-ec2.yaml
kubectl apply -f aws-logs-ec2.yaml
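
The quickstart manifest deploys everything into the amazon-cloudwatch namespace; verify the DaemonSets are running:

kubectl get daemonsets -n amazon-cloudwatch
kubectl get pods -n amazon-cloudwatch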

ref:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-EKS-quickstart.html

Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

To avoid the error "No 'Access-Control-Allow-Origin' header is present on the requested resource":

  • Enable CORS on your S3 bucket
  • Forward the appropriate headers on your CloudFront distribution

Enable CORS on S3 Bucket

In S3 -> [your bucket] -> Permissions -> Cross-origin resource sharing (CORS):

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
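
The same rules can also be applied from the CLI; note that s3api wraps the array in a CORSRules key (YOUR_BUCKET is a placeholder):

aws s3api put-bucket-cors --profile=perp \
--bucket YOUR_BUCKET \
--cors-configuration '{"CORSRules": [{"AllowedHeaders": ["*"], "AllowedMethods": ["GET"], "AllowedOrigins": ["*"], "ExposeHeaders": []}]}'

aws s3api get-bucket-cors --profile=perp --bucket YOUR_BUCKET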

ref:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html

Configure Behaviors on CloudFront Distribution

In CloudFront -> [your distribution] -> Behaviors -> Create Behavior:

  • Path Pattern: *
  • Allowed HTTP Methods: GET, HEAD, OPTIONS
  • Cached HTTP Methods: +OPTIONS
  • Origin Request Policy: Managed-CORS-S3Origin
    • This policy actually whitelists the following headers:
      • Access-Control-Request-Headers
      • Access-Control-Request-Method
      • Origin

ref:
https://aws.amazon.com/premiumsupport/knowledge-center/no-access-control-allow-origin-error/

Validate it's working:

fetch("https://metadata.perp.exchange/config.production.json")
.then((res) => res.json())
.then((out) => { console.log(out) })
.catch((err) => { throw err });
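
The same check from the command line; the Origin value here is just an example:

curl -s -D - -o /dev/null \
-H "Origin: https://app.example.com" \
"https://metadata.perp.exchange/config.production.json" | grep -i "access-control-allow-origin"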