Deploy Ethereum RPC Provider Load Balancer with HAProxy in Kubernetes (AWS EKS)

To achieve high availability and better performance, we can put an HAProxy load balancer in front of multiple Ethereum RPC providers, and automatically adjust traffic weights based on the latency and block timestamp of each RPC endpoint.

ref:
https://www.haproxy.org/

Configurations

In haproxy.cfg, we have a backend named rpc-backend, with two RPC endpoints, quicknode and alchemy, as upstream servers. Each provider also gets its own local frontend/backend pair (ports 8001 and 8002) so we can rewrite the Host header and request path for its HTTPS endpoint.

global
    log stdout format raw local0 info
    stats socket ipv4@*:9999 level admin expose-fd listeners
    stats timeout 5s

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 10s
    timeout client 60s
    timeout server 60s
    timeout http-request 60s

frontend stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 10s

frontend http
    bind *:8000
    option forwardfor
    default_backend rpc-backend

backend rpc-backend
    balance leastconn
    server quicknode 127.0.0.1:8001 weight 100
    server alchemy 127.0.0.1:8002 weight 100

frontend quicknode-frontend
    bind *:8001
    option dontlog-normal
    default_backend quicknode-backend

backend quicknode-backend
    balance roundrobin
    http-request set-header Host xxx.quiknode.pro
    http-request set-path /xxx
    server quicknode xxx.quiknode.pro:443 sni str(xxx.quiknode.pro) check-ssl ssl verify none

frontend alchemy-frontend
    bind *:8002
    option dontlog-normal
    default_backend alchemy-backend

backend alchemy-backend
    balance roundrobin
    http-request set-header Host xxx.alchemy.com
    http-request set-path /xxx
    server alchemy xxx.alchemy.com:443 sni str(xxx.alchemy.com) check-ssl ssl verify none

ref:
https://docs.haproxy.org/2.7/configuration.html
https://www.haproxy.com/documentation/hapee/latest/configuration/

Test it locally:

docker run --rm -v $PWD:/usr/local/etc/haproxy \
-p 8000:8000 \
-p 8404:8404 \
-p 9999:9999 \
-i -t --name haproxy haproxy:2.7.0

docker exec -i -t -u 0 haproxy bash

echo "show stat" | socat stdio TCP:127.0.0.1:9999
echo "set weight rpc-backend/quicknode 0" | socat stdio TCP:127.0.0.1:9999

# if you're using a unix socket file instead
apt update
apt install socat -y
echo "set weight rpc-backend/alchemy 0" | socat stdio /var/lib/haproxy/haproxy.sock

ref:
https://www.redhat.com/sysadmin/getting-started-socat

Healthcheck

Then the important part: we run a simple but flexible healthcheck script, called the node weighter, as a sidecar container, so the script can reach the HAProxy admin socket of the HAProxy container through 127.0.0.1:9999.

The node weighter can be written in any language. Here is a TypeScript example:

In HAProxyConnector.ts, which sets weights through the HAProxy admin socket:

import net from "net"
import { sum } from "lodash" // any array-sum helper works here

export interface ServerWeight {
    backendName: string
    serverName: string
    weight: number
}

export class HAProxyConnector {
    constructor(readonly adminHost = "127.0.0.1", readonly adminPort = 9999) {}

    setWeights(serverWeights: ServerWeight[]) {
        const scaledServerWeights = this.scaleWeights(serverWeights)

        const commands = scaledServerWeights.map(server => {
            return `set weight ${server.backendName}/${server.serverName} ${server.weight}\n`
        })

        const client = net.createConnection({ host: this.adminHost, port: this.adminPort }, () => {
            console.log("HAProxyAdminSocketConnected")
        })
        client.on("error", err => {
            console.log("HAProxyAdminSocketError")
        })
        client.on("data", data => {
            console.log("HAProxyAdminSocketData")
            console.log(data.toString().trim())
        })

        client.write(commands.join(""))
    }

    // not private: RPCProxyWeighter calls it directly when shouldScale is true
    scaleWeights(serverWeights: ServerWeight[]) {
        const totalWeight = sum(serverWeights.map(server => server.weight))

        return serverWeights.map(server => {
            server.weight = Math.floor((server.weight / totalWeight) * 256)
            return server
        })
    }
}

In RPCProxyWeighter.ts, which calculates weights based on custom healthcheck logic:

import { ethers } from "ethers"
import { HAProxyConnector } from "./connectors/HAProxyConnector"
import config from "./config.json"
// NOTE: Log is assumed to be the project's logger; any logging facility works
import { Log } from "./Log"

// minimal sleep helper used by the adjustment loop
const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms))

export interface Server {
    backendName: string
    serverName: string
    serverUrl: string
}

export interface ServerWithWeight {
    backendName: string
    serverName: string
    weight: number
    [metadata: string]: any
}

// shape of the object returned by getHealthInfo()
export interface HealthInfo {
    blockNumber: number
    blockTimestamp: number
    blockTimestampDelayMsec: number
    isBlockTooOld: boolean
    latency: number
    normalizedLatency: number
    isLatencyTooHigh: boolean
}

export class RPCProxyWeighter {
    protected readonly log = Log.getLogger(RPCProxyWeighter.name)
    protected readonly connector: HAProxyConnector

    protected readonly ADJUST_INTERVAL_SEC = 60 // 60 seconds
    protected readonly MAX_BLOCK_TIMESTAMP_DELAY_MSEC = 150 * 1000 // 150 seconds
    protected readonly MAX_LATENCY_MSEC = 3 * 1000 // 3 seconds
    protected shouldScale = false
    protected totalWeight = 0

    constructor() {
        this.connector = new HAProxyConnector(config.admin.host, config.admin.port)
    }

    async start() {
        while (true) {
            let serverWithWeights = await this.calculateWeights(config.servers)
            if (this.shouldScale) {
                serverWithWeights = this.connector.scaleWeights(serverWithWeights)
            }
            this.connector.setWeights(serverWithWeights)

            await sleep(1000 * this.ADJUST_INTERVAL_SEC)
        }
    }

    async calculateWeights(servers: Server[]) {
        this.totalWeight = 0

        const serverWithWeights = await Promise.all(
            servers.map(async server => {
                try {
                    return await this.calculateWeight(server)
                } catch (err: any) {
                    return {
                        backendName: server.backendName,
                        serverName: server.serverName,
                        weight: 0,
                    }
                }
            }),
        )

        // if all endpoints are unhealthy, overwrite weights to 100
        if (this.totalWeight === 0) {
            for (const server of serverWithWeights) {
                server.weight = 100
            }
        }

        return serverWithWeights
    }

    async calculateWeight(server: Server) {
        const healthInfo = await this.getHealthInfo(server.serverUrl)

        const serverWithWeight: ServerWithWeight = {
            ...{
                backendName: server.backendName,
                serverName: server.serverName,
                weight: 0,
            },
            ...healthInfo,
        }

        if (healthInfo.isBlockTooOld || healthInfo.isLatencyTooHigh) {
            return serverWithWeight
        }

        // normalizedLatency: the lower the better
        // blockTimestampDelayMsec: the lower the better
        // both units are milliseconds at the same scale
        // serverWithWeight.weight = 1 / healthInfo.normalizedLatency + 1 / healthInfo.blockTimestampDelayMsec

        // NOTE: if we're using `balance source` in HAProxy, the weight can only be 100% or 0%,
        // therefore, as long as the RPC endpoint is healthy, we always set the same weight
        serverWithWeight.weight = 100

        this.totalWeight += serverWithWeight.weight

        return serverWithWeight
    }

    protected async getHealthInfo(serverUrl: string): Promise<HealthInfo> {
        const provider = new ethers.providers.StaticJsonRpcProvider(serverUrl)

        // TODO: add timeout
        const start = Date.now()
        const blockNumber = await provider.getBlockNumber()
        const end = Date.now()

        const block = await provider.getBlock(blockNumber)

        const blockTimestamp = block.timestamp
        const blockTimestampDelayMsec = Math.floor(Date.now() / 1000 - blockTimestamp) * 1000
        const isBlockTooOld = blockTimestampDelayMsec >= this.MAX_BLOCK_TIMESTAMP_DELAY_MSEC

        const latency = end - start
        const normalizedLatency = this.normalizeLatency(latency)
        const isLatencyTooHigh = latency >= this.MAX_LATENCY_MSEC

        return {
            blockNumber,
            blockTimestamp,
            blockTimestampDelayMsec,
            isBlockTooOld,
            latency,
            normalizedLatency,
            isLatencyTooHigh,
        }
    }

    protected normalizeLatency(latency: number) {
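        // round latency down to its most significant digit so that minor
        // jitter doesn't change the weight, e.g. 85 -> 80, 1234 -> 1000;
        // anything at or under 40ms counts as 1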
        if (latency <= 40) {
            return 1
        }

        const digits = Math.floor(latency).toString().length
        const base = Math.pow(10, digits - 1)
        return Math.floor(latency / base) * base
    }
}

In config.json:

Technically, we don't need this config file; we could read the actual URLs from the HAProxy admin socket directly, but a JSON file containing the URLs is much simpler.

{
    "admin": {
        "host": "127.0.0.1",
        "port": 9999
    },
    "servers": [
        {
            "backendName": "rpc-backend",
            "serverName": "quicknode",
            "serverUrl": "https://xxx.quiknode.pro/xxx"
        },
        {
            "backendName": "rpc-backend",
            "serverName": "alchemy",
            "serverUrl": "https://xxx.alchemy.com/xxx"
        }
    ]
}

ref:
https://www.haproxy.com/documentation/hapee/latest/api/runtime-api/set-weight/
https://sleeplessbeastie.eu/2020/01/29/how-to-use-haproxy-stats-socket/

Deployments

apiVersion: v1
kind: ConfigMap
metadata:
  name: rpc-proxy-config-file
data:
  haproxy.cfg: |
    ...
  config.json: |
    ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rpc-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rpc-proxy
  template:
    metadata:
      labels:
        app: rpc-proxy
    spec:
      volumes:
        - name: rpc-proxy-config-file
          configMap:
            name: rpc-proxy-config-file
      containers:
        - name: haproxy
          image: haproxy:2.7.0
          ports:
            - containerPort: 8000
              protocol: TCP
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 256Mi
          volumeMounts:
            - name: rpc-proxy-config-file
              subPath: haproxy.cfg
              mountPath: /usr/local/etc/haproxy/haproxy.cfg
              readOnly: true
        - name: node-weighter
          image: your-node-weighter
          command: ["node", "./index.js"]
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 256Mi
          volumeMounts:
            - name: rpc-proxy-config-file
              subPath: config.json
              mountPath: /path/to/build/config.json
              readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: rpc-proxy
spec:
  clusterIP: None
  selector:
    app: rpc-proxy
  ports:
    - name: http
      port: 8000
      targetPort: 8000

The RPC load balancer can then be accessed through http://rpc-proxy.default.svc.cluster.local:8000 inside the Kubernetes cluster.
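
To smoke test it from inside the cluster, send a standard JSON-RPC request (for example, eth_blockNumber works against any Ethereum RPC endpoint):

curl -s http://rpc-proxy.default.svc.cluster.local:8000 \
-X POST \
-H "Content-Type: application/json" \
--data '{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}'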

ref:
https://www.containiq.com/post/kubernetes-sidecar-container
https://hub.docker.com/_/haproxy

Amazon EKS: Setup Cluster Autoscaler

The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail or are rescheduled onto other nodes. Here we're going to deploy the AWS implementation, which carries out Cluster Autoscaler's decisions by communicating with AWS products and services such as Amazon EC2.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html

Configure IAM Permissions

If your existing node groups were created with eksctl create nodegroup --asg-access, then this policy already exists and you can skip this step.

In cluster-autoscaler-policy-staging.json:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/k8s.io/cluster-autoscaler/perp-staging": "owned"
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*"
        }
    ]
}

aws --profile perp iam create-policy \
  --policy-name AmazonEKSClusterAutoscalerPolicyStaging \
  --policy-document file://cluster-autoscaler-policy-staging.json

eksctl --profile=perp create iamserviceaccount \
  --cluster=perp-staging \
  --namespace=kube-system \
  --name=cluster-autoscaler \
  --attach-policy-arn=arn:aws:iam::xxx:policy/AmazonEKSClusterAutoscalerPolicyStaging \
  --override-existing-serviceaccounts \
  --approve

ref:
https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html

Deploy Cluster Autoscaler

Download the deployment yaml of cluster-autoscaler:

curl -o cluster-autoscaler-autodiscover.yaml \
https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Before you apply the file, it's recommended to check whether the version of cluster-autoscaler matches the Kubernetes major and minor version of your cluster. Find the version number on GitHub releases.
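
For example, compare the cluster version reported by kubectl with the image tag you're about to deploy:

kubectl version --short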

kubectl apply -f cluster-autoscaler-autodiscover.yaml
# or
kubectl set image deployment cluster-autoscaler \
  -n kube-system \
  cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.3

Do some tweaks:

  • --balance-similar-node-groups ensures that there is enough available compute across all availability zones.
  • --skip-nodes-with-system-pods=false ensures that there are no problems with scaling to zero.

Add the cluster-autoscaler.kubernetes.io/safe-to-evict annotation so that Cluster Autoscaler never evicts its own pod:

kubectl patch deployment cluster-autoscaler \
  -n kube-system \
  -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"}}}}}'

kubectl -n kube-system edit deployment.apps/cluster-autoscaler
# change the command to the following:
# spec:
#   containers:
#   - command:
#     - ./cluster-autoscaler
#     - --v=4
#     - --stderrthreshold=info
#     - --cloud-provider=aws
#     - --skip-nodes-with-local-storage=false
#     - --expander=least-waste
#     - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/perp-staging
#     - --balance-similar-node-groups
#     - --skip-nodes-with-system-pods=false
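
After the rollout, tail the logs to confirm that Cluster Autoscaler discovers your Auto Scaling groups:

kubectl -n kube-system logs -f deployment/cluster-autoscaler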

ref:
https://aws.github.io/aws-eks-best-practices/cluster-autoscaling/
https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler#additional-configuration

Amazon EKS: Setup aws-load-balancer-controller for Kubernetes Ingress

AWS Load Balancer Controller replaces the functionality of the AWS ALB Ingress Controller.

Install aws-load-balancer-controller

Create an IAM OIDC provider for your cluster

eksctl utils associate-iam-oidc-provider --profile=perp \
  --region ap-northeast-1 \
  --cluster perp-staging \
  --approve

ref:
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

Create a Kubernetes ServiceAccount for the Controller

curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.0/docs/install/iam_policy.json

aws iam create-policy --profile=perp \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json

eksctl create iamserviceaccount --profile=perp \
  --cluster=perp-staging \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::XXX:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve
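
Confirm that the service account was created with the IAM role annotation:

kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml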

ref:
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/

Deploy aws-load-balancer-controller

helm repo add eks https://aws.github.io/eks-charts

helm ls -A

helm upgrade -i \
  aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=perp-staging \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=ap-northeast-1 \
  --set vpcId=vpc-XXX
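
Verify that the controller is installed:

kubectl get deployment -n kube-system aws-load-balancer-controller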

ref:
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html#helm-v3-or-later
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/configurations/
https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller

Create an Ingress using ALB

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graph-node-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
    alb.ingress.kubernetes.io/certificate-arn: YOUR_ACM_ARN
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600,deletion_protection.enabled=true
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.graph-node-jsonrpc: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node", "servicePort": 8000, "weight": 100}
      ]}}
    alb.ingress.kubernetes.io/actions.graph-node-websocket: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node", "servicePort": 8001, "weight": 100}
      ]}}
spec:
  rules:
  - host: "subgraph-api.example.com"
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: graph-node-jsonrpc
              port:
                name: use-annotation
  - host: "subgraph-ws.example.com"
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: graph-node-websocket
              port:
                name: use-annotation

ref:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/annotations/

kubectl apply -f graph-node-ingress.yaml -R

Then create Route 53 records for the above domains, and point them to the newly created ALB.
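
Once the ALB is provisioned, its hostname appears in the ADDRESS column:

kubectl get ingress graph-node-ingress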

Amazon EKS: Setup kubernetes-external-secrets with AWS Secrets Manager

kubernetes-external-secrets allows you to use external secret management systems, like AWS Secrets Manager, to securely add Secrets in Kubernetes, so Pods can access Secrets normally.

ref:
https://github.com/external-secrets/kubernetes-external-secrets

AWS Secrets Manager

For instance, we create a secret named YOUR_SECRET on AWS Secrets Manager in the same region as our EKS cluster, using DefaultEncryptionKey as the encryption key. The content of the secret looks like:

{
  "KEY_1": "VALUE_1",
  "KEY_2": "VALUE_2",
}
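
For reference, such a secret can also be created from the CLI (a sketch, assuming the JSON above is saved as secret.json):

aws secretsmanager create-secret --profile=perp \
--region ap-northeast-1 \
--name YOUR_SECRET \
--secret-string file://secret.json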

We can retrieve the secret value:

aws secretsmanager get-secret-value --profile=perp \
--region ap-northeast-1 \
--secret-id YOUR_SECRET

kubernetes-external-secrets

For kubernetes-external-secrets to work properly, it must be granted access to AWS Secrets Manager. To achieve that, we need to create an IAM role for kubernetes-external-secrets' service account.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

Configure Secrets Backends

Create an IAM OIDC provider for the cluster:

eksctl utils associate-iam-oidc-provider --profile=perp \
--region ap-northeast-1 \
--cluster perp-staging \
--approve

aws iam list-open-id-connect-providers --profile=perp

ref:
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

Create an IAM policy that allows the role to access all secrets we created on AWS Secrets Manager:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --profile=perp --query "Account" --output text)

cat <<EOF > policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": [
        "arn:aws:secretsmanager:ap-northeast-1:${AWS_ACCOUNT_ID}:secret:*"
      ]
    }
  ]
}
EOF

aws iam create-policy --profile=perp \
--policy-name perp-staging-secrets-policy --policy-document file://policy.json

Attach the above IAM policy to an IAM role, and define the trust policy (sts:AssumeRoleWithWebIdentity) for the service account external-secrets-kubernetes-external-secrets which will be created later:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --profile=perp --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --profile=perp --name perp-staging --region ap-northeast-1 --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///") 

cat <<EOF > trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com",
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:external-secrets-kubernetes-external-secrets"
        }
      }
    }
  ]
}
EOF

aws iam create-role --profile=perp \
--role-name perp-staging-secrets-role \
--assume-role-policy-document file://trust.json

aws iam attach-role-policy --profile=perp \
--role-name perp-staging-secrets-role \
--policy-arn YOUR_POLICY_ARN

ref:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
https://gist.github.com/lukaszbudnik/f1f42bd5a57430e3c25034200ba44c2e

Deploy kubernetes-external-secrets Controller

helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/

helm install external-secrets \
external-secrets/kubernetes-external-secrets \
--skip-crds \
--set env.AWS_REGION=ap-northeast-1 \
--set securityContext.fsGroup=65534 \
--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"='YOUR_ROLE_ARN'

helm list

It would automatically create a service account named external-secrets-kubernetes-external-secrets in Kubernetes.
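
You can confirm that the IAM role is attached by inspecting the service account's annotations:

kubectl get serviceaccount external-secrets-kubernetes-external-secrets -o yaml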

ref:
https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets

Deploy ExternalSecret

The ExternalSecret example-secret below will generate a Secret object with the same name, which the Deployment then consumes via secretRef:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: example-secret
spec:
  backendType: secretsManager
  region: ap-northeast-1
  dataFrom:
    - YOUR_SECRET
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: busybox:latest
        envFrom:
        - secretRef:
            name: example-secret

kubectl get secret example-secret -o jsonpath="{.data.KEY_1}" | base64 --decode

ref:
https://gist.github.com/lukaszbudnik/f1f42bd5a57430e3c25034200ba44c2e

Amazon EKS: Create a Kubernetes cluster via ClusterConfig

Amazon Elastic Kubernetes Service (Amazon EKS) is AWS's managed Kubernetes service. IMHO, Google Cloud's GKE is still the best choice of managed Kubernetes service if you're not tied to AWS.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html

Also see:
https://vinta.ws/code/the-complete-guide-to-google-kubernetes-engine-gke.html

Installation

We need to install some command-line tools: aws, eksctl and kubectl.

brew tap weaveworks/tap
brew install awscli weaveworks/tap/eksctl kubernetes-cli

k9s and fubectl are also recommended; both provide fancy terminal UIs for interacting with your Kubernetes clusters.

brew install k9s

curl -LO https://rawgit.com/kubermatic/fubectl/master/fubectl.source
source <path-to>/fubectl.source

ref:
https://github.com/derailed/k9s
https://github.com/kubermatic/fubectl

Create Cluster

We use a ClusterConfig to define our cluster.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: perp-staging
  region: ap-northeast-1
# All workloads in the "fargate" Kubernetes namespace will be scheduled onto Fargate
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: fargate
# https://eksctl.io/usage/schema/
managedNodeGroups:
  - name: managed-ng-m5-4xlarge
    instanceType: m5.4xlarge
    instancePrefix: m5-4xlarge
    minSize: 1
    maxSize: 5
    desiredCapacity: 3
    volumeSize: 100
    iam:
      withAddonPolicies:
        cloudWatch: true
        albIngress: true
        ebs: true
        efs: true
# Enable envelope encryption for Kubernetes Secrets
secretsEncryption:
  keyARN: "arn:aws:kms:YOUR_KMS_ARN"
# Enable CloudWatch logging for Kubernetes components
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

Create the cluster.

eksctl create cluster --profile=perp -f clusterconfig.yaml

ref:
https://eksctl.io/usage/creating-and-managing-clusters/
https://github.com/weaveworks/eksctl/tree/master/examples

We can also use the same config file to update our cluster, though currently not all configurations support updates.

eksctl upgrade cluster --profile=perp -f clusterconfig.yaml

Access Cluster

aws eks --profile=perp update-kubeconfig \
--region ap-northeast-1 \
--name perp-staging \
--alias vinta@perp-staging
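
Verify that the new kubeconfig context works:

kubectl config current-context
kubectl get nodes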

ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Delete Cluster

# You might need to manually delete/detach following resources first:
# Detach non-default policies for FargatePodExecutionRole and NodeInstanceRole
# Fargate Profile
# EC2 Network Interfaces 
# EC2 ALB
eksctl delete cluster --profile=perp \
--region ap-northeast-1 \
--name perp-staging

Then you can delete the CloudFormation stack on AWS Management Console.

Cluster Authentication

kubectl get configmap aws-auth -n kube-system -o yaml

We must copy mapRoles from the above ConfigMap, and add the mapUsers section:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # NOTE: mapRoles are copied from "kubectl get configmap aws-auth -n kube-system -o yaml"
  mapRoles: |
    - rolearn: YOUR_ARN_FargatePodExecutionRole
      username: system:node:{{SessionName}}
      groups:
      - system:bootstrappers
      - system:nodes
      - system:node-proxier
    - rolearn: YOUR_ARN_NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  # Only IAM users listed here can access this cluster
  mapUsers: |
    - userarn: YOUR_USER_ARN
      username: YOUR_AWS_USERNAME
      groups:
        - system:masters

kubectl apply -f aws-auth.yaml
kubectl describe configmap -n kube-system aws-auth

ref:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/

Setup Container Logging for Fargate Nodes

Create an IAM policy and attach it to the pod execution role specified for your Fargate profile. The --role-name should be the name of FargatePodExecutionRole; you can find it under the "Resources" tab in the CloudFormation stack of your EKS cluster.

curl -so permissions.json https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/cloudwatchlogs/permissions.json

aws iam create-policy --profile=perp \
--policy-name eks-fargate-logging-policy \
--policy-document file://permissions.json

aws iam attach-role-policy --profile=perp \
--policy-arn arn:aws:iam::XXX:policy/eks-fargate-logging-policy \
--role-name eksctl-perp-staging-cluste-FargatePodExecutionRole-XXX

Configure Kubernetes to send container logs on Fargate nodes to CloudWatch via Fluent Bit.

kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region ap-northeast-1
        log_group_name /aws/eks/perp-staging/containers
        log_stream_prefix fluent-bit-
        auto_create_group On

kubectl apply -f aws-logs-fargate.yaml
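
Once a Fargate pod starts emitting logs, the log group should show up (assuming the log_group_name configured above):

aws logs describe-log-groups --profile=perp \
--region ap-northeast-1 \
--log-group-name-prefix /aws/eks/perp-staging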

ref:
https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html
https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch

Setup Container Logging for EC2 Nodes (CloudWatch Container Insights)

Deploy Fluent Bit as DaemonSet to send container logs to CloudWatch Logs.

ClusterName=perp-staging
RegionName=ap-northeast-1
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
curl -s https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' > aws-logs-ec2.yaml
kubectl apply -f aws-logs-ec2.yaml
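
The quickstart deploys the CloudWatch agent and Fluent Bit into the amazon-cloudwatch namespace; verify they are running:

kubectl get pods -n amazon-cloudwatch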

ref:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-EKS-quickstart.html