GKE Autopilot Cluster: Pay for Pods, Not Nodes

If you're already on Google Cloud, it's highly recommended to use a GKE Autopilot cluster: you only pay for the resources requested by your pods (system pods and unused node capacity are free in Autopilot clusters). No need to pay for surplus node pools anymore! Plus, the entire cluster applies Google's best practices by default.

ref:
https://cloud.google.com/kubernetes-engine/pricing#compute

Create an Autopilot Cluster

DO NOT enable Private Nodes; otherwise you MUST pay for a Cloud NAT gateway (~$32/month) so the nodes can access the internet (to pull images, etc.).

# create
gcloud container clusters create-auto my-auto-cluster \
--project YOUR_PROJECT_ID \
--region us-west1

# connect
gcloud container clusters get-credentials my-auto-cluster \
--project YOUR_PROJECT_ID \
--region us-west1

You can update some configurations later in the Google Cloud Console.
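
You can also review or tweak the cluster from the CLI; for example (a sketch: describe is standard, while the --release-channel flag on update is assumed to be available in your gcloud version):

# review the cluster's current configuration
gcloud container clusters describe my-auto-cluster \
--project YOUR_PROJECT_ID \
--region us-west1

# example: switch the release channel
gcloud container clusters update my-auto-cluster \
--project YOUR_PROJECT_ID \
--region us-west1 \
--release-channel regular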

ref:
https://docs.cloud.google.com/sdk/gcloud/reference/container/clusters/create-auto

Autopilot mode works in both Autopilot and Standard clusters. You don't necessarily need to create a new Autopilot cluster; you can simply deploy your pods in Autopilot mode as long as your Standard cluster meets the requirements:

gcloud container clusters check-autopilot-compatibility my-cluster \
--project YOUR_PROJECT_ID \
--region us-west1

ref:
https://cloud.google.com/kubernetes-engine/docs/concepts/about-autopilot-mode-standard-clusters
https://cloud.google.com/kubernetes-engine/docs/how-to/autopilot-classes-standard-clusters

Deploy Workloads in Autopilot Mode

The only thing you need to do is add one magical config: nodeSelector: cloud.google.com/compute-class: "autopilot". That's it. You don't need to create or manage any node pools beforehand, just write some YAMLs and kubectl apply. All workloads with cloud.google.com/compute-class: "autopilot" will run in Autopilot mode.

More importantly, you are only billed for the CPU/memory resources your pods request, not for nodes that may have unused capacity or system pods (those running under the kube-system namespace). Autopilot mode is both cost-efficient and developer-friendly.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "autopilot"
      containers:
        - name: nginx
          image: nginx:1.29.3
          ports:
            - name: http
              containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi

If your workloads are fault-tolerant (stateless), you can use Spot instances to save a significant amount of money. Just change the nodeSelector to autopilot-spot:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "autopilot-spot"
      terminationGracePeriodSeconds: 25 # spot instances have a 25s warning before preemption

ref:
https://docs.cloud.google.com/kubernetes-engine/docs/how-to/autopilot-classes-standard-clusters

You will see something like this in your Autopilot cluster:

kubectl get nodes
NAME                             STATUS   ROLES    AGE     VERSION
gk3-my-auto-cluster-nap-xxx      Ready    <none>   2d18h   v1.33.5-gke.1201000
gk3-my-auto-cluster-nap-xxx      Ready    <none>   1d13h   v1.33.5-gke.1201000
gk3-my-auto-cluster-pool-1-xxx   Ready    <none>   86m     v1.33.5-gke.1201000

The nap nodes are auto-provisioned by Autopilot for your workloads, while pool-1 is a default node pool created during cluster creation. System pods may run on either, but in an Autopilot cluster you are never billed for the nodes themselves (neither nap nor pool-1), nor for the system pods. You only pay for the resources requested by your application pods.
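
To double-check where your application pods and the system pods are scheduled, list them with their node assignments:

kubectl get pods -o wide
kubectl get pods -n kube-system -o wide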

FYI, the minimum resources for Autopilot workloads are:

  • CPU: 50m
  • Memory: 52Mi

Additionally, Autopilot applies the following default resource requests if not specified:

  • Containers in DaemonSets
    • CPU: 50m
    • Memory: 100Mi
    • Ephemeral storage: 100Mi
  • All other containers
    • Ephemeral storage: 1Gi

ref:
https://docs.cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests
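
For reference, a container pinned to those minimums would request something like this (a minimal sketch; the pod and container names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: tiny-pod # placeholder name
spec:
  nodeSelector:
    cloud.google.com/compute-class: "autopilot" # only required when running Autopilot-mode workloads in a Standard cluster
  containers:
    - name: app # placeholder name
      image: nginx:1.29.3
      resources:
        requests:
          cpu: 50m     # Autopilot minimum CPU
          memory: 52Mi # Autopilot minimum memory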

Exclude Prometheus Metrics

You may see Prometheus Samples Ingested in your billing. If you don't need (or don't care about) Prometheus metrics for observability, you could exclude them:

  • Go to Google Cloud Console -> Monitoring -> Metrics Management -> Excluded Metrics -> Metrics Exclusion
  • If you want to exclude all:
    • prometheus.googleapis.com/.*
  • If you only want to exclude some:
    • prometheus.googleapis.com/container_.*
    • prometheus.googleapis.com/kubelet_.*

It's worth noting that excluding Prometheus metrics won't affect your HorizontalPodAutoscaler (HPA), which uses the Metrics Server instead.

ref:
https://console.cloud.google.com/monitoring/metrics-management/excluded-metrics
https://docs.cloud.google.com/stackdriver/docs/managed-prometheus/cost-controls
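
Since CPU/memory-based HPA scaling goes through the Metrics Server's resource metrics pipeline rather than Managed Service for Prometheus, an HPA like this sketch keeps working even with the exclusions above (names are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70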

Stop Paying for Kubernetes Load Balancers: Use Cloudflare Tunnel Instead

To expose services in a Kubernetes cluster, you typically need an Ingress backed by a cloud provider's load balancer, and often a NAT gateway. For small projects, these costs add up fast (though some may argue that small projects shouldn't use Kubernetes at all).

What if you could ditch the Ingress, Load Balancer, and Public IP entirely? Enter Cloudflare Tunnel (by the way, it costs $0).

How Cloudflare Tunnel Works

Cloudflare Tunnel relies on a lightweight daemon called cloudflared that runs within your cluster to establish secure, persistent outbound connections to Cloudflare's global network (edge servers). Instead of opening inbound firewall ports or configuring public IP addresses, cloudflared initiates traffic from inside your cluster to the Cloudflare edge servers. This outbound-only model creates a bidirectional tunnel that allows Cloudflare to route requests to your private services while blocking all direct inbound access to your origin servers.

So basically Cloudflare Tunnel acts as a reverse proxy that routes traffic from Cloudflare edge servers to your private services: Internet -> Cloudflare Edge Server -> Tunnel -> cloudflared -> Service -> Pod.

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/

Create a Tunnel

A tunnel is a logical link between your origin and Cloudflare's global network (Cloudflare edge servers), established through secure, persistent outbound connections.

  • Go to https://one.dash.cloudflare.com/ -> Networks -> Connectors -> Create a tunnel -> Select cloudflared
  • Tunnel name: your-tunnel-name
  • Choose an operating system: Docker

Instead of running any installation command, simply copy the token (starts with eyJ...). We will use it later.
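
If you'd rather not keep the token in a YAML file, you can create the Kubernetes Secret directly from the copied token (the secret name and key must match what the cloudflared Deployment below references):

kubectl create secret generic cloudflared-tunnel-token \
--from-literal=token=YOUR_TUNNEL_TOKEN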

ref:
https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/deployment-guides/kubernetes/

Configure Published Application Routes

First of all, make sure you host your domains on Cloudflare, so the following setup can update your domain's DNS records automatically.

Assume you have the following Services in your Kubernetes cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-blog
spec:
  selector:
    app: my-blog
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http

You need to configure your published application routes based on your Services, for instance:

  • Route 1:
    • Domain: example.com
    • Path: blog
    • Type: HTTP
    • URL: my-blog.default:80 => format: your-service.your-namespace:your-service-port
  • Route 2:
    • Domain: example.com
    • Path: (leave it blank)
    • Type: HTTP
    • URL: frontend.default:80 => format: your-service.your-namespace:your-service-port

Deploy cloudflared to Kubernetes

We will deploy cloudflared as a Deployment in Kubernetes. It acts as a connector that routes traffic from Cloudflare's global network directly to your private services. You don't need to expose any of your services to the public Internet.

apiVersion: v1
kind: Secret
metadata:
  name: cloudflared-tunnel-token
stringData:
  token: YOUR_TUNNEL_TOKEN
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tunnel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tunnel
  template:
    metadata:
      labels:
        app: tunnel
    spec:
      terminationGracePeriodSeconds: 25
      nodeSelector:
        cloud.google.com/compute-class: "autopilot-spot"
      securityContext:
        sysctls:
          # Allows ICMP traffic (ping, traceroute) to resources behind cloudflared
          - name: net.ipv4.ping_group_range
            value: "65532 65532"
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          command:
            - cloudflared
            - tunnel
            - --no-autoupdate
            - --loglevel
            - debug
            - --metrics
            - 0.0.0.0:2000
            - run
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflared-tunnel-token
                  key: token
          livenessProbe:
            httpGet:
              # Cloudflared has a /ready endpoint which returns 200 if and only if it has an active connection to Cloudflare's network
              path: /ready
              port: 2000
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/cloudflared-parameters/run-parameters/

kubectl apply -f cloudflared/deployment.yml

That's it! Check the Cloudflare dashboard, and you should see your tunnel status as HEALTHY.

You can now safely delete your Ingress and the underlying load balancer. You don't need them anymore. Enjoy your secure, cost-effective cluster!
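
For example (the Ingress name below is a placeholder; deleting it lets your cloud's ingress controller tear down the load balancer it provisioned):

kubectl get ingress
kubectl delete ingress my-old-ingress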

Deploy Ethereum RPC Provider Load Balancer with HAProxy in Kubernetes (AWS EKS)

To achieve high availability and better performance, we can put an HAProxy load balancer in front of multiple Ethereum RPC providers, and automatically adjust traffic weights based on the latency and block timestamp of each RPC endpoint.

ref:
https://www.haproxy.org/

Configurations

In haproxy.cfg, we have a backend named rpc-backend, and two RPC endpoints: quicknode and alchemy as upstream servers.

global
    log stdout format raw local0 info
    stats socket ipv4@*:9999 level admin expose-fd listeners
    stats timeout 5s

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 10s
    timeout client 60s
    timeout server 60s
    timeout http-request 60s

frontend stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 10s

frontend http
    bind *:8000
    option forwardfor
    default_backend rpc-backend

backend rpc-backend
    balance leastconn
    server quicknode 127.0.0.1:8001 weight 100
    server alchemy 127.0.0.1:8002 weight 100

frontend quicknode-frontend
    bind *:8001
    option dontlog-normal
    default_backend quicknode-backend

backend quicknode-backend
    balance roundrobin
    http-request set-header Host xxx.quiknode.pro
    http-request set-path /xxx
    server quicknode xxx.quiknode.pro:443 sni str(xxx.quiknode.pro) check-ssl ssl verify none

frontend alchemy-frontend
    bind *:8002
    option dontlog-normal
    default_backend alchemy-backend

backend alchemy-backend
    balance roundrobin
    http-request set-header Host xxx.alchemy.com
    http-request set-path /xxx
    server alchemy xxx.alchemy.com:443 sni str(xxx.alchemy.com) check-ssl ssl verify none

ref:
https://docs.haproxy.org/2.7/configuration.html
https://www.haproxy.com/documentation/hapee/latest/configuration/

Test it on local:

docker run --rm -v $PWD:/usr/local/etc/haproxy \
-p 8000:8000 \
-p 8404:8404 \
-p 9999:9999 \
-i -t --name haproxy haproxy:2.7.0

docker exec -i -t -u 0 haproxy bash

echo "show stat" | socat stdio TCP:127.0.0.1:9999
echo "set weight rpc-backend/quicknode 0" | socat stdio TCP:127.0.0.1:9999

# or, if the admin socket is exposed as a unix socket file
apt update
apt install socat -y
echo "set weight rpc-backend/alchemy 0" | socat stdio /var/lib/haproxy/haproxy.sock

ref:
https://www.redhat.com/sysadmin/getting-started-socat

Healthcheck

Then the important part: we're going to run a simple but flexible healthcheck script, called the node weighter, as a sidecar container, so it can reach the HAProxy admin socket of the HAProxy container through 127.0.0.1:9999.

The node weighter can be written in any language. Here is a TypeScript example:

in HAProxyConnector.ts, which sets weights through the HAProxy admin socket:

import net from "net"
import { sum } from "lodash" // assumed helper; any sum() over an array of numbers works here

export interface ServerWeight {
    backendName: string
    serverName: string
    weight: number
}

export class HAProxyConnector {
    constructor(readonly adminHost = "127.0.0.1", readonly adminPort = 9999) {}

    setWeights(serverWeights: ServerWeight[]) {
        const scaledServerWeights = this.scaleWeights(serverWeights)

        const commands = scaledServerWeights.map(server => {
            return `set weight ${server.backendName}/${server.serverName} ${server.weight}\n`
        })

        const client = net.createConnection({ host: this.adminHost, port: this.adminPort }, () => {
            console.log("HAProxyAdminSocketConnected")
        })
        client.on("error", err => {
            console.log("HAProxyAdminSocketError")
        })
        client.on("data", data => {
            console.log("HAProxyAdminSocketData")
            console.log(data.toString().trim())
        })

        client.write(commands.join(""))
    }

    // not private: RPCProxyWeighter calls this directly when shouldScale is enabled
    scaleWeights(serverWeights: ServerWeight[]) {
        const totalWeight = sum(serverWeights.map(server => server.weight))

        return serverWeights.map(server => {
            server.weight = Math.floor((server.weight / totalWeight) * 256)
            return server
        })
    }
}

in RPCProxyWeighter.ts, which calculates weights based on custom healthcheck logic:

import { ethers } from "ethers"

import { HAProxyConnector } from "./connectors/HAProxyConnector"
import config from "./config.json"
// NOTE: Log (a logger factory) and sleep(ms) are small project utilities that are not shown here

export interface Server {
    backendName: string
    serverName: string
    serverUrl: string
}

export interface ServerWithWeight {
    backendName: string
    serverName: string
    weight: number
    [metadata: string]: any
}

// not defined in the original snippet; reconstructed from how getHealthInfo() below uses it
export interface HealthInfo {
    blockNumber: number
    blockTimestamp: number
    blockTimestampDelayMsec: number
    isBlockTooOld: boolean
    latency: number
    normalizedLatency: number
    isLatencyTooHigh: boolean
}

export class RPCProxyWeighter {
    protected readonly log = Log.getLogger(RPCProxyWeighter.name)
    protected readonly connector: HAProxyConnector

    protected readonly ADJUST_INTERVAL_SEC = 60 // 60 seconds
    protected readonly MAX_BLOCK_TIMESTAMP_DELAY_MSEC = 150 * 1000 // 150 seconds
    protected readonly MAX_LATENCY_MSEC = 3 * 1000 // 3 seconds
    protected shouldScale = false
    protected totalWeight = 0

    constructor() {
        this.connector = new HAProxyConnector(config.admin.host, config.admin.port)
    }

    async start() {
        while (true) {
            let serverWithWeights = await this.calculateWeights(config.servers)
            if (this.shouldScale) {
                serverWithWeights = this.connector.scaleWeights(serverWithWeights)
            }
            this.connector.setWeights(serverWithWeights)

            await sleep(1000 * this.ADJUST_INTERVAL_SEC)
        }
    }

    async calculateWeights(servers: Server[]) {
        this.totalWeight = 0

        const serverWithWeights = await Promise.all(
            servers.map(async server => {
                try {
                    return await this.calculateWeight(server)
                } catch (err: any) {
                    return {
                        backendName: server.backendName,
                        serverName: server.serverName,
                        weight: 0,
                    }
                }
            }),
        )

        // if all endpoints are unhealthy, overwrite weights to 100
        if (this.totalWeight === 0) {
            for (const server of serverWithWeights) {
                server.weight = 100
            }
        }

        return serverWithWeights
    }

    async calculateWeight(server: Server) {
        const healthInfo = await this.getHealthInfo(server.serverUrl)

        const serverWithWeight: ServerWithWeight = {
            ...{
                backendName: server.backendName,
                serverName: server.serverName,
                weight: 0,
            },
            ...healthInfo,
        }

        if (healthInfo.isBlockTooOld || healthInfo.isLatencyTooHigh) {
            return serverWithWeight
        }

        // normalizedLatency: the lower the better
        // blockTimestampDelayMsec: the lower the better
        // both units are milliseconds at the same scale
        // serverWithWeight.weight = 1 / healthInfo.normalizedLatency + 1 / healthInfo.blockTimestampDelayMsec

        // NOTE: if we're using `balance source` in HAProxy, the weight can only be 100% or 0%,
        // therefore, as long as the RPC endpoint is healthy, we always set the same weight
        serverWithWeight.weight = 100

        this.totalWeight += serverWithWeight.weight

        return serverWithWeight
    }

    protected async getHealthInfo(serverUrl: string): Promise<HealthInfo> {
        const provider = new ethers.providers.StaticJsonRpcProvider(serverUrl)

        // TODO: add timeout
        const start = Date.now()
        const blockNumber = await provider.getBlockNumber()
        const end = Date.now()

        const block = await provider.getBlock(blockNumber)

        const blockTimestamp = block.timestamp
        const blockTimestampDelayMsec = Math.floor(Date.now() / 1000 - blockTimestamp) * 1000
        const isBlockTooOld = blockTimestampDelayMsec >= this.MAX_BLOCK_TIMESTAMP_DELAY_MSEC

        const latency = end - start
        const normalizedLatency = this.normalizeLatency(latency)
        const isLatencyTooHigh = latency >= this.MAX_LATENCY_MSEC

        return {
            blockNumber,
            blockTimestamp,
            blockTimestampDelayMsec,
            isBlockTooOld,
            latency,
            normalizedLatency,
            isLatencyTooHigh,
        }
    }

    protected normalizeLatency(latency: number) {
        if (latency <= 40) {
            return 1
        }

        const digits = Math.floor(latency).toString().length
        const base = Math.pow(10, digits - 1)
        return Math.floor(latency / base) * base
    }
}

in config.json:

Technically, we don't need this config file; we could read the actual URLs from the HAProxy admin socket directly. However, creating a JSON file that contains the URLs is much simpler.

{
    "admin": {
        "host": "127.0.0.1",
        "port": 9999
    },
    "servers": [
        {
            "backendName": "rpc-backend",
            "serverName": "quicknode",
            "serverUrl": "https://xxx.quiknode.pro/xxx"
        },
        {
            "backendName": "rpc-backend",
            "serverName": "alchemy",
            "serverUrl": "https://xxx.alchemy.com/xxx"
        }
    ]
}

ref:
https://www.haproxy.com/documentation/hapee/latest/api/runtime-api/set-weight/
https://sleeplessbeastie.eu/2020/01/29/how-to-use-haproxy-stats-socket/

Deployments

apiVersion: v1
kind: ConfigMap
metadata:
  name: rpc-proxy-config-file
data:
  haproxy.cfg: |
    ...
  config.json: |
    ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rpc-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rpc-proxy
  template:
    metadata:
      labels:
        app: rpc-proxy
    spec:
      volumes:
        - name: rpc-proxy-config-file
          configMap:
            name: rpc-proxy-config-file
      containers:
        - name: haproxy
          image: haproxy:2.7.0
          ports:
            - containerPort: 8000
              protocol: TCP
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 256Mi
          volumeMounts:
            - name: rpc-proxy-config-file
              subPath: haproxy.cfg
              mountPath: /usr/local/etc/haproxy/haproxy.cfg
              readOnly: true
        - name: node-weighter
          image: your-node-weighter
          command: ["node", "./index.js"]
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 256Mi
          volumeMounts:
            - name: rpc-proxy-config-file
              subPath: config.json
              mountPath: /path/to/build/config.json
              readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: rpc-proxy
spec:
  clusterIP: None
  selector:
    app: rpc-proxy
  ports:
    - name: http
      port: 8000
      targetPort: 8000

The RPC load balancer can then be accessed through http://rpc-proxy.default.svc.cluster.local:8000 inside the Kubernetes cluster.
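
A quick way to verify it from any pod inside the cluster is a standard Ethereum JSON-RPC call through the proxy:

curl -X POST http://rpc-proxy.default.svc.cluster.local:8000 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}'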

ref:
https://www.containiq.com/post/kubernetes-sidecar-container
https://hub.docker.com/_/haproxy

Amazon EKS: Setup Cluster Autoscaler

The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail to schedule due to insufficient resources, or when nodes are underutilized and their pods can be rescheduled onto other nodes. Here we're going to deploy the AWS implementation, which carries out the Cluster Autoscaler's decisions by communicating with AWS products and services such as Amazon EC2.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html

Configure IAM Permissions

If your existing node groups were created with eksctl create nodegroup --asg-access, then this policy already exists and you can skip this step.

in cluster-autoscaler-policy-staging.json

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/k8s.io/cluster-autoscaler/perp-staging": "owned"
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*"
        }
    ]
}

aws --profile perp iam create-policy \
  --policy-name AmazonEKSClusterAutoscalerPolicyStaging \
  --policy-document file://cluster-autoscaler-policy-staging.json

eksctl --profile=perp create iamserviceaccount \
  --cluster=perp-staging \
  --namespace=kube-system \
  --name=cluster-autoscaler \
  --attach-policy-arn=arn:aws:iam::xxx:policy/AmazonEKSClusterAutoscalerPolicyStaging \
  --override-existing-serviceaccounts \
  --approve

ref:
https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html

Deploy Cluster Autoscaler

Download the deployment yaml of cluster-autoscaler:

curl -o cluster-autoscaler-autodiscover.yaml \
https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Before you apply the file, it's recommended to check whether the version of cluster-autoscaler matches the Kubernetes major and minor version of your cluster. Find the version number on GitHub releases.
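
To check your cluster's Kubernetes version before picking an image tag:

# the server version determines which cluster-autoscaler release line to use
kubectl version
kubectl get nodes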

kubectl apply -f cluster-autoscaler-autodiscover.yaml
# or
kubectl set image deployment cluster-autoscaler \
  -n kube-system \
  cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.3

Do some tweaks:

  • --balance-similar-node-groups ensures that there is enough available compute across all availability zones.
  • --skip-nodes-with-system-pods=false ensures that there are no problems with scaling to zero.

kubectl patch deployment cluster-autoscaler \
  -n kube-system \
  -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"}}}}}'

kubectl -n kube-system edit deployment.apps/cluster-autoscaler
# change the command to the following:
# spec:
#   containers:
#   - command
#     - ./cluster-autoscaler
#     - --v=4
#     - --stderrthreshold=info
#     - --cloud-provider=aws
#     - --skip-nodes-with-local-storage=false
#     - --expander=least-waste
#     - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/perp-staging
#     - --balance-similar-node-groups
#     - --skip-nodes-with-system-pods=false

ref:
https://aws.github.io/aws-eks-best-practices/cluster-autoscaling/
https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler#additional-configuration

Amazon EKS: Setup aws-load-balancer-controller for Kubernetes Ingress

AWS Load Balancer Controller replaces the functionality of the AWS ALB Ingress Controller.

Install aws-load-balancer-controller

Create an IAM OIDC provider for your cluster

eksctl utils associate-iam-oidc-provider --profile=perp \
  --region ap-northeast-1 \
  --cluster perp-staging \
  --approve

ref:
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

Create a Kubernetes ServiceAccount for the Controller

curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.0/docs/install/iam_policy.json

aws iam create-policy --profile=perp \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json

eksctl create iamserviceaccount --profile=perp \
  --cluster=perp-staging \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::XXX:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve

ref:
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/

Deploy aws-load-balancer-controller

helm repo add eks https://aws.github.io/eks-charts

helm ls -A

helm upgrade -i \
  aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=perp-staging \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=ap-northeast-1 \
  --set vpcId=vpc-XXX

ref:
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html#helm-v3-or-later
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/configurations/
https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller

Create an Ingress using ALB

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graph-node-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
    alb.ingress.kubernetes.io/certificate-arn: YOUR_ACM_ARN
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600,deletion_protection.enabled=true
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.graph-node-jsonrpc: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node", "servicePort": 8000, "weight": 100}
      ]}}
    alb.ingress.kubernetes.io/actions.graph-node-websocket: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node", "servicePort": 8001, "weight": 100}
      ]}}
spec:
  rules:
  - host: "subgraph-api.example.com"
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: graph-node-jsonrpc
              port:
                name: use-annotation
  - host: "subgraph-ws.example.com"
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: graph-node-websocket
              port:
                name: use-annotation

ref:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/annotations/

kubectl apply -f graph-node-ingress.yaml -R

Then create Route 53 records for the above domains, and point them to the newly created ALB.
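
You can grab the ALB hostname from the Ingress status once the controller has provisioned it:

kubectl get ingress graph-node-ingress
# the ADDRESS column shows the ALB hostname to point your Route 53 alias/CNAME records at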