Amazon EKS: Setup aws-load-balancer-controller for Kubernetes Ingress

AWS Load Balancer Controller replaces the functionality of the AWS ALB Ingress Controller.

Install aws-load-balancer-controller

Create an IAM OIDC provider for your cluster

eksctl utils associate-iam-oidc-provider --profile=perp \
  --region ap-northeast-1 \
  --cluster perp-staging \
  --approve

ref:
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

Create a Kubernetes ServiceAccount for the Controller

curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.0/docs/install/iam_policy.json

aws iam create-policy --profile=perp \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json

eksctl create iamserviceaccount --profile=perp \
  --cluster=perp-staging \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::XXX:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve
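
You can confirm the IAM service account was created:

eksctl get iamserviceaccount --profile=perp --cluster=perp-staging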

ref:
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/

Deploy aws-load-balancer-controller

helm repo add eks https://aws.github.io/eks-charts

helm ls -A

helm upgrade -i \
  aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=perp-staging \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=ap-northeast-1 \
  --set vpcId=vpc-XXX
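
Verify that the controller is installed and running:

kubectl get deployment -n kube-system aws-load-balancer-controller

kubectl logs -n kube-system deployment/aws-load-balancer-controller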

ref:
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html#helm-v3-or-later
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/installation/
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/deploy/configurations/
https://github.com/aws/eks-charts/tree/master/stable/aws-load-balancer-controller

Create an Ingress using ALB

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graph-node-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
    alb.ingress.kubernetes.io/certificate-arn: YOUR_ACM_ARN
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600,deletion_protection.enabled=true
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.graph-node-jsonrpc: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node", "servicePort": 8000, "weight": 100}
      ]}}
    alb.ingress.kubernetes.io/actions.graph-node-websocket: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node", "servicePort": 8001, "weight": 100}
      ]}}
spec:
  rules:
  - host: "subgraph-api.example.com"
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: graph-node-jsonrpc
              port:
                name: use-annotation
  - host: "subgraph-ws.example.com"
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: graph-node-websocket
              port:
                name: use-annotation

ref:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/annotations/

kubectl apply -f graph-node-ingress.yaml -R

Then create Route 53 records for the above domains, and point them to the newly created ALB.
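
For example, you can fetch the ALB hostname from the Ingress status and upsert a CNAME record with the AWS CLI; a sketch (the hosted zone ID is a placeholder, and an alias A record would work too):

ALB_HOSTNAME=$(kubectl get ingress graph-node-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

cat <<EOF > record.json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "subgraph-api.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "${ALB_HOSTNAME}"}]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets --profile=perp \
  --hosted-zone-id YOUR_HOSTED_ZONE_ID \
  --change-batch file://record.json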

The Graph: Subgraph and GRT Token

The Graph is a protocol for indexing and querying blockchain data. Currently, The Graph comes in two versions: the legacy version and the decentralized version. The legacy version is a centralized, managed service hosted by The Graph, and it will eventually be shut down. The decentralized version (aka The Graph Network) consists of 4 major roles: Developers, Indexers, Curators, and Delegators.

ref:
https://thegraph.com/

Video from Finematics
https://finematics.com/the-graph-explained/

Terminologies

Define a Subgraph

A subgraph defines one or more entities of indexed data: what kind of data on the blockchain you want to index for faster querying. Once deployed, subgraphs can be queried by dApps to fetch blockchain data to power their frontend interfaces. Basically, a subgraph is like a database, and an entity (a GraphQL type) is like a table in an RDBMS.

A subgraph definition consists of 3 files:

  • subgraph.yaml: a YAML file that defines the subgraph manifest and metadata.
  • schema.graphql: a GraphQL schema that defines what data (entities) are stored, and how to query it via GraphQL.
  • mappings.ts: AssemblyScript code that translates blockchain data (events, blocks, or contract calls) to GraphQL entities.

GraphQL Schema

First, we need to design our GraphQL entity schemas (the data model), which mainly depend on how you want to query the data rather than on how the data is emitted from the blockchain. GraphQL schemas are defined using the GraphQL schema language.

An example of schema.graphql:

type Market @entity {
  id: ID!
  baseToken: Bytes!
  pool: Bytes!
  feeRatio: BigInt!
  tradingFee: BigDecimal!
  tradingVolume: BigDecimal!
  blockNumberAdded: BigInt!
  timestampAdded: BigInt!
}

type Trader @entity {
  id: ID!
  realizedPnl: BigDecimal!
  fundingPayment: BigDecimal!
  tradingFee: BigDecimal!
  badDebt: BigDecimal!
  totalPnl: BigDecimal!
  positions: [Position!]! @derivedFrom(field: "traderRef")
}

type Position @entity {
  id: ID!
  trader: Bytes!
  baseToken: Bytes!
  positionSize: BigDecimal!
  openNotional: BigDecimal!
  openPrice: BigDecimal!
  realizedPnl: BigDecimal!
  tradingFee: BigDecimal!
  badDebt: BigDecimal!
  totalPnl: BigDecimal!
  traderRef: Trader!
}

ref:
https://thegraph.com/docs/developer/create-subgraph-hosted#the-graphql-schema
https://thegraph.com/docs/developer/graphql-api
https://graphql.org/learn/schema/

It's also worth noting that The Graph supports time-travel queries: you can query the state of your entities at an arbitrary block in the past:

{
  positions(
    block: {
      number: 1234567
    },
    where: {
      trader: "0x5abfec25f74cd88437631a7731906932776356f9"
    }
  ) {
    id
    trader
    baseToken
    positionSize
    openNotional
    openPrice
    realizedPnl
    fundingPayment
    tradingFee
    badDebt
    totalPnl
    blockNumber
    timestamp
  }
}

ref:
https://thegraph.com/docs/developer/graphql-api#time-travel-queries
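
A subgraph's GraphQL API is served over plain HTTP POST, so queries like the above can also be sent with curl; a sketch (the endpoint uses the Hosted Service URL format with placeholder names):

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ positions(first: 5) { id trader positionSize } }"}' \
  "https://api.thegraph.com/subgraphs/name/<YOUR_GITHUB_USERNAME>/<YOUR_SUBGRAPH_REPO>"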

Subgraph Manifest

Second, we must provide a manifest that tells The Graph which contracts we would like to listen to and which contract events we want to index, along with a mapping file that instructs The Graph on how to transform blockchain data into GraphQL entities.

A template file of subgraph.yaml:

specVersion: 0.0.2
description: Test Subgraph
repository: https://github.com/vinta/my-subgraph
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: ClearingHouse
    network: {{ network }}
    source:
      abi: ClearingHouse
      address: {{ clearingHouse.address }}
      startBlock: {{ clearingHouse.startBlock }}
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.4
      language: wasm/assemblyscript
      file: ./src/mappings/clearingHouse.ts
      entities:
        - Protocol
        - Market
        - Trader
        - Position
      abis:
        - name: ClearingHouse
          file: ./abis/ClearingHouse.json
      eventHandlers:
        - event: PoolAdded(indexed address,indexed uint24,indexed address)
          handler: handlePoolAdded
        - event: PositionChanged(indexed address,indexed address,int256,int256,uint256,int256,uint256)
          handler: handlePositionChanged

Since we usually deploy our contracts to multiple chains (at least one for mainnet and one for testnet), we can use a template engine (like mustache.js) to facilitate deployment.

$ cat configs/arbitrum-rinkeby.json
{
    "network": "arbitrum-rinkeby",
    "clearingHouse": {
        "address": "0xYourContractAddress",
        "startBlock": 1234567
    }
}

# generate the subgraph manifest for different networks
$ mustache configs/arbitrum-rinkeby.json subgraph.template.yaml > subgraph.yaml
$ mustache configs/arbitrum-one.json subgraph.template.yaml > subgraph.yaml
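
Since each run overwrites subgraph.yaml, deploy between runs if you script it; a minimal sketch:

for network in arbitrum-rinkeby arbitrum-one; do
  mustache "configs/${network}.json" subgraph.template.yaml > subgraph.yaml
  # deploy here, before the next iteration overwrites subgraph.yaml
done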

It's worth noting that The Graph Legacy (the Hosted Service) supports most common networks, for instance, mainnet, rinkeby, bsc, matic, arbitrum-one, and optimism. However, The Graph Network (the decentralized version) only supports Ethereum mainnet and rinkeby.

You can find the full list of supported networks in the documentation:
https://thegraph.com/docs/developer/create-subgraph-hosted#from-an-existing-contract

Mappings

Mappings are written in AssemblyScript and will be compiled to WebAssembly (WASM) when deploying. AssemblyScript's syntax is similar to TypeScript, but it's actually a completely different language.

For each event handler defined in subgraph.yaml under mapping.eventHandlers, we must create an exported function of the same name. What we do in an event handler is basically:

  1. Creating new entities or loading existing ones by id.
  2. Updating fields of entities from a blockchain event.
  3. Saving entities to The Graph.
    • It's not necessary to load an entity before updating it. It's fine to simply create the entity, set properties, then save. If the entity already exists, the changes will be merged automatically.

export function handlePoolAdded(event: PoolAdded): void {
    // upsert Protocol
    const protocol = getOrCreateProtocol()
    protocol.publicMarketCount = protocol.publicMarketCount.plus(BI_ONE)

    // upsert Market
    const market = getOrCreateMarket(event.params.baseToken)
    market.baseToken = event.params.baseToken
    market.pool = event.params.pool
    market.feeRatio = BigInt.fromI32(event.params.feeRatio)
    market.blockNumberAdded = event.block.number
    market.timestampAdded = event.block.timestamp

    // commit changes
    protocol.save()
    market.save()
}

export function handlePositionChanged(event: PositionChanged): void {
    // upsert Market
    const market = getOrCreateMarket(event.params.baseToken)
    market.tradingFee = market.tradingFee.plus(event.params.fee)
    market.tradingVolume = market.tradingVolume.plus(abs(event.params.exchangedPositionNotional))
    ...

    // upsert Trader
    const trader = getOrCreateTrader(event.params.trader)
    trader.tradingFee = trader.tradingFee.plus(event.params.fee)
    trader.realizedPnl = trader.realizedPnl.plus(event.params.realizedPnl)
    ...

    // upsert Position
    const position = getOrCreatePosition(event.params.trader, event.params.baseToken)
    const side = event.params.exchangedPositionSize.ge(BD_ZERO) ? Side.BUY : Side.SELL
    ...

    // commit changes
    market.save()
    trader.save()
    position.save()
}

We can also access contract states and call contract functions at the current block. Contract calls are made through @graphprotocol/graph-ts, which is not as powerful as libraries like ethers.js. And no, we cannot import ethers.js in mappings, since mappings are written in AssemblyScript. Also, contract calls are quite "expensive" in terms of indexing performance. In extreme cases, some indexers might avoid syncing a very slow subgraph, or charge a premium for serving queries.

export function handlePoolAdded(event: PoolAdded): void {
    ...
    const pool = UniswapV3Pool.bind(event.params.pool)
    market.poolTickSpacing = pool.tickSpacing()
    ...
}

ref:
https://thegraph.com/docs/developer/create-subgraph-hosted#writing-mappings
https://thegraph.com/docs/developer/assemblyscript-api

In addition to event handlers, we're also able to define call handlers and block handlers. A call handler listens to a specific contract function call, and receives the input and output of the call as the handler argument. A block handler, in contrast, is called after every block, or after blocks that match a predefined filter: every block which contains a call to a contract listed in dataSources.

ref:
https://thegraph.com/docs/developer/create-subgraph-hosted#defining-a-call-handler
https://thegraph.com/docs/developer/create-subgraph-hosted#block-handlers

Here're references to how other projects organize their subgraphs:
https://github.com/Uniswap/uniswap-v3-subgraph
https://github.com/Synthetixio/synthetix-subgraph
https://github.com/mcdexio/mai3-perpetual-graph

Deploy a Subgraph

Deploy to Legacy Explorer

Before deploying your subgraph to the Legacy Explorer (the centralized and hosted version of The Graph), you first need to create it on the Legacy Explorer dashboard.

Then run the following commands to deploy:

$ mustache configs/arbitrum-rinkeby.json subgraph.template.yaml > subgraph.yaml

$ graph auth --product hosted-service <YOUR_THE_GRAPH_ACCESS_TOKEN>
$ graph deploy --product hosted-service <YOUR_GITHUB_USERNAME>/<YOUR_SUBGRAPH_REPO>
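
You can also run the build steps separately to catch errors before deploying: graph codegen generates AssemblyScript types from the schema and ABIs, and graph build compiles the mappings to WASM.

$ graph codegen
$ graph build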

ref:
https://thegraph.com/docs/developer/deploy-subgraph-hosted

Deploy to Subgraph Studio

When we deploy a subgraph to Subgraph Studio (the decentralized version of The Graph), we just push it to the Studio, where we're able to test it. In contrast, when we "publish" a subgraph in Subgraph Studio, we are publishing it on-chain. Unfortunately, Subgraph Studio currently only supports Ethereum mainnet and the Rinkeby testnet.

$ graph auth --studio <YOUR_SUBGRAPH_DEPLOY_KEY>
$ graph deploy --studio <YOUR_SUBGRAPH_SLUG>

ref:
https://thegraph.com/docs/developer/deploy-subgraph-studio

Token Economics

Before we talk about the token economics of the GRT token, it's important to know that the following description only applies to The Graph Network, the decentralized version of The Graph. Also, the name "The Graph Network" is a bit ambiguous: it is not a new network or a new blockchain; rather, it is a web service that charges for HTTP/WebSocket API calls in GRT.

To make the GRT token somehow valuable, you need to pay for each query in GRT when you query data (through GraphQL APIs) from The Graph Network. First, you connect your wallet and create an account on Subgraph Studio to obtain an API key. Then you deposit some GRT into the account's billing balance on Polygon, since the billing contract is built on Polygon. At the end of each week, if you used your API keys to query data, you will receive an invoice based on the query fees you generated during this period. The invoice is paid using the GRT available in your balance.

ref:
https://thegraph.com/docs/studio/billing

When it comes to token economics:

  • Indexers earn query fees and indexing rewards. GRT would be slashed if indexers are malicious or serve incorrect data, though there's no documentation about how exactly slashing works.
  • Delegators earn a portion of query fees and indexing rewards by delegating GRT to existing indexers.
  • Curators earn a portion of query fees for the subgraphs they signal on by depositing GRT into a bonding curve of a specific subgraph.

ref:
https://thegraph.com/blog/the-graph-grt-token-economics
https://thegraph.com/blog/the-graph-network-in-depth-part-1
https://thegraph.com/blog/the-graph-network-in-depth-part-2

Query Fee

The price of queries is set by indexers and varies based on the cost to index the subgraph, the demand for queries, the amount of curation signal, and the market rate for blockchain queries. Querying data from the hosted version of The Graph is currently free, though.

The Graph has developed a Cost Model (Agora) for pricing queries, and there is also a microtransaction system (Scalar) that uses state channels to aggregate and compress transactions before being finalized on-chain.

ref:
https://github.com/graphprotocol/agora
https://thegraph.com/blog/scalar

Amazon EKS: Setup kubernetes-external-secrets with AWS Secrets Manager

kubernetes-external-secrets allows you to use external secret management systems, like AWS Secrets Manager, to securely add Secrets in Kubernetes, so Pods can access Secrets normally.

ref:
https://github.com/external-secrets/kubernetes-external-secrets

AWS Secrets Manager

For instance, we create a secret named YOUR_SECRET in AWS Secrets Manager, in the same region as our EKS cluster, using DefaultEncryptionKey as the encryption key. The content of the secret looks like:

{
  "KEY_1": "VALUE_1",
  "KEY_2": "VALUE_2"
}
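
Such a secret can also be created from the CLI; a sketch matching the JSON above:

aws secretsmanager create-secret --profile=perp \
--region ap-northeast-1 \
--name YOUR_SECRET \
--secret-string '{"KEY_1": "VALUE_1", "KEY_2": "VALUE_2"}'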

We can retrieve the secret value:

aws secretsmanager get-secret-value --profile=perp \
--region ap-northeast-1 \
--secret-id YOUR_SECRET

kubernetes-external-secrets

For kubernetes-external-secrets to work properly, it must be granted access to AWS Secrets Manager. To achieve that, we need to create an IAM role for kubernetes-external-secrets' service account.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

Configure Secrets Backends

Create an IAM OIDC provider for the cluster:

eksctl utils associate-iam-oidc-provider --profile=perp \
--region ap-northeast-1 \
--cluster perp-staging \
--approve

aws iam list-open-id-connect-providers --profile=perp

ref:
https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html

Create an IAM policy that allows the role to access all secrets we created in AWS Secrets Manager:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --profile=perp --query "Account" --output text)

cat <<EOF > policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": [
        "arn:aws:secretsmanager:ap-northeast-1:${AWS_ACCOUNT_ID}:secret:*"
      ]
    }
  ]
}
EOF

aws iam create-policy --profile=perp \
--policy-name perp-staging-secrets-policy --policy-document file://policy.json

Attach the above IAM policy to an IAM role, and define a trust policy that allows AssumeRoleWithWebIdentity for the service account external-secrets-kubernetes-external-secrets, which will be created later:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --profile=perp --query "Account" --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --profile=perp --name perp-staging --region ap-northeast-1 --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///") 

cat <<EOF > trust.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com",
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:external-secrets-kubernetes-external-secrets"
        }
      }
    }
  ]
}
EOF

aws iam create-role --profile=perp \
--role-name perp-staging-secrets-role \
--assume-role-policy-document file://trust.json

aws iam attach-role-policy --profile=perp \
--role-name perp-staging-secrets-role \
--policy-arn YOUR_POLICY_ARN
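
Confirm the policy is attached to the role:

aws iam list-attached-role-policies --profile=perp \
--role-name perp-staging-secrets-role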

ref:
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
https://gist.github.com/lukaszbudnik/f1f42bd5a57430e3c25034200ba44c2e

Deploy kubernetes-external-secrets Controller

helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/

helm install external-secrets \
external-secrets/kubernetes-external-secrets \
--skip-crds \
--set env.AWS_REGION=ap-northeast-1 \
--set securityContext.fsGroup=65534 \
--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"='YOUR_ROLE_ARN'

helm list

The chart automatically creates a service account named external-secrets-kubernetes-external-secrets in Kubernetes.
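
You can check that the service account exists and carries the eks.amazonaws.com/role-arn annotation:

kubectl get serviceaccount external-secrets-kubernetes-external-secrets -o yaml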

ref:
https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets

Deploy ExternalSecret

The ExternalSecret below will generate a Secret object with the same name (example-secret), whose content comes from AWS Secrets Manager:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: example-secret
spec:
  backendType: secretsManager
  region: ap-northeast-1
  dataFrom:
    - YOUR_SECRET
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: busybox:latest
        envFrom:
        - secretRef:
            name: example-secret

kubectl get secret example-secret -o jsonpath="{.data.KEY_1}" | base64 --decode

ref:
https://gist.github.com/lukaszbudnik/f1f42bd5a57430e3c25034200ba44c2e

Amazon EKS: Manage Kubernetes Cluster with ClusterConfig (2025 Version)

Amazon Elastic Kubernetes Service (EKS) is managed Kubernetes as a Service on AWS, the equivalent of Google Kubernetes Engine (GKE) on Google Cloud.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
https://vinta.ws/code/the-complete-guide-to-google-kubernetes-engine-gke.html

Install CLI Tools

You need to install some command-line tools. It's also better to use the same Kubernetes version between client and server; otherwise, you will get something like: WARNING: version difference between client (1.31) and server (1.33) exceeds the supported minor version skew of +/-1.

# kubectl (latest stable, or pin a specific version)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/$(arch)/kubectl"
curl -LO "https://dl.k8s.io/release/v1.32.7/bin/darwin/arm64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin

# eksctl
curl -LO "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_$(arch).tar.gz"
tar -xzf "eksctl_$(uname -s)_$(arch).tar.gz"
sudo install -m 0755 eksctl /usr/local/bin

# awscli
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
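
After installing, confirm the versions (kubectl version prints both the client version and, once a context is configured, the server version):

kubectl version
eksctl version
aws --version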

ref:
https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
https://github.com/eksctl-io/eksctl
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

The following tools are also recommended; they provide fancy terminal UIs for interacting with your Kubernetes clusters.

# k9s
brew install derailed/k9s/k9s

# fubectl
curl -LO https://rawgit.com/kubermatic/fubectl/master/fubectl.source
source <path-to>/fubectl.source

ref:
https://github.com/derailed/k9s
https://github.com/kubermatic/fubectl

Create EKS Cluster

Use ClusterConfig to define the cluster.

# https://eksctl.io/usage/schema/
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: your-cluster
  region: ap-northeast-1
  version: "1.33"

# https://eksctl.io/usage/fargate-support/
fargateProfiles:
  - name: fp-default
    selectors:
      # all workloads in "fargate" Kubernetes namespace will be scheduled onto Fargate
      - namespace: fargate

# https://eksctl.io/usage/nodegroup-managed/
managedNodeGroups:
  - name: mng-m5-xlarge
    instanceType: m5.xlarge
    spot: true
    minSize: 1
    maxSize: 10
    desiredCapacity: 2
    volumeSize: 100
    nodeRepairConfig:
      enabled: true
    iam:
      withAddonPolicies:
        autoScaler: true
        awsLoadBalancerController: true
        cloudWatch: true
        ebs: true
      attachPolicyARNs:
        # default node policies (required when explicitly setting attachPolicyARNs)
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
        # custom policies
        - arn:aws:iam::xxx:policy/your-custom-policy
    labels:
      service-node: "true"

# https://eksctl.io/usage/cloudwatch-cluster-logging/
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"]
    logRetentionInDays: 30

# https://eksctl.io/usage/addons/
addons:
  # networking addons are enabled by default
  # - name: kube-proxy
  # - name: coredns
  # - name: vpc-cni
  - name: amazon-cloudwatch-observability
  - name: aws-ebs-csi-driver
  - name: eks-pod-identity-agent
  - name: metrics-server
addonsConfig:
  autoApplyPodIdentityAssociations: true

# https://eksctl.io/usage/iamserviceaccounts/
iam:
  # not all addons support Pod Identity Association, so you probably still need OIDC
  withOIDC: true

# https://eksctl.io/usage/kms-encryption/
secretsEncryption:
  keyARN: arn:aws:kms:YOUR_ARN

# preview
AWS_PROFILE=perp eksctl create cluster -f cluster-config.yaml --dry-run

# create
eksctl create cluster -f cluster-config.yaml --profile=perp

If a nodegroup includes attachPolicyARNs, it must also include the default node policies, like AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryPullOnly.

ref:
https://eksctl.io/usage/schema/
https://eksctl.io/usage/iam-policies/
https://github.com/weaveworks/eksctl/tree/main/examples

Managed Nodegroups

You could re-use ClusterConfig to create more managed nodegroups after cluster creation. However, nodegroups are mostly immutable: most fields under the managedNodeGroups section cannot be changed once created. If you need to tweak something, just add a new nodegroup.

Also, if you're not familiar with the instance types on AWS, try Instance Selector: you only need to specify how much CPU, memory, and GPU you want.

managedNodeGroups:
  - name: mng-spot-2c4g
    instanceSelector:
      vCPUs: 2
      memory: 4GiB
      gpus: 0
    spot: true
    minSize: 1
    maxSize: 5
    desiredCapacity: 2
    volumeSize: 100
    nodeRepairConfig:
      enabled: true

eksctl create nodegroup -f cluster-config.yaml --profile=perp
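
To replace a nodegroup, create the new one first, then delete the old one; eksctl cordons and drains the old nodes before removing them (the nodegroup name here is just an example):

eksctl delete nodegroup --region ap-northeast-1 --cluster your-cluster --name mng-m5-xlarge --profile=perp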

ref:
https://eksctl.io/usage/nodegroup-managed/
https://eksctl.io/usage/instance-selector/

Addons

When a cluster is created, EKS automatically installs vpc-cni, coredns and kube-proxy as self-managed addons. Here are other common addons you probably want to install as well:

  • eks-pod-identity-agent: Manages IAM permissions for pods using Pod Identity Associations instead of OIDC
  • amazon-cloudwatch-observability: Collects and sends container metrics, logs, and traces to CloudWatch Container Insights (Logs Insights)
  • aws-ebs-csi-driver: Enables pods to use EBS volumes for persistent storage through Kubernetes PersistentVolumes
  • metrics-server: Provides resource metrics (CPU/memory) for HPA, VPA and kubectl top commands

If not specified explicitly, addons will be created with a role that has all recommended policies attached.

It's worth noting that EKS Pod Identity Associations are AWS's newer, simpler way for Kubernetes pods to assume IAM roles, replacing the older IRSA (IAM Roles for Service Accounts) method.
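
With the eks-pod-identity-agent addon installed, you can bind an IAM role to a service account without OIDC; a sketch (the namespace, service account, and role ARN are placeholders):

eksctl create podidentityassociation \
  --region ap-northeast-1 \
  --cluster your-cluster \
  --namespace default \
  --service-account-name my-app \
  --role-arn arn:aws:iam::xxx:role/my-app-role \
  --profile=perp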

ref:
https://eksctl.io/usage/addons/
https://eksctl.io/usage/pod-identity-associations/
https://eksctl.io/usage/iamserviceaccounts/

IAM Permissions

Pods running on regular nodes assume the NodeInstanceRole; pods running on Fargate nodes use the FargatePodExecutionRole. If you would like to adjust IAM permissions for a nodegroup, use the following command to find out which IAM role it's using:

aws eks describe-nodegroup \
    --region ap-northeast-1 \
    --cluster-name your-cluster \
    --nodegroup-name mng-m5-xlarge \
    --query 'nodegroup.nodeRole' \
    --output text \
    --profile=perp

ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html

Access EKS Cluster

Instead of using the old aws-auth ConfigMap, you can use Access Entries to manage the mapping between AWS IAM users/roles and Kubernetes. Note that the user who runs eksctl create cluster should be left out of the access entries: EKS adds that user automatically.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: your-cluster
  region: ap-northeast-1

accessConfig:
  authenticationMode: API
  accessEntries:
    # - principalARN: arn:aws:iam::xxx:user/user1
    #   accessPolicies:
    #     - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
    #       accessScope:
    #         type: cluster
    - principalARN: arn:aws:iam::xxx:user/user2
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxx:user/user3
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxx:user/production-eks-deploy
      type: STANDARD

# list
eksctl get accessentry --region ap-northeast-1 --cluster your-cluster --profile perp

# create
eksctl create accessentry -f access-entries.yaml --profile perp

ref:
https://eksctl.io/usage/access-entries/
https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html

Connect kubectl to your EKS cluster by creating a kubeconfig file.

# setup context for a cluster
aws eks update-kubeconfig --region ap-northeast-1 --name your-cluster --alias your-cluster --profile=perp

# switch cluster context
kubectl config use-context your-cluster

kubectl get nodes

cat ~/.kube/config

ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Upgrade EKS Cluster

After upgrading your EKS cluster in the AWS Management Console, you might find that kube-proxy is still using the old image from the previous Kubernetes version - just upgrade it manually. Also, if you are going to create an ARM-based nodegroup, you might need to upgrade the core addons first:

eksctl utils update-aws-node --region ap-northeast-1 --cluster your-cluster --approve --profile=perp
eksctl utils update-coredns --region ap-northeast-1 --cluster your-cluster --approve --profile=perp
eksctl utils update-kube-proxy --region ap-northeast-1 --cluster your-cluster --approve --profile=perp

kubectl get pods -n kube-system

If you get Error: ImagePullBackOff, try the following commands, replacing the version (and region) with the one listed in "Latest available kube-proxy container image version for each Amazon EKS cluster version":

kubectl describe daemonset kube-proxy -n kube-system | grep Image
kubectl set image daemonset.apps/kube-proxy -n kube-system \
kube-proxy=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/eks/kube-proxy:v1.25.16-minimal-eksbuild.8

ref:
https://eksctl.io/usage/cluster-upgrade/
https://github.com/weaveworks/eksctl/issues/1088

Delete EKS Cluster

You might need to manually delete/detach the following resources in order to delete the cluster:

  • Detach non-default policies from NodeInstanceRole and FargatePodExecutionRole
  • Fargate Profile
  • EC2 Network Interfaces
  • EC2 Load Balancer

eksctl delete cluster --region ap-northeast-1 --name your-cluster --disable-nodegroup-eviction --profile=perp
Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

To avoid the error "No 'Access-Control-Allow-Origin' header is present on the requested resource":

  • Enable CORS on your S3 bucket
  • Forward the appropriate headers on your CloudFront distribution

Enable CORS on S3 Bucket

In S3 -> [your bucket] -> Permissions -> Cross-origin resource sharing (CORS):

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]

ref:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html

Configure Behaviors on CloudFront Distribution

In CloudFront -> [your distribution] -> Behaviors -> Create Behavior:

  • Path Pattern: *
  • Allowed HTTP Methods: GET, HEAD, OPTIONS
  • Cached HTTP Methods: +OPTIONS
  • Origin Request Policy: Managed-CORS-S3Origin
    • This policy actually whitelists the following headers:
      • Access-Control-Request-Headers
      • Access-Control-Request-Method
      • Origin

ref:
https://aws.amazon.com/premiumsupport/knowledge-center/no-access-control-allow-origin-error/

Validate it's working:

fetch("https://metadata.perp.exchange/config.production.json")
.then((res) => res.json())
.then((out) => { console.log(out) })
.catch((err) => { throw err });
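
You can also inspect the CORS response headers from the command line; the Origin value here is an arbitrary example:

curl -s -o /dev/null -D - \
  -H "Origin: https://app.example.com" \
  "https://metadata.perp.exchange/config.production.json" | grep -i access-control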