Amazon EKS: Manage Kubernetes Cluster with ClusterConfig (2025 Version)

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service on AWS, the counterpart of Google Kubernetes Engine (GKE) on Google Cloud.

ref:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
https://vinta.ws/code/the-complete-guide-to-google-kubernetes-engine-gke.html

Install CLI Tools

You need to install some command-line tools. It is also better to use the same Kubernetes version between client and server; otherwise, you will get something like: WARNING: version difference between client (1.31) and server (1.33) exceeds the supported minor version skew of +/-1.
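
You can check both versions once your kubeconfig is set up:

# prints client and server versions
kubectl version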

# kubectl (latest stable; or pin a version to match your cluster)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/$(arch)/kubectl"
curl -LO "https://dl.k8s.io/release/v1.32.7/bin/darwin/arm64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin

# eksctl
curl -LO "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_$(arch).tar.gz"
tar -xzf "eksctl_$(uname -s)_$(arch).tar.gz"
sudo install -m 0755 eksctl /usr/local/bin

# awscli
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

ref:
https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
https://github.com/eksctl-io/eksctl
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

The following tools are also recommended; they provide fancy terminal UIs to interact with your Kubernetes clusters.

# k9s
brew install derailed/k9s/k9s

# fubectl
curl -LO https://raw.githubusercontent.com/kubermatic/fubectl/master/fubectl.source
source <path-to>/fubectl.source

ref:
https://github.com/derailed/k9s
https://github.com/kubermatic/fubectl

Create EKS Cluster

Use ClusterConfig to define the cluster.

# https://eksctl.io/usage/schema/
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: your-cluster
  region: ap-northeast-1
  version: "1.33"

# https://eksctl.io/usage/fargate-support/
fargateProfiles:
  - name: fp-default
    selectors:
      # all workloads in "fargate" Kubernetes namespace will be scheduled onto Fargate
      - namespace: fargate

# https://eksctl.io/usage/nodegroup-managed/
managedNodeGroups:
  - name: mng-m5-xlarge
    instanceType: m5.xlarge
    spot: true
    minSize: 1
    maxSize: 10
    desiredCapacity: 2
    volumeSize: 100
    nodeRepairConfig:
      enabled: true
    iam:
      withAddonPolicies:
        autoScaler: true
        awsLoadBalancerController: true
        cloudWatch: true
        ebs: true
      attachPolicyARNs:
        # default node policies (required when explicitly setting attachPolicyARNs)
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
        # custom policies
        - arn:aws:iam::xxx:policy/your-custom-policy
    labels:
      service-node: "true"

# https://eksctl.io/usage/cloudwatch-cluster-logging/
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"]
    logRetentionInDays: 30

# https://eksctl.io/usage/addons/
addons:
  # networking addons are enabled by default
  # - name: kube-proxy
  # - name: coredns
  # - name: vpc-cni
  - name: amazon-cloudwatch-observability
  - name: aws-ebs-csi-driver
  - name: eks-pod-identity-agent
  - name: metrics-server
addonsConfig:
  autoApplyPodIdentityAssociations: true

# https://eksctl.io/usage/iamserviceaccounts/
iam:
  # not all addons support Pod Identity Association, so you probably still need OIDC
  withOIDC: true

# https://eksctl.io/usage/kms-encryption/
secretsEncryption:
  keyARN: arn:aws:kms:YOUR_ARN

# preview
AWS_PROFILE=perp eksctl create cluster -f cluster-config.yaml --dry-run

# create
eksctl create cluster -f cluster-config.yaml --profile=perp

If a nodegroup includes attachPolicyARNs, it must also include the default node policies, like AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryPullOnly.

ref:
https://eksctl.io/usage/schema/
https://eksctl.io/usage/iam-policies/
https://github.com/weaveworks/eksctl/tree/main/examples

Managed Nodegroups

You can re-use the same ClusterConfig to create more managed nodegroups after cluster creation. However, you can only create, not update: most fields under the managedNodeGroups section cannot be changed once created. If you need to tweak something, just add a new nodegroup.

Also, if you're not familiar with instance types on AWS, try Instance Selector: you only need to specify how much CPU, memory, and GPU you want.

managedNodeGroups:
  - name: mng-spot-2c4g
    instanceSelector:
      vCPUs: 2
      memory: 4GiB
      gpus: 0
    spot: true
    minSize: 1
    maxSize: 5
    desiredCapacity: 2
    volumeSize: 100
    nodeRepairConfig:
      enabled: true

eksctl create nodegroup -f cluster-config.yaml --profile=perp

ref:
https://eksctl.io/usage/nodegroup-managed/
https://eksctl.io/usage/instance-selector/

Addons

When a cluster is created, EKS automatically installs vpc-cni, coredns and kube-proxy as self-managed addons. Here are other common addons you probably want to install as well:

  • eks-pod-identity-agent: Manages IAM permissions for pods using Pod Identity Associations instead of OIDC
  • amazon-cloudwatch-observability: Collects and sends container metrics, logs, and traces to CloudWatch Container Insights (Logs Insights)
  • aws-ebs-csi-driver: Enables pods to use EBS volumes for persistent storage through Kubernetes PersistentVolumes
  • metrics-server: Provides resource metrics (CPU/memory) for HPA, VPA and kubectl top commands

If not specified explicitly, addons will be created with a role that has all recommended policies attached.

It's worth noting that EKS Pod Identity Associations is AWS's newer, simpler way for Kubernetes pods to assume IAM roles, replacing the older IRSA (IAM Roles for Service Accounts) method.
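
To see what is actually installed on a cluster, eksctl can list addons and pod identity associations, for example:

# list installed addons and their versions
eksctl get addon --region ap-northeast-1 --cluster your-cluster --profile=perp

# list pod identity associations
eksctl get podidentityassociation --region ap-northeast-1 --cluster your-cluster --profile=perp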

ref:
https://eksctl.io/usage/addons/
https://eksctl.io/usage/pod-identity-associations/
https://eksctl.io/usage/iamserviceaccounts/

IAM Permissions

Pods running on regular nodes assume the nodegroup's NodeInstanceRole; pods running on Fargate nodes use the FargatePodExecutionRole. If you would like to adjust IAM permissions for a nodegroup, use the following command to find out which IAM role it is using:

aws eks describe-nodegroup \
    --region ap-northeast-1 \
    --cluster-name your-cluster \
    --nodegroup-name mng-m5-xlarge \
    --query 'nodegroup.nodeRole' \
    --output text \
    --profile=perp
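
Once you have the role name, you can attach an additional policy to it; for example (the role name below is a placeholder):

aws iam attach-role-policy \
    --role-name eksctl-your-cluster-nodegroup-NodeInstanceRole-XXX \
    --policy-arn arn:aws:iam::xxx:policy/your-custom-policy \
    --profile=perp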

ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html

Access EKS Cluster

Instead of using the old aws-auth ConfigMap, you can use Access Entries to manage the mapping between AWS IAM principals and Kubernetes. Note that the user who runs eksctl create cluster should be left out of the access entries: EKS adds that user automatically.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: your-cluster
  region: ap-northeast-1

accessConfig:
  authenticationMode: API
  accessEntries:
    # - principalARN: arn:aws:iam::xxx:user/user1
    #   accessPolicies:
    #     - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
    #       accessScope:
    #         type: cluster
    - principalARN: arn:aws:iam::xxx:user/user2
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxx:user/user3
      accessPolicies:
        - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
          accessScope:
            type: cluster
    - principalARN: arn:aws:iam::xxx:user/production-eks-deploy
      type: STANDARD

# list
eksctl get accessentry --region ap-northeast-1 --cluster your-cluster --profile perp

# create
eksctl create accessentry -f access-entries.yaml --profile perp

ref:
https://eksctl.io/usage/access-entries/
https://docs.aws.amazon.com/eks/latest/userguide/access-policies.html

Connect kubectl to your EKS cluster by creating a kubeconfig file.

# setup context for a cluster
aws eks update-kubeconfig --region ap-northeast-1 --name your-cluster --alias your-cluster --profile=perp

# switch cluster context
kubectl config use-context your-cluster

kubectl get nodes

cat ~/.kube/config

ref:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Upgrade EKS Cluster

After upgrading your EKS cluster on the AWS Management Console, you might find kube-proxy still using the old image from the previous Kubernetes version - just upgrade it manually. Also, if you are going to create an ARM-based nodegroup, you might need to upgrade the core addons first:

eksctl utils update-aws-node --region ap-northeast-1 --cluster your-cluster --approve --profile=perp
eksctl utils update-coredns --region ap-northeast-1 --cluster your-cluster --approve --profile=perp
eksctl utils update-kube-proxy --region ap-northeast-1 --cluster your-cluster --approve --profile=perp

kubectl get pods -n kube-system

If you get Error: ImagePullBackOff, try the following commands, replacing the version (and region) with the one listed in "Latest available kube-proxy container image version for each Amazon EKS cluster version":

kubectl describe daemonset kube-proxy -n kube-system | grep Image
kubectl set image daemonset.apps/kube-proxy -n kube-system \
    kube-proxy=602401143452.dkr.ecr.ap-northeast-1.amazonaws.com/eks/kube-proxy:v1.25.16-minimal-eksbuild.8

ref:
https://eksctl.io/usage/cluster-upgrade/
https://github.com/weaveworks/eksctl/issues/1088

Delete EKS Cluster

You might need to manually delete/detach the following resources in order to delete the cluster:

  • Detach non-default policies from NodeInstanceRole and FargatePodExecutionRole
  • Fargate Profile
  • EC2 Network Interfaces
  • EC2 Load Balancer

eksctl delete cluster --region ap-northeast-1 --name your-cluster --disable-nodegroup-eviction --profile=perp
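
If the deletion hangs, these commands may help you locate leftovers (the VPC ID is a placeholder):

# find leftover network interfaces in the cluster's VPC
aws ec2 describe-network-interfaces --region ap-northeast-1 --filters Name=vpc-id,Values=vpc-xxx --profile=perp

# find leftover load balancers
aws elbv2 describe-load-balancers --region ap-northeast-1 --profile=perp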
Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

Fix "No Access-Control-Allow-Origin header" for S3 and CloudFront

To avoid the error "No 'Access-Control-Allow-Origin' header is present on the requested resource":

  • Enable CORS on your S3 bucket
  • Forward the appropriate headers on your CloudFront distribution

Enable CORS on S3 Bucket

In S3 -> [your bucket] -> Permissions -> Cross-origin resource sharing (CORS):

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
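
The same rules can be applied with awscli; note that the s3api command expects the array to be wrapped in a CORSRules key:

aws s3api put-bucket-cors --bucket your-bucket \
    --cors-configuration '{"CORSRules": [{"AllowedHeaders": ["*"], "AllowedMethods": ["GET"], "AllowedOrigins": ["*"], "ExposeHeaders": []}]}'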

ref:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html

Configure Behaviors on CloudFront Distribution

In CloudFront -> [your distribution] -> Behaviors -> Create Behavior:

  • Path Pattern: *
  • Allowed HTTP Methods: GET, HEAD, OPTIONS
  • Cached HTTP Methods: +OPTIONS
  • Origin Request Policy: Managed-CORS-S3Origin
    • This policy actually whitelists the following headers:
      • Access-Control-Request-Headers
      • Access-Control-Request-Method
      • Origin

ref:
https://aws.amazon.com/premiumsupport/knowledge-center/no-access-control-allow-origin-error/

Validate it's working:

fetch("https://metadata.perp.exchange/config.production.json")
.then((res) => res.json())
.then((out) => { console.log(out) })
.catch((err) => { throw err });
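
Or verify the headers directly with curl; any Origin value works here since AllowedOrigins is "*":

curl -s -D - -o /dev/null -H "Origin: https://app.example.com" "https://metadata.perp.exchange/config.production.json" | grep -i access-control-allow-origin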

Apex and Terraform: The easiest way to manage AWS Lambda functions

AWS Lambda lets you run code without provisioning or managing servers, the model known as Serverless or Function as a Service (FaaS).

Apex is a Go command-line tool to manage and deploy your serverless functions on AWS Lambda. Apex is also integrated with Terraform to provide cloud infrastructure management, for instance, configuring your AWS Lambda functions with Amazon API Gateway.

ref:
https://aws.amazon.com/lambda/
https://aws.amazon.com/api-gateway/
https://github.com/apex/apex

You could browse projects created in this post on GitHub:
https://github.com/vinta/pangu.space
https://github.com/CodeTengu/LambdaBaku

Install

$ curl https://raw.githubusercontent.com/apex/apex/master/install.sh | sh

ref:
https://apex.run/#installation

Initialize

It is recommended to configure your AWS credentials with awscli.

$ pip install awscli
$ aws configure

ref:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

To use Apex to manage Lambda functions, you have to make sure your AWS credentials have the following minimum IAM permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iam:CreateRole",
        "iam:CreatePolicy",
        "iam:AttachRolePolicy",
        "iam:PassRole",
        "lambda:GetFunction",
        "lambda:ListFunctions",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:InvokeFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:UpdateFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:CreateAlias",
        "lambda:UpdateAlias",
        "lambda:GetAlias",
        "lambda:ListAliases",
        "lambda:ListVersionsByFunction",
        "logs:FilterLogEvents",
        "cloudwatch:GetMetricStatistics"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

$ apex init

ref:
https://apex.run/#getting-started

After running apex init, Apex creates a Role and a Policy. You should be able to find them in the AWS IAM Management Console. If you want to access other AWS resources in your Lambda functions, for instance, S3 buckets, DynamoDB tables, or SNS, you must create a new Policy that grants the appropriate permissions and attach it to the Role that Apex created.

Here is a Policy example of operating certain DynamoDB tables:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt123456789",
            "Effect": "Allow",
            "Action": [
                "dynamodb:*"
            ],
            "Resource": [
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_Preference",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_Preference/*",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyIssue",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyIssue/*",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyPost",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyPost/*"
            ]
        }
    ]
}

Write Lambda Functions

ref:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html

Node.js

The simplest handler:

const aws = require('aws-sdk');

exports.handle = (event, context, callback) => {
  doYourShit();
  callback(null, 'DONE');
};

ref:
https://docs.aws.amazon.com/lambda/latest/dg/programming-model.html

Call another Lambda function in a Lambda function. You must make sure your Lambda role has permission to invoke other Lambda functions:

const util = require('util');

const aws = require('aws-sdk');

const lambda = new aws.Lambda();

const params = {
  FunctionName: 'LambdaBaku_syncIssue',
  InvocationType: 'Event', // means asynchronous execution
  Payload: JSON.stringify({ issue_number: curatedIssue.number }),
};

lambda.invoke(params, (err, data) => {
  if (err) {
    console.log('FAIL', params);
    console.log(util.inspect(err));
  } else {
    console.log(data);
  }
});

ref:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
https://stackoverflow.com/questions/31714788/can-an-aws-lambda-function-call-another

Go

Write a Lambda function triggered by Amazon API Gateway:

package main

import (
    "encoding/json"
    "errors"
    "log"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/vinta/pangu"
)

var (
    // ErrTextNotProvided is thrown when text is not provided in HTTP query string
    ErrTextNotProvided = errors.New("No text was provided in HTTP query string")
)

// Handler is the AWS Lambda function handler
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    log.Printf("request id: %s\n", request.RequestContext.RequestID)

    text, ok := request.QueryStringParameters["t"]
    if !ok {
        errMap := map[string]string{
            "message": ErrTextNotProvided.Error(),
        }
        errMapJSON, _ := json.MarshalIndent(errMap, "", " ")

        return events.APIGatewayProxyResponse{
            Body: string(errMapJSON),
            StatusCode: 400,
        }, nil
    }

    log.Printf("text: %s\n", text)

    textPlainHeaders := map[string]string{
        "content-type": "text/plain; charset=utf-8",
    }

    return events.APIGatewayProxyResponse{
        Body: pangu.SpacingText(text),
        Headers: textPlainHeaders,
        StatusCode: 200,
    }, nil
}

func main() {
    lambda.Start(Handler)
}

ref:
https://aws.amazon.com/blogs/compute/announcing-go-support-for-aws-lambda/
https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model-handler-types.html
https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model-errors.html

Your "Integration Request" configurations in API Gateway should be like:

  • Integration type: Lambda Function
  • Use Lambda Proxy integration: Yes
  • Lambda Region: ap-northeast-1
  • Lambda Function: panguspace_spacing_text
  • Invoke with caller credentials: No
  • Credentials cache: Do not add caller credentials to cache key
  • Use Default Timeout: Yes

It's also worth noting that the API response is mainly defined by the APIGatewayProxyResponse in your Lambda function code; the "Integration Response" and "Method Response" configurations in API Gateway do not matter here.

ref:
https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-lambda-integration.html

Usage

Deploy all functions:

$ apex deploy

ref:
https://apex.run/#deploying-functions

Invoke a function:

# invoke a function directly
$ apex invoke spacing_text --logs
{
    "statusCode": 400,
    "headers": null,
    "body":"{\"message\": \"No text was provided in the HTTP query string\"}"
}

# invoke a function with an API Gateway event
$ cat fixtures/spacing_text_event.json
{
    "queryStringParameters": {"t": "與PM戰鬥的人,應當小心自己不要成為PM"}
}
$ apex invoke spacing_text --logs < fixtures/spacing_text_event.json
{
    "statusCode": 200,
    "headers": {"content-type": "text/plain; charset=utf-8"},
    "body": "與 PM 戰鬥的人,應當小心自己不要成為 PM"
}

ref:
https://apex.run/#invoking-functions

View logs (which might be delayed by several seconds):

$ apex logs -f

Pack a function:

$ apex build spacing_text > spacing_text.zip

Configure API Gateway

Create API Keys

To set up API keys, do the following:

  1. Configure your API methods to require an API key
  2. Deploy your API
  3. Create an API key for the API in a region
  4. Create a Usage Plan and assign an API key to a certain Stage (see the CLI sketch below)

In step 1, your "Method Request" configuration in API Gateway should look like this:

  • Authorization: NONE
  • Request Validator: NONE
  • API Key Required: true
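
Steps 3 and 4 can also be scripted with awscli; a minimal sketch (the IDs are placeholders):

$ aws apigateway create-api-key --name your-api-key --enabled
$ aws apigateway create-usage-plan --name your-usage-plan --api-stages apiId=xxx,stage=v1
$ aws apigateway create-usage-plan-key --usage-plan-id xxx --key-id xxx --key-type API_KEY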

Now you are able to call the API with an x-api-key header:

$ curl -H "x-api-key: YOUR-API-KEY" https://xxx.execute-api.ap-northeast-1.amazonaws.com/v1/your-endpoint/

ref:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-usage-plans-with-rest-api.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html

Actually, you could release your APIs without API keys if you like.

Setup a Custom Domain

To set up a custom domain managed by CloudFlare, see the following link:
https://stackoverflow.com/a/46061708/885524

It is worth noting that although the Stack Overflow answer says to use Full (Strict) SSL mode, Full also works.

Moreover, it might take a long time for the "Target Domain Name" (xxx.cloudfront.net) to be generated.

Don't forget to add "Base Path Mappings" in API Gateway Custom Domain Names (a request example follows the list):

  • api.pangu.space
    • Target Domain Name: xxx.cloudfront.net
    • ACM Certificate: *.pangu.space
    • Base Path Mappings:
      • Path: /v1
      • Destination: Pangu:v1
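
With the base path mapping in place, a request through the custom domain looks like this (the spacing-text path comes from the API Gateway resource defined in the Terraform section below):

$ curl -G -H "x-api-key: YOUR-API-KEY" --data-urlencode "t=與PM戰鬥的人" "https://api.pangu.space/v1/spacing-text"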

Manage Infrastructures with Terraform

Terraform is a tool to manage your cloud infrastructures as code.

$ brew install terraform

$ tree .
.
├── functions
│   ├── introduce
│   │   └── main.go
│   └── spacing_text
│       └── main.go
└── infrastructure
    ├── main.tf
    └── variables.tf

Define variables and data sources:

# infrastructure/variables.tf
data "aws_caller_identity" "current" {}

variable "aws_region" {}
variable "apex_environment" {}
variable "apex_function_role" {}

variable "apex_function_arns" {
  type = "map"
}

variable "apex_function_names" {
  type = "map"
}

variable "apex_function_introduce" {}
variable "apex_function_spacing_text" {}

ref:
https://www.terraform.io/docs/providers/aws/d/caller_identity.html

Define AWS resources:

# infrastructure/main.tf
resource "aws_api_gateway_rest_api" "pangu" {
  name = "Pangu"
}

resource "aws_api_gateway_method" "pangu_root" {
  rest_api_id   = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id   = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "pangu_root_get" {
  rest_api_id             = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id             = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method             = "${aws_api_gateway_method.pangu_root.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "arn:aws:apigateway:${var.aws_region}:lambda:path/2015-03-31/functions/${var.apex_function_introduce}/invocations"
}

resource "aws_api_gateway_method_response" "pangu_root_get_200" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method = "${aws_api_gateway_method.pangu_root.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = true
  }
}

resource "aws_api_gateway_resource" "pangu_spacing_text" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  parent_id   = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  path_part   = "spacing-text"
}

resource "aws_api_gateway_method" "pangu_spacing_text_get" {
  rest_api_id      = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id      = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method      = "GET"
  authorization    = "NONE"
  api_key_required = true
}

resource "aws_api_gateway_integration" "pangu_spacing_text_get" {
  rest_api_id             = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id             = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method             = "${aws_api_gateway_method.pangu_spacing_text_get.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "arn:aws:apigateway:${var.aws_region}:lambda:path/2015-03-31/functions/${var.apex_function_spacing_text}/invocations"
}

resource "aws_api_gateway_method_response" "pangu_spacing_text_get_200" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method = "${aws_api_gateway_method.pangu_spacing_text_get.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = true
  }
}

resource "aws_api_gateway_deployment" "pangu" {
  depends_on = [
    "aws_api_gateway_method.pangu_root",
    "aws_api_gateway_integration.pangu_root_get",
    "aws_api_gateway_method_response.pangu_root_get_200",
    "aws_api_gateway_resource.pangu_spacing_text",
    "aws_api_gateway_method.pangu_spacing_text_get",
    "aws_api_gateway_integration.pangu_spacing_text_get",
    "aws_api_gateway_method_response.pangu_spacing_text_get_200",
  ]

  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  stage_name  = "v1"
}

resource "aws_lambda_permission" "pangu_root_get" {
  statement_id  = "AllowInvokeFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${var.apex_function_introduce}"
  principal     = "apigateway.amazonaws.com"

  source_arn = "arn:aws:execute-api:${var.aws_region}:${data.aws_caller_identity.current.account_id}:${aws_api_gateway_rest_api.pangu.id}/*/${aws_api_gateway_integration.pangu_root_get.http_method}/"
}

resource "aws_lambda_permission" "pangu_spacing_text" {
  statement_id  = "AllowInvokeFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${var.apex_function_spacing_text}"
  principal     = "apigateway.amazonaws.com"

  source_arn = "arn:aws:execute-api:${var.aws_region}:${data.aws_caller_identity.current.account_id}:${aws_api_gateway_rest_api.pangu.id}/*/${aws_api_gateway_integration.pangu_spacing_text_get.http_method}${aws_api_gateway_resource.pangu_spacing_text.path}"
}

ref:
https://www.terraform.io/docs/providers/aws/guides/serverless-with-aws-lambda-and-api-gateway.html

# download provider plugins
$ apex infra init

# view the generated execution plan
$ apex infra plan

# deploy your infrastructures
$ apex infra apply
$ apex infra apply -auto-approve

ref:
https://apex.run/#managing-infrastructure

Upload files to Amazon S3 when Travis CI builds pass

Assume that you want to upload a xxx.whl file generated by pip wheel to Amazon S3 so that you will be able to run pip install https://url/to/s3/bucket/xxx.whl.

CAUTION! By default, only builds on the master branch can trigger deployments in Travis CI.

Configuration

before_install:
  - pip install -U pip
  - pip install wheel

script:
  - python setup.py test

before_deploy:
  - pip wheel --wheel-dir=wheelhouse .

deploy:
  provider: s3
  access_key_id: "YOUR_KEY"
  secret_access_key: "YOUR_SECRET"
  bucket: YOUR_BUCKET
  acl: public_read
  local_dir: wheelhouse
  upload_dir: wheels
  skip_cleanup: true

# install from a URL directly
$ pip install https://url/to/s3/bucket/wheels/xxx.whl

ref:
https://docs.travis-ci.com/user/deployment/s3

Setup a static website on Amazon S3

Say that you would like to host your static site on Amazon S3 with a custom domain and, of course, HTTPS.

Create two S3 buckets

To serve requests from both root domain such as codetengu.com and subdomain such as www.codetengu.com, you must create two buckets named exactly codetengu.com and www.codetengu.com.

In this post, I assume that you want to redirect www.codetengu.com to codetengu.com.
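
A minimal sketch with awscli:

$ aws s3 mb s3://codetengu.com --region ap-northeast-1
$ aws s3 mb s3://www.codetengu.com --region ap-northeast-1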

ref:
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html

Upload your static files

$ cd /path/to/your_project_root/

$ aws s3 sync . s3://codetengu.com \
--acl "public-read" \
--exclude "*.DS_Store" \
--exclude "*.gitignore" \
--exclude ".git/*" \
--dryrun

$ aws s3 website s3://codetengu.com --index-document index.html --error-document error.html

ref:
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html

Setup a bucket policy for public access

In your S3 Management Console, click codetengu.com bucket > Properties > Edit bucket policy, enter:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::codetengu.com/*"
        }
    ]
}
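
You can apply the same policy from the command line, assuming it is saved as policy.json:

$ aws s3api put-bucket-policy --bucket codetengu.com --policy file://policy.json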

Setup www redirecting

In your S3 Management Console, click the www.codetengu.com bucket > Properties > Static Website Hosting, choose "Redirect all requests to another host name", and type codetengu.com.
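
The same redirect can be configured with awscli:

$ aws s3api put-bucket-website --bucket www.codetengu.com \
    --website-configuration '{"RedirectAllRequestsTo": {"HostName": "codetengu.com"}}'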

Now you're able to access your website via the S3 website endpoint, e.g., http://codetengu.com.s3-website-ap-northeast-1.amazonaws.com.

Configure a custom domain

In the "Setting Up a Static Website Using a Custom Domain" guide I mentioned above, it uses Amazon Route 53 to manage DNS records; In this post, I use CloudFlare as my website's DNS provider instead.

  • Create a CNAME for codetengu.com to point to codetengu.com.s3-website-ap-northeast-1.amazonaws.com
  • Create a CNAME for www.codetengu.com to point to codetengu.com.s3-website-ap-northeast-1.amazonaws.com

Yep, you CAN create a CNAME record for the root domain on CloudFlare, just like you can add an "Alias" record on Route 53.

Wait for the DNS records to propagate, then visit https://codetengu.com/.