IPFS: The (Very Slow) Distributed Permanent Web

IPFS stands for InterPlanetary File System, but you can simply think of it as a distributed, permanent, but ridiculously slow and not-quite-working version of the web. You can upload any static file or static website to IPFS, and the whole swarm will probably distribute your files to the moon, which might be why IPFS is so fucking slow.

ref:
https://ipfs.io/

Installation

Install on macOS.

$ brew install ipfs

Start your IPFS node.

$ ipfs init
initializing IPFS node at /Users/vinta/.ipfs
generating 2048-bit RSA keypair... done
peer identity: QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6

$ ipfs daemon

ref:
https://ipfs.io/docs/commands/#ipfs-init
https://ipfs.io/docs/commands/#ipfs-daemon

Furthermore, you might want to run your IPFS node in a Docker container.

# docker-compose.yml
version: "3"
services:
    ipfs:
        image: ipfs/go-ipfs:v0.4.15
        working_dir: /export
        ports:
            - "4001:4001" # Swarm
            - "5001:5001" # web UI
            - "8080:8080" # HTTP proxy
        volumes:
            - "~/.ipfs:/data/ipfs"
            - "~/.ipfs/export:/export"

ref:
https://hub.docker.com/r/ipfs/go-ipfs/

Usage

Show Node Info

$ ipfs id
{
    "ID": "QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6",
    "PublicKey": "A_LONG_LONG_LONG_KEY",
    "Addresses": [
        "/ip4/127.0.0.1/tcp/4001/ipfs/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6",
        "/ip4/172.19.0.2/tcp/4001/ipfs/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6"
    ],
    "AgentVersion": "go-ipfs/0.4.14/5db3846",
    "ProtocolVersion": "ipfs/0.1.0"
}

ref:
https://ipfs.io/docs/getting-started/

Add Other Nodes to Your Bootstrap List

This one is from Muzeum, https://muzeum.pro/.

$ ipfs bootstrap add /ip4/52.221.121.238/tcp/4001/ipfs/QmTKYdZDkqHiY24kPynSmKbmRdk7cJxWsvvfvvvZArQ1N9

# you could also connect to a node directly
$ ipfs swarm connect /ip4/52.221.121.238/tcp/4001/ipfs/QmTKYdZDkqHiY24kPynSmKbmRdk7cJxWsvvfvvvZArQ1N9

ref:
https://ipfs.io/docs/commands/#ipfs-bootstrap
https://ipfs.io/docs/commands/#ipfs-swarm

Add Files to IPFS

Every IPFS node has 10GB of storage by default, and a node only stores the data it requests, which means each node holds only a small fraction of all the data on IPFS. If there are not enough nodes, your data might be distributed to no one except your own node.
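
The 10GB limit is just a config value. Assuming the default repo at ~/.ipfs, you can inspect and raise it:

$ ipfs config Datastore.StorageMax
10GB
$ ipfs config Datastore.StorageMax 20GB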

Your content is automatically pinned when you ipfs add it.

$ ipfs add -r mysite
added QmRticJ3P5fnb9GGnUj3U9XMkYvGEnv9AQfk6YmgRhivYA mysite/index.html
added QmY9cxiHqTFoWamkQVkpmmqzBrY3hCBEL2XNu3NtX74Fuu mysite/readme.md
added QmTLhFgeWLacpbiGNYmhchHGQAhfNyDZcLt5akJFFLV89V mysite
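
To confirm the folder was pinned, list the recursive pins on your node; the folder hash from above should appear:

$ ipfs pin ls --type=recursive
QmTLhFgeWLacpbiGNYmhchHGQAhfNyDZcLt5akJFFLV89V recursive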

If files/folders under the folder change, the hash of the folder changes too.

$ vim mysite/index.html
$ ipfs add -r mysite
added QmQTTe3deLfeULKjPHnQTcyFuCmY5JZiwSTiPT4nSt1KVK mysite/index.html # changed
added QmS85tb3aKQNurFm51FaxtK6NyNei4ej3gDR21baDZXRoU mysite            # changed

ref:
https://ipfs.io/docs/commands/#ipfs-add

Pin Files from IPFS

Pinning means storing IPFS files on your local node and preventing them from being garbage collected; pinned content is also much faster to access. You only need ipfs pin add to pin content someone else uploaded.

$ ipfs pin add -r --progress /ipns/ipfs.soundscape.net/

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_group/index.json
pinned QmZwTEhdjT4MyvEnWndVEJzBjp8zGGZH1cEBpshBQs75rY recursively

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_album/index.json
pinned QmSAuGU5xt5SdR2ca2EDgeHFATSrAQhTfTYpYs9K9qmqED recursively

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_recording/index.json
pinned QmcTiadA9jRMXx77tydPa6492QJAtjXkKkA4gERaFksy94 recursively

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_composition/index.json
pinned QmTfqVaGVRnaPRQgYypGYXUvTK1UcDfK5VWYvU4rwK3m26 recursively

P.S. Sometimes when I ipfs pin add a file which is not on my node, the command just hangs. I'm not sure why, but once I access the file first (through curl or any browser), ipfs pin add works fine. It doesn't quite make sense, though: if I have already got/accessed/downloaded the file, I could just ipfs add it and it would be pinned automatically.
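
Based on that behavior, a workaround that seems to help: warm the file up through your local gateway first, then pin it.

$ curl -s http://127.0.0.1:8080/ipns/ipfs.soundscape.net/music_group/index.json > /dev/null && \
  ipfs pin add /ipns/ipfs.soundscape.net/music_group/index.json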

ref:
https://ipfs.io/docs/commands/#ipfs-pin

Get Files

You have several ways to get files or folders from IPFS:

  • ipfs get dir-hash -o readable-dir-name
    • ipfs get QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk -o mysite
  • ipfs get file-hash -o readable-file-name.ext
    • ipfs get Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u -o cat.jpg
  • ipfs get /ipfs/dir-hash/path/to/file.txt
    • ipfs get /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme
  • ipfs get /ipns/example.com/path/to/file.txt
    • ipfs get /ipns/ipfs.soundscape.net/music_group/index.json

You could also access IPFS files through any public gateway:

  • curl https://ipfs.io/ipns/peer-id/path/to/file.txt
    • curl https://ipfs.io/ipns/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6/index.html
  • curl https://ipfs.io/ipns/example.com/path/to/file.txt
    • curl https://ipfs.io/ipns/ipfs.soundscape.net/music_group/index.json
  • curl http://127.0.0.1:8080/ipns/example.com/path/to/file.txt
    • curl http://127.0.0.1:8080/ipns/ipfs.soundscape.net/music_group/index.json

Download IPFS objects with ipfs get.

$ ipfs ls QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ
Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u 443362 cat.jpg

# you could get a folder
$ ipfs get QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ
$ ls QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ
cat.jpg

# as well as a file
$ ipfs get Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u -o cat.jpg

# get files and rename them
$ mkdir -p soundscape/music_group/ soundscape/music_album/ soundscape/music_recording/ soundscape/music_composition/ && \
  ipfs get /ipns/ipfs.soundscape.net/music_group/index.json -o soundscape/music_group/index.json; \
  ipfs get /ipns/ipfs.soundscape.net/music_album/index.json -o soundscape/music_album/index.json; \
  ipfs get /ipns/ipfs.soundscape.net/music_recording/index.json -o soundscape/music_recording/index.json; \
  ipfs get /ipns/ipfs.soundscape.net/music_composition/index.json -o soundscape/music_composition/index.json

# get whole folders
$ ipfs get /ipns/ipfs.soundscape.net/music_group; \
  ipfs get /ipns/ipfs.soundscape.net/music_album; \
  ipfs get /ipns/ipfs.soundscape.net/music_recording; \
  ipfs get /ipns/ipfs.soundscape.net/music_composition

ref:
https://ipfs.io/docs/commands/#ipfs-get
https://discuss.ipfs.io/t/trying-to-better-understand-the-pinning-concept/754

Display IPFS object data with ipfs cat.

$ ipfs cat Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u > cat.jpg
$ ipfs cat QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme

Publish a Website to IPNS

IPNS stands for InterPlanetary Naming System.

Every time you change files under a folder, the hash of the folder changes too, so you need a static reference that always points to the latest hash of your folder. You can publish your static website (a folder) to IPNS under that static reference: your peer ID, which is also the hash of your public key.

By default, every IPFS node has only one keypair, so you can only publish one folder under your peer ID. However, you can generate additional keypairs with ipfs key gen and publish multiple folders.
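
For instance, a sketch of publishing a second folder under a new key (the key name mysite-v2 here is arbitrary):

$ ipfs key gen --type=rsa --size=2048 mysite-v2
$ ipfs name publish --key=mysite-v2 dir-hash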

$ ipfs add -r mysite
added QmeqHWZgvgx5C7T6DakX75CJDRgAUoSDZayLYrcnAP8Fma mysite/index.html
added QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5 mysite

$ ipfs name publish QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5
published to QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6: /ipfs/QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5

$ ipfs name resolve QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6
/ipfs/QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5

After you change something, publish it again with the new hash.

$ vim mysite/index.html
$ ipfs add -r mysite
added QmNjbhdks8RUgDt6QiNFe5QGe2HrbCsq5FKda9D9hLVkkU mysite/index.html # changed
added QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk mysite            # changed

$ ipfs name publish QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk
published to QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6: /ipfs/QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk

ref:
https://ipfs.io/docs/commands/#ipfs-name

Create a Domain Name Alias for Your Peer ID

The hash is not very human-friendly. Fortunately, you can, and probably should, associate a domain name with your peer ID.

First, you need to add a TXT record whose value is dnslink=/ipns/YOUR_PEER_ID to your domain name. In the following examples, we assume the domain name you chose is ipfs.kittenphile.com.

$ dig +short TXT ipfs.kittenphile.com
"dnslink=/ipns/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6"

$ ipfs name resolve -r ipfs.kittenphile.com
/ipfs/QmaE2DcNxGjPGPfzfTQuTBTW9D57abVSv319WqC89Av1y1

ref:
https://ipfs.io/docs/examples/example-viewer/example#../websites/README.md
https://hackernoon.com/ten-terrible-attempts-to-make-the-inter-planetary-file-system-human-friendly-e4e95df0c6fa

Public Gateway

If you run a public gateway and people retrieve files through it, your gateway fetches and stores the data, but it doesn't pin them. The files are removed with the next garbage collection run.
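
Garbage collection only removes unpinned blocks, so pin anything you want the gateway to keep. You can also trigger GC manually to see what gets dropped:

$ ipfs repo gc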

ref:
https://discuss.ipfs.io/t/public-facing-gateway-and-pinning/449

Pipenv and Pipfile: The officially recommended Python packaging tool

You no longer need to use pip and virtualenv separately. Use pipenv instead.

ref:
https://github.com/pypa/pipenv
https://pipenv.kennethreitz.org/en/latest/

Install

$ pip install pipenv

ref:
https://pipenv.kennethreitz.org/en/latest/install/#installing-pipenv

Usage

$ pyenv global 3.7.4

# initialize project virtualenv with a specific Python version
# automatically generate both Pipfile and Pipfile.lock from requirements.txt if it exists
$ pipenv --three

$ cd /path/to/project-contains-Pipfile
$ pipenv install

$ pipenv install pangu
$ pipenv install -r requirements.txt

# install packages to dev-packages
$ pipenv install --dev \
autopep8 \
flake8 \
flake8-bandit \
flake8-blind-except \
flake8-bugbear \
flake8-builtins \
flake8-comprehensions \
flake8-debugger \
flake8-mutable \
flake8-pep3101 \
flake8-print \
flake8-string-format \
ipdb \
jedi \
mypy \
pep8-naming \
ptvsd \
pylint \
pylint-celery \
pylint-common \
pylint-flask \
pytest \
watchdog

# switch your shell environment to project virtualenv
$ pipenv shell
$ exit

# uninstall everything
$ pipenv uninstall --all

# remove project virtualenv
$ pipenv --rm

ref:
https://pipenv.kennethreitz.org/en/latest/install/

Example Pipfile

[[source]]
url = "https://pypi.python.org/simple" 
verify_ssl = true 
name = "pypi" 

[requires] 
python_version = "3.7"

[packages] 
celery = "==4.2.1"
flask = "==1.0.2"
requests = ">=2.0.0" 

[dev-packages] 
flake8 = "*" 
ipdb = "*" 
pylint = "*" 

[scripts]
web = "python -m flask run -h 0.0.0.0"
worker = "celery -A app:celery worker --pid= -l info -E --purge"
scheduler = "celery -A app:celery beat -l info --pid="
shell = "flask shell"

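Entries under [scripts] can be run inside the project virtualenv with pipenv run, e.g.:

$ pipenv run web
$ pipenv run worker
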
ref:
https://pipenv.kennethreitz.org/en/latest/basics/#example-pipfile-pipfile-lock

Print traceback call stack in Python

Extract Traceback From An Exception

try:
    do_shit()
except Exception as exc:
    print('----- start -----')
    tb = exc.__traceback__  # grab the traceback from the caught exception
    raise RuntimeError().with_traceback(tb)  # re-raise with the original traceback attached

ref:
https://stackoverflow.com/questions/11414894/extract-traceback-info-from-an-exception-object

Print Traceback Without Raising An Exception

print('----- start -----')
import traceback; traceback.print_stack()
print('----- end -----')

# or

import traceback
for line in traceback.format_stack():
    print(line.strip())

The result would be like:

>>> import qingcloud.iaas
>>> conn = qingcloud.iaas.connect_to_zone('pek2', '123', '456')
>>> conn.describe_instances(limit=1)
----- start -----
  File "<stdin>", line 1, in <module>
  File "qingcloud/iaas/connection.py", line 214, in describe_instances
    return self.send_request(action, body)
  File "qingcloud/iaas/connection.py", line 42, in send_request
    resp = self.send(url, request, verb)
  File "qingcloud/conn/connection.py", line 245, in send
    request.authorize(self)
  File "qingcloud/conn/connection.py", line 156, in authorize
    connection._auth_handler.add_auth(self, **kwargs)
  File "qingcloud/conn/auth.py", line 118, in add_auth
    import traceback; traceback.print_stack()
----- end -----

ref:
https://stackoverflow.com/questions/3925248/print-python-stack-trace-without-exception-being-raised

Apex and Terraform: The easiest way to manage AWS Lambda functions

AWS Lambda lets you run code without provisioning or managing servers, a model known as Serverless or Function as a Service (FaaS).

Apex is a Go command-line tool to manage and deploy your serverless functions on AWS Lambda. Apex is also integrated with Terraform to provide cloud infrastructure management, for instance, configuring your AWS Lambda functions with Amazon API Gateway.

ref:
https://aws.amazon.com/lambda/
https://aws.amazon.com/api-gateway/
https://github.com/apex/apex

You could browse projects created in this post on GitHub:
https://github.com/vinta/pangu.space
https://github.com/CodeTengu/LambdaBaku

Install

$ curl https://raw.githubusercontent.com/apex/apex/master/install.sh | sh

ref:
https://apex.run/#installation

Initialize

It is recommended to configure your AWS credentials with awscli.

$ pip install awscli
$ aws configure

ref:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

To use Apex to manage Lambda functions, you have to make sure your AWS credential has minimum IAM permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iam:CreateRole",
        "iam:CreatePolicy",
        "iam:AttachRolePolicy",
        "iam:PassRole",
        "lambda:GetFunction",
        "lambda:ListFunctions",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:InvokeFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:UpdateFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:CreateAlias",
        "lambda:UpdateAlias",
        "lambda:GetAlias",
        "lambda:ListAliases",
        "lambda:ListVersionsByFunction",
        "logs:FilterLogEvents",
        "cloudwatch:GetMetricStatistics"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

$ apex init

ref:
https://apex.run/#getting-started

After running apex init, Apex creates a Role and a Policy. You should be able to find them in the AWS IAM Management Console. If you want to access other AWS resources, for instance S3 buckets, DynamoDB tables, or SNS, in your Lambda functions, you must create a new Policy that grants the appropriate permissions and attach it to the Role Apex created.

Here is a Policy example of operating certain DynamoDB tables:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt123456789",
            "Effect": "Allow",
            "Action": [
                "dynamodb:*"
            ],
            "Resource": [
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_Preference",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_Preference/*",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyIssue",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyIssue/*",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyPost",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyPost/*"
            ]
        }
    ]
}
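
Assuming you saved the policy above as dynamodb-policy.json, attaching it might look like the following (the role and policy names here are hypothetical; check the actual role name Apex created in the IAM console):

$ aws iam create-policy --policy-name CodeTenguDynamoDB --policy-document file://dynamodb-policy.json
$ aws iam attach-role-policy --role-name YOUR_PROJECT_lambda_function --policy-arn arn:aws:iam::123456789:policy/CodeTenguDynamoDB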

Write Lambda Functions

ref:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html

Node.js

The simplest handler:

exports.handle = (event, context, callback) => {
  doYourShit();
  callback(null, 'DONE');
};

ref:
https://docs.aws.amazon.com/lambda/latest/dg/programming-model.html

Call another Lambda function in a Lambda function:

You must make sure your Lambda role has permission to invoke other Lambda functions.

const util = require('util');

const aws = require('aws-sdk');

const lambda = new aws.Lambda(); // create the Lambda service client used below

const params = {
  FunctionName: 'LambdaBaku_syncIssue',
  InvocationType: 'Event', // means asynchronous execution
  Payload: JSON.stringify({ issue_number: curatedIssue.number }),
};

lambda.invoke(params, (err, data) => {
  if (err) {
    console.log('FAIL', params);
    console.log(util.inspect(err));
  } else {
    console.log(data);
  }
});
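
A sketch of the IAM statement that grants this permission (attach it to the calling function's role; the resource ARN is illustrative):

{
    "Effect": "Allow",
    "Action": ["lambda:InvokeFunction"],
    "Resource": "arn:aws:lambda:ap-northeast-1:123456789:function:LambdaBaku_syncIssue"
}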

ref:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
https://stackoverflow.com/questions/31714788/can-an-aws-lambda-function-call-another

Go

Write a Lambda function triggered by Amazon API Gateway:

package main

import (
    "encoding/json"
    "errors"
    "log"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/vinta/pangu"
)

var (
    // ErrTextNotProvided is thrown when text is not provided in HTTP query string
    ErrTextNotProvided = errors.New("No text was provided in HTTP query string")
)

// Handler is the AWS Lambda function handler
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    log.Printf("request id: %s\n", request.RequestContext.RequestID)

    text, ok := request.QueryStringParameters["t"]
    if !ok {
        errMap := map[string]string{
            "message": ErrTextNotProvided.Error(),
        }
        errMapJSON, _ := json.MarshalIndent(errMap, "", " ")

        return events.APIGatewayProxyResponse{
            Body: string(errMapJSON),
            StatusCode: 400,
        }, nil
    }

    log.Printf("text: %s\n", text)

    textPlainHeaders := map[string]string{
        "content-type": "text/plain; charset=utf-8",
    }

    return events.APIGatewayProxyResponse{
        Body: pangu.SpacingText(text),
        Headers: textPlainHeaders,
        StatusCode: 200,
    }, nil
}

func main() {
    lambda.Start(Handler)
}

ref:
https://aws.amazon.com/blogs/compute/announcing-go-support-for-aws-lambda/
https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model-handler-types.html
https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model-errors.html

Your "Integration Request" configurations in API Gateway should be like:

  • Integration type: Lambda Function
  • Use Lambda Proxy integration: Yes
  • Lambda Region: ap-northeast-1
  • Lambda Function: panguspace_spacing_text
  • Invoke with caller credentials: No
  • Credentials cache: Do not add caller credentials to cache key
  • Use Default Timeout: Yes

It's also worth noting that the API response is mainly defined by the APIGatewayProxyResponse struct in your Lambda function code; the "Integration Response" and "Method Response" configurations in API Gateway do not matter here.

ref:
https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-lambda-integration.html

Usage

Deploy all functions:

$ apex deploy

ref:
https://apex.run/#deploying-functions

Invoke a function:

# invoke a function directly
$ apex invoke spacing_text --logs
{
    "statusCode": 400,
    "headers": null,
    "body": "{\"message\": \"No text was provided in HTTP query string\"}"
}

# invoke a function with an API Gateway event
$ cat fixtures/spacing_text_event.json
{
    "queryStringParameters": {"t": "與PM戰鬥的人,應當小心自己不要成為PM"}
}
$ apex invoke spacing_text --logs < fixtures/spacing_text_event.json
{
    "statusCode": 200,
    "headers": {"content-type": "text/plain; charset=utf-8"},
    "body": "與 PM 戰鬥的人,應當小心自己不要成為 PM"
}

ref:
https://apex.run/#invoking-functions

View logs, which might be delayed by several seconds:

$ apex logs -f

Pack a function:

$ apex build spacing_text > spacing_text.zip

Configure API Gateway

Create API Keys

To set up API keys, do the following:

  1. Configure your API methods to require an API key
  2. Deploy your API
  3. Create an API key for the API in a region
  4. Create a Usage Plan and assign an API key to a certain Stage

In step 1, your "Method Request" configurations in API Gateway should be like:

  • Authorization: NONE
  • Request Validator: NONE
  • API Key Required: true

Now you are able to call the API with an x-api-key header:

$ curl -H "x-api-key: YOUR-API-KEY" https://xxx.execute-api.ap-northeast-1.amazonaws.com/v1/your-endpoint/

ref:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-usage-plans-with-rest-api.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html

Actually, you could release your APIs without API keys if you like.

Setup a Custom Domain

To set up a custom domain managed by Cloudflare, see the following link:
https://stackoverflow.com/a/46061708/885524

It is worth noting that although the Stack Overflow answer says to use Full (Strict) SSL mode, Full mode also works.

Moreover, it might take a long time for the "Target Domain Name" (xxx.cloudfront.net) to be generated.

Don't forget to add "Base Path Mappings" in API Gateway Custom Domain Names:

  • api.pangu.space
    • Target Domain Name: xxx.cloudfront.net
    • ACM Certificate: *.pangu.space
    • Base Path Mappings:
      • Path: /v1
      • Destination: Pangu:v1

Manage Infrastructures with Terraform

Terraform is a tool to manage your cloud infrastructures as code.

$ brew install terraform

$ tree .
.
├── functions
│   ├── introduce
│   │   └── main.go
│   └── spacing_text
│       └── main.go
└── infrastructure
    ├── main.tf
    └── variables.tf

Define variables and data sources:

# infrastructure/variables.tf
data "aws_caller_identity" "current" {}

variable "aws_region" {}
variable "apex_environment" {}
variable "apex_function_role" {}

variable "apex_function_arns" {
  type = "map"
}

variable "apex_function_names" {
  type = "map"
}

variable "apex_function_introduce" {}
variable "apex_function_spacing_text" {}

ref:
https://www.terraform.io/docs/providers/aws/d/caller_identity.html

Define AWS resources:

# infrastructure/main.tf
resource "aws_api_gateway_rest_api" "pangu" {
  name = "Pangu"
}

resource "aws_api_gateway_method" "pangu_root" {
  rest_api_id   = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id   = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "pangu_root_get" {
  rest_api_id             = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id             = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method             = "${aws_api_gateway_method.pangu_root.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "arn:aws:apigateway:${var.aws_region}:lambda:path/2015-03-31/functions/${var.apex_function_introduce}/invocations"
}

resource "aws_api_gateway_method_response" "pangu_root_get_200" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method = "${aws_api_gateway_method.pangu_root.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = true
  }
}

resource "aws_api_gateway_resource" "pangu_spacing_text" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  parent_id   = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  path_part   = "spacing-text"
}

resource "aws_api_gateway_method" "pangu_spacing_text_get" {
  rest_api_id      = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id      = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method      = "GET"
  authorization    = "NONE"
  api_key_required = true
}

resource "aws_api_gateway_integration" "pangu_spacing_text_get" {
  rest_api_id             = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id             = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method             = "${aws_api_gateway_method.pangu_spacing_text_get.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "arn:aws:apigateway:${var.aws_region}:lambda:path/2015-03-31/functions/${var.apex_function_spacing_text}/invocations"
}

resource "aws_api_gateway_method_response" "pangu_spacing_text_get_200" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method = "${aws_api_gateway_method.pangu_spacing_text_get.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = true
  }
}

resource "aws_api_gateway_deployment" "pangu" {
  depends_on = [
    "aws_api_gateway_method.pangu_root",
    "aws_api_gateway_integration.pangu_root_get",
    "aws_api_gateway_method_response.pangu_root_get_200",
    "aws_api_gateway_resource.pangu_spacing_text",
    "aws_api_gateway_method.pangu_spacing_text_get",
    "aws_api_gateway_integration.pangu_spacing_text_get",
    "aws_api_gateway_method_response.pangu_spacing_text_get_200",
  ]

  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  stage_name  = "v1"
}

resource "aws_lambda_permission" "pangu_root_get" {
  statement_id  = "AllowInvokeFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${var.apex_function_introduce}"
  principal     = "apigateway.amazonaws.com"

  source_arn = "arn:aws:execute-api:${var.aws_region}:${data.aws_caller_identity.current.account_id}:${aws_api_gateway_rest_api.pangu.id}/*/${aws_api_gateway_integration.pangu_root_get.http_method}/"
}

resource "aws_lambda_permission" "pangu_spacing_text" {
  statement_id  = "AllowInvokeFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${var.apex_function_spacing_text}"
  principal     = "apigateway.amazonaws.com"

  source_arn = "arn:aws:execute-api:${var.aws_region}:${data.aws_caller_identity.current.account_id}:${aws_api_gateway_rest_api.pangu.id}/*/${aws_api_gateway_integration.pangu_spacing_text_get.http_method}${aws_api_gateway_resource.pangu_spacing_text.path}"
}

ref:
https://www.terraform.io/docs/providers/aws/guides/serverless-with-aws-lambda-and-api-gateway.html

# download provider plugins
$ apex infra init

# view the generated execution plan
$ apex infra plan

# deploy your infrastructures
$ apex infra apply
$ apex infra apply -auto-approve

ref:
https://apex.run/#managing-infrastructure

kube-lego: Automatically provision TLS certificates in Kubernetes

kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt.

ref:
https://github.com/jetstack/kube-lego
https://letsencrypt.org/

I run kube-lego v0.1.5 with Kubernetes v1.9.4, and everything works fine.

Deploy kube-lego

It is strongly recommended to try Let's Encrypt Staging API first.

# kube-lego/deployment.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: kube-lego
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  LEGO.EMAIL: "[email protected]"
  # LEGO.URL: "https://acme-v01.api.letsencrypt.org/directory"
  LEGO.URL: "https://acme-staging.api.letsencrypt.org/directory"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-lego
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.5
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_LOG_LEVEL
          value: debug
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: LEGO.EMAIL
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: LEGO.URL
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1

ref:
https://github.com/jetstack/kube-lego/tree/master/examples

$ kubectl apply -f kube-lego/ -R

Configure the Ingress

  • Add an annotation kubernetes.io/tls-acme: "true" to metadata.annotations
  • Add domains to spec.tls.hosts.

spec.tls.secretName is the Secret used to store the certificate received from Let's Encrypt, i.e., tls.key and tls.crt. If no Secret with that name exists, kube-lego will create it.
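
Once the challenge succeeds, you can inspect the issued certificate in that Secret (using the secretName from the Ingress below):

$ kubectl get secret kittenphile-com-tls -o yaml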

# ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: simple-project
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - secretName: kittenphile-com-tls
    hosts:
    - kittenphile.com
    - www.kittenphile.com
    - api.kittenphile.com
  rules:
  - host: kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-frontend
          servicePort: http
  - host: www.kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-frontend
          servicePort: http
  - host: api.kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-api
          servicePort: http

ref:
https://kubernetes.io/docs/concepts/services-networking/ingress/#tls

$ kubectl apply -f ingress.yaml

You can find the exact ACME challenge paths by inspecting your Ingress resource.

$ kubectl describe ing simple-project
...
TLS:
  kittenphile-com-tls terminates kittenphile.com,www.kittenphile.com,api.kittenphile.com
Rules:
  Host                 Path  Backends
  ----                 ----  --------
kittenphile.com
                       /.well-known/acme-challenge/*   kube-lego-gce:8080 (<none>)
                       /*                              simple-frontend:http (<none>)
www.kittenphile.com
                       /.well-known/acme-challenge/*   kube-lego-gce:8080 (<none>)
                       /*                              simple-frontend:http (<none>)
api.kittenphile.com
                       /.well-known/acme-challenge/*   kube-lego-gce:8080 (<none>)
                       /*                              simple-api:http (<none>)
...

You might want to watch the logs of the kube-lego Pod to observe its progress.

$ kubectl logs -f deploy/kube-lego --namespace kube-lego

Create a Production Certificate

After you make sure everything works, you can request production certificates for your domains.

Follow these instructions:

  • Change LEGO_URL to https://acme-v01.api.letsencrypt.org/directory
  • Delete account secret kube-lego-account
  • Delete certificate secret kittenphile-com-tls
  • Restart kube-lego

$ kubectl get secrets --all-namespaces
$ kubectl delete secret kube-lego-account --namespace kube-lego && \
  kubectl delete secret kittenphile-com-tls

$ kubectl replace --force -f kube-lego/ -R
$ kubectl logs -f deploy/kube-lego --namespace kube-lego

ref:
https://github.com/jetstack/kube-lego#switching-from-staging-to-production