Deploy TICK stack on Kubernetes: Telegraf, InfluxDB, Chronograf, and Kapacitor

The TICK stack is a set of open source tools for building a monitoring and alerting system. Its components are:

  • Telegraf: Collect data
  • InfluxDB: Store data
  • Chronograf: Visualize data
  • Kapacitor: Raise alerts

You can think of the TICK stack as an alternative to the ELK stack (Elasticsearch, Logstash, and Kibana).

ref:
https://www.influxdata.com/time-series-platform/

InfluxDB

InfluxDB is a time series database optimized for time-stamped (time series) data. Time series data can be server metrics, application performance monitoring data, network data, sensor data, events, clicks, trades in a market, and many other types of analytics data.

ref:
https://www.influxdata.com/time-series-platform/influxdb/

Key Concepts

  • Retention Policy: Database configuration that indicates how long a database keeps data and how many copies of that data are stored in the cluster.
  • Measurement: Conceptually similar to an RDBMS table.
  • Point: Conceptually similar to an RDBMS row.
  • Timestamp: Primary key of every point.
  • Field: Conceptually similar to an RDBMS column (without indexes).
    • Field set means the collection of field key and field value pairs on a point.
  • Tag: Basically an indexed field.
    • Tag set means the collection of tag key and tag value pairs on a point.
  • Series: Data which shares the same measurement, retention policy, and tag set.

All timestamps in InfluxDB are in UTC.
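
To make these concepts concrete, here is a minimal sketch of writing and querying a single point through InfluxDB's HTTP API. It assumes an InfluxDB instance is reachable at localhost:8086 (for example via kubectl port-forward, as shown later for Chronograf) and that a database named telegraf already exists; the measurement, tag, and field names are made up for the example.

# one point in line protocol: measurement,tag_set field_set timestamp
# measurement: cpu / tag set: host=node-1 / field set: usage_idle=92.5 / timestamp: nanoseconds since epoch (UTC)
$ curl -i -XPOST "http://localhost:8086/write?db=telegraf" \
--data-binary 'cpu,host=node-1 usage_idle=92.5 1434055562000000000'

# points which share the same measurement, retention policy, and tag set belong to the same series
$ curl -G "http://localhost:8086/query?db=telegraf" \
--data-urlencode "q=SELECT * FROM cpu WHERE host = 'node-1'"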

ref:
https://docs.influxdata.com/influxdb/v1.5/concepts/key_concepts/
https://docs.influxdata.com/influxdb/v1.5/concepts/glossary/
https://docs.influxdata.com/influxdb/v1.5/concepts/crosswalk/
https://www.jianshu.com/p/a1344ca86e9b

Hardware Guideline

InfluxDB should be run on locally attached SSDs. Any other storage configuration will have lower performance characteristics and may not be able to recover from even small interruptions in normal processing.

ref:
https://docs.influxdata.com/influxdb/v1.5/guides/hardware_sizing/
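
The StatefulSets below reference storageClassName: ssd-ext4 (and hdd-ext4 for Kapacitor and Chronograf), which are not built-in StorageClasses. On GKE you could define them with something like the following sketch; the names are only a convention used in this post, and ext4 is already the default filesystem for GCE persistent disks.

$ kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd-ext4
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hdd-ext4
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
EOF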

Deployment

When you create a database, InfluxDB automatically creates a retention policy named autogen which has infinite retention. You may rename that retention policy or disable its auto-creation in the configuration file.

# tick/influxdb/service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: influxdb
spec:
  clusterIP: None
  selector:
    app: influxdb
  ports:
  - name: api
    port: 8086
    targetPort: api
  - name: admin
    port: 8083
    targetPort: admin
# tick/influxdb/statefulset.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: tick
  name: influxdb
data:
  influxdb.conf: |+
    [meta]
      dir = "/var/lib/influxdb/meta"
      retention-autocreate = false
    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"
  init.iql: |+
    CREATE DATABASE "telegraf" WITH DURATION 90d REPLICATION 1 SHARD DURATION 1h NAME "rp_90d"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: tick
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb
  serviceName: influxdb
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ssd-ext4
      resources:
        requests:
          storage: 250Gi
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      volumes:
      - name: config
        configMap:
          name: influxdb
          items:
            - key: influxdb.conf
              path: influxdb.conf
      - name: init-iql
        configMap:
          name: influxdb
          items:
            - key: init.iql
              path: init.iql
      containers:
      - name: influxdb
        image: influxdb:1.5.2-alpine
        ports:
        - name: api
          containerPort: 8086
        - name: admin
          containerPort: 8083
        volumeMounts:
        - name: data
          mountPath: /var/lib/influxdb
        - name: config
          mountPath: /etc/influxdb
        - name: init-iql
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            cpu: 500m
            memory: 10G
          limits:
            cpu: 4000m
            memory: 10G
        readinessProbe:
          httpGet:
            path: /ping
            port: api
          initialDelaySeconds: 5
          timeoutSeconds: 5

ref:
https://hub.docker.com/_/influxdb/
https://docs.influxdata.com/influxdb/v1.5/administration/config/

$ kubectl apply -f tick/influxdb/ -R

Usage

$ kubectl get all --namespace tick
$ kubectl exec -i -t influxdb-0 --namespace tick -- influx
SHOW DATABASES
USE telegraf
SHOW MEASUREMENTS
SELECT * FROM diskio LIMIT 2
DROP MEASUREMENT access_log
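
The init script above creates the telegraf database with the rp_90d retention policy. You can verify it, or adjust its duration later, from the same influx shell; the ALTER statement below is only a sketch of shortening the duration to 30 days.

SHOW RETENTION POLICIES ON telegraf
ALTER RETENTION POLICY "rp_90d" ON "telegraf" DURATION 30d SHARD DURATION 1h DEFAULT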

ref:
https://docs.influxdata.com/influxdb/v1.5/query_language/data_download/
https://docs.influxdata.com/influxdb/v1.5/query_language/data_exploration/

Telegraf

Telegraf is a plugin-driven server agent for collecting arbitrary metrics and writing them to multiple data stores, including InfluxDB, Elasticsearch, CloudWatch, and so on.

ref:
https://www.influxdata.com/time-series-platform/telegraf/

Deployment

Collect system metrics on every node using a DaemonSet:

# tick/telegraf/daemonset.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: tick
  name: telegraf-ds
data:
  telegraf.conf: |+
    [agent]
      interval = "10s"
      round_interval = true
      metric_batch_size = 1000
      metric_buffer_limit = 10000
      collection_jitter = "0s"
      flush_interval = "10s"
      flush_jitter = "0s"
      precision = ""
      debug = true
      quiet = false
      logfile = ""
      hostname = "$HOSTNAME"
      omit_hostname = false
    [[outputs.influxdb]]
      urls = ["http://influxdb.tick.svc.cluster.local:8086"]
      database = "telegraf"
      retention_policy = "rp_90d"
      write_consistency = "any"
      timeout = "5s"
      username = ""
      password = ""
      user_agent = "telegraf"
      insecure_skip_verify = false
    [[inputs.cpu]]
      percpu = true
      totalcpu = true
      collect_cpu_time = false
    [[inputs.disk]]
      ignore_fs = ["tmpfs", "devtmpfs"]
    [[inputs.diskio]]
    [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"
      container_names = []
      timeout = "5s"
      perdevice = true
      total = false
    [[inputs.kernel]]
    [[inputs.kubernetes]]
      url = "http://$HOSTNAME:10255"
      bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
      insecure_skip_verify = true
    [[inputs.mem]]
    [[inputs.processes]]
    [[inputs.swap]]
    [[inputs.system]]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: tick
  name: telegraf-ds
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 3
  selector:
    matchLabels:
      app: telegraf
      type: ds
  template:
    metadata:
      labels:
        app: telegraf
        type: ds
    spec:
      containers:
      - name: telegraf
        image: telegraf:1.5.3-alpine
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: "HOST_PROC"
          value: "/rootfs/proc"
        - name: "HOST_SYS"
          value: "/rootfs/sys"
        volumeMounts:
        - name: sys
          mountPath: /rootfs/sys
          readOnly: true
        - name: proc
          mountPath: /rootfs/proc
          readOnly: true
        - name: docker-socket
          mountPath: /var/run/docker.sock
        - name: varrunutmp
          mountPath: /var/run/utmp
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config
          mountPath: /etc/telegraf
          readOnly: true
        resources:
          requests:
            cpu: 50m
            memory: 500Mi
          limits:
            cpu: 200m
            memory: 500Mi
      volumes:
      - name: sys
        hostPath:
          path: /sys
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: proc
        hostPath:
          path: /proc
      - name: varrunutmp
        hostPath:
          path: /var/run/utmp
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config
        configMap:
          name: telegraf-ds

ref:
https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/docker/README.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/kubernetes/README.md

Collect arbitrary metrics:

# tick/telegraf/deployment.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: tick
  name: telegraf-infra
data:
  telegraf.conf: |+
    [agent]
      interval = "10s"
      round_interval = true
      metric_batch_size = 1000
      metric_buffer_limit = 10000
      collection_jitter = "0s"
      flush_interval = "10s"
      flush_jitter = "0s"
      precision = ""
      debug = true
      quiet = false
      logfile = ""
      hostname = "telegraf-infra"
      omit_hostname = false
    [[outputs.influxdb]]
      urls = ["http://influxdb.tick.svc.cluster.local:8086"]
      database = "telegraf"
      retention_policy = "rp_90d"
      write_consistency = "any"
      timeout = "5s"
      username = ""
      password = ""
      user_agent = "telegraf"
      insecure_skip_verify = false
    [[inputs.http_listener]]
      service_address = ":8186"
    [[inputs.socket_listener]]
      service_address = "udp://:8092"
      data_format = "influx"
    [[inputs.redis]]
      servers = ["tcp://redis-cache.default.svc.cluster.local", "tcp://redis-broker.default.svc.cluster.local"]
    [[inputs.mongodb]]
      servers = ["mongodb://mongodb.default.svc.cluster.local"]
      gather_perdb_stats = true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tick
  name: telegraf-infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: telegraf
      type: infra
  template:
    metadata:
      labels:
        app: telegraf
        type: infra
    spec:
      containers:
      - name: telegraf
        image: telegraf:1.5.3-alpine
        ports:
        - name: udp
          protocol: UDP
          containerPort: 8092
        - name: http
          containerPort: 8186
        volumeMounts:
        - name: config
          mountPath: /etc/telegraf
        resources:
          requests:
            cpu: 50m
            memory: 500Mi
          limits:
            cpu: 500m
            memory: 500Mi
      volumes:
      - name: config
        configMap:
          name: telegraf-infra

ref:
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/exec/README.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/mongodb/README.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis/README.md

$ kubectl apply -f tick/telegraf/ -R

Furthermore, you might also want to parse the stdout of all containers; in that case, you can use the logparser input plugin in a DaemonSet:

# telegraf.conf
[agent]
    interval = "1s"
    round_interval = true
    metric_batch_size = 1000
    metric_buffer_limit = 10000
    collection_jitter = "0s"
    flush_interval = "10s"
    flush_jitter = "0s"
    precision = ""
    debug = true
    quiet = false
    logfile = ""
    hostname = "$HOSTNAME"
    omit_hostname = false
[[outputs.file]]
  files = ["stdout"]
[[outputs.influxdb]]
    urls = ["http://influxdb.tick.svc.cluster.local:8086"]
    database = "your_db"
    retention_policy = "rp_90d"
    write_consistency = "any"
    timeout = "5s"
    username = ""
    password = ""
    user_agent = "telegraf"
    insecure_skip_verify = false
    namepass = ["logparser_*"]
[[inputs.logparser]]
    name_override = "logparser_api"
    files = ["/var/log/containers/api*.log"]
    from_beginning = false
    [inputs.logparser.grok]
    measurement = "api_access_log"
    patterns = ["bytes\\} \\[%{DATA:timestamp:ts-ansic}\\] %{WORD:request_method} %{URIPATH:request_path}%{DATA:request_params:drop} =\\\\u003e generated %{NUMBER:response_bytes:int} bytes in %{NUMBER:response_time_ms:int} msecs \\(HTTP/1.1 %{RESPONSE_CODE}"]
[[inputs.logparser]]
    name_override = "logparser_worker"
    files = ["/var/log/containers/worker*.log"]
    from_beginning = false
    [inputs.logparser.grok]
    measurement = "worker_task_log"
    patterns = ['''\[%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"},%{WORD:value1:drop}: %{LOGLEVEL:loglevel:tag}\/MainProcess\] Task %{PROG:task_name:tag}\[%{UUID:task_id:drop}\] %{WORD:execution_status:tag} in %{DURATION:execution_time:duration}''']

ref:
https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logparser
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/logparser/grok/patterns/influx-patterns
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

Debug telegraf.conf

$ docker run \
-v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf \
-v $PWD/api-1.log:/var/log/containers/api-1.log \
-v $PWD/worker-1.log:/var/log/containers/worker-1.log \
telegraf

ref:
https://grokdebug.herokuapp.com/

Usage

from telegraf.client import TelegrafClient

client = TelegrafClient(host='telegraf.tick.svc.cluster.local', port=8092)
client.metric('some_measurement', {'value_a': 100, 'value_b': 0, 'value_c': True}, tags={'country': 'taiwan'})

ref:
https://github.com/paksu/pytelegraf
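
Besides the UDP socket_listener, the telegraf-infra Deployment above also exposes inputs.http_listener on port 8186, which accepts InfluxDB's /write format. Assuming you expose it with a Service (for example one named telegraf, as in the Python snippet above; the Service manifest is not shown in this post), you can also push metrics with plain HTTP:

# line protocol posted to the http_listener's /write endpoint
$ curl -i -XPOST "http://telegraf.tick.svc.cluster.local:8186/write" \
--data-binary 'some_measurement,country=taiwan value_a=100,value_b=0'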

Kapacitor

Kapacitor is a so-called real-time streaming data processing engine; in practice, you would mostly use it to trigger alerts.

ref:
https://www.influxdata.com/time-series-platform/kapacitor/

Deployment

# tick/kapacitor/service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: kapacitor-ss
spec:
  clusterIP: None
  selector:
    app: kapacitor
  ports:
  - name: api
    port: 9092
    targetPort: api
---
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: kapacitor
spec:
  type: ClusterIP
  selector:
    app: kapacitor
  ports:
  - name: api
    port: 9092
    targetPort: api
# tick/kapacitor/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: tick
  name: kapacitor
spec:
  replicas: 1
  serviceName: kapacitor-ss
  selector:
    matchLabels:
      app: kapacitor
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: hdd-ext4
      resources:
        requests:
          storage: 10Gi
  template:
    metadata:
      labels:
        app: kapacitor
    spec:
      containers:
      - name: kapacitor
        image: kapacitor:1.4.1-alpine
        env:
        - name: KAPACITOR_HOSTNAME
          value: kapacitor
        - name: KAPACITOR_INFLUXDB_0_URLS_0
          value: http://influxdb.tick.svc.cluster.local:8086
        ports:
        - name: api
          containerPort: 9092
        volumeMounts:
        - name: data
          mountPath: /var/lib/kapacitor
        resources:
          requests:
            cpu: 50m
            memory: 500Mi
          limits:
            cpu: 500m
            memory: 500Mi

ref:
https://docs.influxdata.com/kapacitor/v1.4/

$ kubectl apply -f tick/kapacitor/ -R
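
As a quick smoke test, you can exec into the Pod and define a minimal alert task with the kapacitor CLI. The TICKscript below is only a sketch: it assumes system metrics from the Telegraf DaemonSet above are flowing into the telegraf database with the rp_90d retention policy, and it simply logs a critical alert when usage_idle drops below 10.

$ kubectl exec -i -t kapacitor-0 --namespace tick -- sh

# inside the container: write a minimal TICKscript
$ cat > /tmp/cpu_alert.tick <<EOF
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        .log('/tmp/cpu_alert.log')
EOF

$ kapacitor define cpu_alert -type stream -tick /tmp/cpu_alert.tick -dbrp telegraf.rp_90d
$ kapacitor enable cpu_alert
$ kapacitor list tasks
$ kapacitor show cpu_alert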

Chronograf

Chronograf is the web UI for the TICK stack.

ref:
https://www.influxdata.com/time-series-platform/chronograf/

Deployment

# tick/chronograf/service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: chronograf-ss
spec:
  clusterIP: None
  selector:
    app: chronograf
---
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: chronograf
spec:
  selector:
    app: chronograf
  ports:
  - name: api
    port: 80
    targetPort: api
# tick/chronograf/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: tick
  name: chronograf
spec:
  replicas: 1
  serviceName: chronograf-ss
  selector:
    matchLabels:
      app: chronograf
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: hdd-ext4
      resources:
        requests:
          storage: 10Gi
  template:
    metadata:
      labels:
        app: chronograf
    spec:
      containers:
      - name: chronograf
        image: chronograf:1.4.4.0-alpine
        command: ["chronograf"]
        args: ["--influxdb-url=http://influxdb.tick.svc.cluster.local:8086", "--kapacitor-url=http://kapacitor.tick.svc.cluster.local:9092"]
        ports:
        - name: api
          containerPort: 8888
        livenessProbe:
          httpGet:
            path: /ping
            port: api
        readinessProbe:
          httpGet:
            path: /ping
            port: api
        volumeMounts:
        - name: data
          mountPath: /var/lib/chronograf
        resources:
          requests:
            cpu: 100m
            memory: 1000Mi
          limits:
            cpu: 2000m
            memory: 1000Mi

ref:
https://docs.influxdata.com/chronograf/v1.4/

$ kubectl apply -f tick/chronograf/ -R
$ kubectl port-forward svc/chronograf 8888:80 --namespace tick

ref:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/

Apex and Terraform: The easiest way to manage AWS Lambda functions

AWS Lambda lets you run code without provisioning or managing servers, a model commonly called serverless or Function as a Service (FaaS).

Apex is a Go command-line tool for managing and deploying your serverless functions on AWS Lambda. Apex also integrates with Terraform to provide cloud infrastructure management, for instance, configuring your AWS Lambda functions with Amazon API Gateway.

ref:
https://aws.amazon.com/lambda/
https://aws.amazon.com/api-gateway/
https://github.com/apex/apex

You can browse the projects created in this post on GitHub:
https://github.com/vinta/pangu.space
https://github.com/CodeTengu/LambdaBaku

Install

$ curl https://raw.githubusercontent.com/apex/apex/master/install.sh | sh

ref:
http://apex.run/#installation

Initialize

It is recommended to configure your AWS credentials with awscli.

$ pip install awscli
$ aws configure

ref:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

To use Apex to manage Lambda functions, you have to make sure your AWS credentials have at least the following IAM permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iam:CreateRole",
        "iam:CreatePolicy",
        "iam:AttachRolePolicy",
        "iam:PassRole",
        "lambda:GetFunction",
        "lambda:ListFunctions",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:InvokeFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:UpdateFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:CreateAlias",
        "lambda:UpdateAlias",
        "lambda:GetAlias",
        "lambda:ListAliases",
        "lambda:ListVersionsByFunction",
        "logs:FilterLogEvents",
        "cloudwatch:GetMetricStatistics"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

$ apex init

ref:
http://apex.run/#getting-started

After running apex init, Apex creates a Role and a Policy. You should be able to find them in the AWS IAM Management Console. If you want to access other AWS resources in your Lambda functions, for instance, S3 buckets, DynamoDB tables, or SNS, you must create a new Policy that grants the appropriate permissions and attach it to the Role that Apex created.

Here is an example Policy for operating on certain DynamoDB tables:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt123456789",
            "Effect": "Allow",
            "Action": [
                "dynamodb:*"
            ],
            "Resource": [
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_Preference",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_Preference/*",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyIssue",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyIssue/*",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyPost",
                "arn:aws:dynamodb:ap-northeast-1:123456789:table/CodeTengu_WeeklyPost/*"
            ]
        }
    ]
}

Write Lambda Functions

ref:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html

Node.js

The simplest handler:

const aws = require('aws-sdk');

exports.handle = (event, context, callback) => {
  doYourShit();
  callback(null, 'DONE');
};

ref:
https://docs.aws.amazon.com/lambda/latest/dg/programming-model.html

Call another Lambda function from within a Lambda function:

You must make sure your Lambda role has permission to invoke other Lambda functions.

const util = require('util');

const aws = require('aws-sdk');

// create a Lambda service client; in the Lambda runtime the region and credentials are picked up automatically
const lambda = new aws.Lambda();

const params = {
  FunctionName: 'LambdaBaku_syncIssue',
  InvocationType: 'Event', // means asynchronous execution
  Payload: JSON.stringify({ issue_number: curatedIssue.number }),
};

lambda.invoke(params, (err, data) => {
  if (err) {
    console.log('FAIL', params);
    console.log(util.inspect(err));
  } else {
    console.log(data);
  }
});

ref:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Lambda.html
https://stackoverflow.com/questions/31714788/can-an-aws-lambda-function-call-another

Go

Write a Lambda function triggered by Amazon API Gateway:

package main

import (
    "encoding/json"
    "errors"
    "log"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/vinta/pangu"
)

var (
    // ErrTextNotProvided is thrown when text is not provided in HTTP query string
    ErrTextNotProvided = errors.New("No text was provided in HTTP query string")
)

// Handler is the AWS Lambda function handler
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    log.Printf("request id: %s\n", request.RequestContext.RequestID)

    text, ok := request.QueryStringParameters["t"]
    if !ok {
        errMap := map[string]string{
            "message": ErrTextNotProvided.Error(),
        }
        errMapJSON, _ := json.MarshalIndent(errMap, "", " ")

        return events.APIGatewayProxyResponse{
            Body: string(errMapJSON),
            StatusCode: 400,
        }, nil
    }

    log.Printf("text: %s\n", text)

    textPlainHeaders := map[string]string{
        "content-type": "text/plain; charset=utf-8",
    }

    return events.APIGatewayProxyResponse{
        Body: pangu.SpacingText(text),
        Headers: textPlainHeaders,
        StatusCode: 200,
    }, nil
}

func main() {
    lambda.Start(Handler)
}

ref:
https://aws.amazon.com/blogs/compute/announcing-go-support-for-aws-lambda/
https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model-handler-types.html
https://docs.aws.amazon.com/lambda/latest/dg/go-programming-model-errors.html

Your "Integration Request" configurations in API Gateway should be like:

  • Integration type: Lambda Function
  • Use Lambda Proxy integration: Yes
  • Lambda Region: ap-northeast-1
  • Lambda Function: panguspace_spacing_text
  • Invoke with caller credentials: No
  • Credentials cache: Do not add caller credentials to cache key
  • Use Default Timeout: Yes

It's also worth noting that with Lambda proxy integration, the API response is defined entirely by the APIGatewayProxyResponse returned from the Lambda function code; the "Integration Response" and "Method Response" configurations in API Gateway do not apply.

ref:
https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-lambda-integration.html

Usage

Deploy all functions:

$ apex deploy

ref:
http://apex.run/#deploying-functions

Invoke a function:

# invoke a function directly
$ apex invoke spacing_text --logs
{
    "statusCode": 400,
    "headers": null,
    "body":"{\"message\": \"No text was provided in the HTTP query string\"}"
}

# invoke a function with an API Gateway event
$ cat fixtures/spacing_text_event.json
{
    "queryStringParameters": {"t": "與PM戰鬥的人,應當小心自己不要成為PM"}
}
$ apex invoke spacing_text --logs < fixtures/spacing_text_event.json
{
    "statusCode": 200,
    "headers": {"content-type": "text/plain; charset=utf-8"},
    "body": "與 PM 戰鬥的人,應當小心自己不要成為 PM"
}

ref:
http://apex.run/#invoking-functions

View logs (which might be delayed by several seconds):

$ apex logs -f

Pack a function:

$ apex build spacing_text > spacing_text.zip

Configure API Gateway

Create API Keys

To set up API keys, do the following:

  1. Configure your API methods to require an API key
  2. Deploy your API
  3. Create an API key for the API in a region
  4. Create a Usage Plan and associate it with an API key and a certain Stage

In step 1, your "Method Request" configuration in API Gateway should look like this:

  • Authorization: NONE
  • Request Validator: NONE
  • API Key Required: true

Now you are able to call the API with an x-api-key header:

$ curl -H "x-api-key: YOUR-API-KEY" https://xxx.execute-api.ap-northeast-1.amazonaws.com/v1/your-endpoint/

ref:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-usage-plans-with-rest-api.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html

Actually, you could release your APIs without API keys if you like.

Set Up a Custom Domain

To set up a custom domain managed by Cloudflare, see the following link:
https://stackoverflow.com/a/46061708/885524

It is worth noting that although the Stack Overflow answer says to use the Full (Strict) SSL mode, Full mode also works.

Moreover, it might take a long time for the "Target Domain Name" (xxx.cloudfront.net) to be generated.

Don't forget to add "Base Path Mappings" in API Gateway Custom Domain Names:

  • api.pangu.space
    • Target Domain Name: xxx.cloudfront.net
    • ACM Certificate: *.pangu.space
    • Base Path Mappings:
      • Path: /v1
      • Destination: Pangu:v1

Manage Infrastructures with Terraform

Terraform is a tool for managing your cloud infrastructure as code.

$ brew install terraform

$ tree .
.
├── functions
│   ├── introduce
│   │   └── main.go
│   └── spacing_text
│       └── main.go
└── infrastructure
    ├── main.tf
    └── variables.tf

Define variables and data sources:

# infrastructure/variables.tf
data "aws_caller_identity" "current" {}

variable "aws_region" {}
variable "apex_environment" {}
variable "apex_function_role" {}

variable "apex_function_arns" {
  type = "map"
}

variable "apex_function_names" {
  type = "map"
}

variable "apex_function_introduce" {}
variable "apex_function_spacing_text" {}

ref:
https://www.terraform.io/docs/providers/aws/d/caller_identity.html

Define AWS resources:

# infrastructure/main.tf
resource "aws_api_gateway_rest_api" "pangu" {
  name = "Pangu"
}

resource "aws_api_gateway_method" "pangu_root" {
  rest_api_id   = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id   = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "pangu_root_get" {
  rest_api_id             = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id             = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method             = "${aws_api_gateway_method.pangu_root.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "arn:aws:apigateway:${var.aws_region}:lambda:path/2015-03-31/functions/${var.apex_function_introduce}/invocations"
}

resource "aws_api_gateway_method_response" "pangu_root_get_200" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  http_method = "${aws_api_gateway_method.pangu_root.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = true
  }
}

resource "aws_api_gateway_resource" "pangu_spacing_text" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  parent_id   = "${aws_api_gateway_rest_api.pangu.root_resource_id}"
  path_part   = "spacing-text"
}

resource "aws_api_gateway_method" "pangu_spacing_text_get" {
  rest_api_id      = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id      = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method      = "GET"
  authorization    = "NONE"
  api_key_required = true
}

resource "aws_api_gateway_integration" "pangu_spacing_text_get" {
  rest_api_id             = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id             = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method             = "${aws_api_gateway_method.pangu_spacing_text_get.http_method}"
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = "arn:aws:apigateway:${var.aws_region}:lambda:path/2015-03-31/functions/${var.apex_function_spacing_text}/invocations"
}

resource "aws_api_gateway_method_response" "pangu_spacing_text_get_200" {
  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  resource_id = "${aws_api_gateway_resource.pangu_spacing_text.id}"
  http_method = "${aws_api_gateway_method.pangu_spacing_text_get.http_method}"
  status_code = "200"

  response_models = {
    "application/json" = "Empty"
  }

  response_parameters = {
    "method.response.header.Access-Control-Allow-Origin" = true
  }
}

resource "aws_api_gateway_deployment" "pangu" {
  depends_on = [
    "aws_api_gateway_method.pangu_root",
    "aws_api_gateway_integration.pangu_root_get",
    "aws_api_gateway_method_response.pangu_root_get_200",
    "aws_api_gateway_resource.pangu_spacing_text",
    "aws_api_gateway_method.pangu_spacing_text_get",
    "aws_api_gateway_integration.pangu_spacing_text_get",
    "aws_api_gateway_method_response.pangu_spacing_text_get_200",
  ]

  rest_api_id = "${aws_api_gateway_rest_api.pangu.id}"
  stage_name  = "v1"
}

resource "aws_lambda_permission" "pangu_root_get" {
  statement_id  = "AllowInvokeFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${var.apex_function_introduce}"
  principal     = "apigateway.amazonaws.com"

  source_arn = "arn:aws:execute-api:${var.aws_region}:${data.aws_caller_identity.current.account_id}:${aws_api_gateway_rest_api.pangu.id}/*/${aws_api_gateway_integration.pangu_root_get.http_method}/"
}

resource "aws_lambda_permission" "pangu_spacing_text" {
  statement_id  = "AllowInvokeFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = "${var.apex_function_spacing_text}"
  principal     = "apigateway.amazonaws.com"

  source_arn = "arn:aws:execute-api:${var.aws_region}:${data.aws_caller_identity.current.account_id}:${aws_api_gateway_rest_api.pangu.id}/*/${aws_api_gateway_integration.pangu_spacing_text_get.http_method}${aws_api_gateway_resource.pangu_spacing_text.path}"
}

ref:
https://www.terraform.io/docs/providers/aws/guides/serverless-with-aws-lambda-and-api-gateway.html

# download provider plugins
$ apex infra init

# view the generated execution plan
$ apex infra plan

# deploy your infrastructure
$ apex infra apply
$ apex infra apply -auto-approve

ref:
http://apex.run/#managing-infrastructure

kube-lego: Automatically provision TLS certificates in Kubernetes

kube-lego automatically requests certificates for Kubernetes Ingress resources from Let's Encrypt.

ref:
https://github.com/jetstack/kube-lego
https://letsencrypt.org/

I run kube-lego v0.1.5 with Kubernetes v1.9.4, and everything works fine.

Deploy kube-lego

It is strongly recommended to try the Let's Encrypt staging API first.

# kube-lego/deployment.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: kube-lego
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  LEGO.EMAIL: "[email protected]"
  # LEGO.URL: "https://acme-v01.api.letsencrypt.org/directory"
  LEGO.URL: "https://acme-staging.api.letsencrypt.org/directory"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-lego
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.5
        ports:
        - containerPort: 8080
        env:
        - name: LEGO_LOG_LEVEL
          value: debug
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: LEGO.EMAIL
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: LEGO.URL
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1

ref:
https://github.com/jetstack/kube-lego/tree/master/examples

$ kubectl apply -f kube-lego/ -R

Configure the Ingress

  • Add an annotation kubernetes.io/tls-acme: "true" to metadata.annotations
  • Add domains to spec.tls.hosts.

spec.tls.secretName is the Secret used to store the certificate received from Let's Encrypt, i.e., tls.key and tls.crt. If no Secret exists with that name, it will be created by kube-lego.

# ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: simple-project
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - secretName: kittenphile-com-tls
    hosts:
    - kittenphile.com
    - www.kittenphile.com
    - api.kittenphile.com
  rules:
  - host: kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-frontend
          servicePort: http
  - host: www.kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-frontend
          servicePort: http
  - host: api.kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-api
          servicePort: http

ref:
https://kubernetes.io/docs/concepts/services-networking/ingress/#tls

$ kubectl apply -f ingress.yaml

You can find the exact ACME challenge paths by inspecting your Ingress resource.

$ kubectl describe ing simple-project
...
TLS:
  kittenphile-com-tls terminates kittenphile.com,www.kittenphile.com,api.kittenphile.com
Rules:
  Host                 Path  Backends
  ----                 ----  --------
kittenphile.com
                       /.well-known/acme-challenge/*   kube-lego-gce:8080 (<none>)
                       /*                              simple-frontend:http (<none>)
www.kittenphile.com
                       /.well-known/acme-challenge/*   kube-lego-gce:8080 (<none>)
                       /*                              simple-frontend:http (<none>)
api.kittenphile.com
                       /.well-known/acme-challenge/*   kube-lego-gce:8080 (<none>)
                       /*                              simple-api:http (<none>)
...

You might want to watch the logs of the kube-lego Pod to observe the progress.

$ kubectl logs -f deploy/kube-lego --namespace kube-lego
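
Once kube-lego has passed the ACME challenges, the kittenphile-com-tls Secret should contain tls.crt and tls.key. A quick sanity check (the jsonpath expression just extracts and decodes the certificate):

$ kubectl get secret kittenphile-com-tls -o yaml

$ kubectl get secret kittenphile-com-tls -o "jsonpath={.data['tls\.crt']}" | \
  base64 --decode | \
  openssl x509 -noout -subject -issuer -dates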

Create a Production Certificate

After you make sure everything works, you can request production certificates for your domains.

Follow these instructions:

  • Change LEGO_URL to https://acme-v01.api.letsencrypt.org/directory
  • Delete account secret kube-lego-account
  • Delete certificate secret kittenphile-com-tls
  • Restart kube-lego

$ kubectl get secrets --all-namespaces
$ kubectl delete secret kube-lego-account --namespace kube-lego && \
  kubectl delete secret kittenphile-com-tls

$ kubectl replace --force -f kube-lego/ -R
$ kubectl logs -f deploy/kube-lego --namespace kube-lego

ref:
https://github.com/jetstack/kube-lego#switching-from-staging-to-production

cert-manager: Automatically provision TLS certificates in Kubernetes

cert-manager is an addon for automatically provisioning TLS certificates from Let's Encrypt for your Kubernetes cluster, and it is the official successor of kube-lego.

ref:
https://github.com/jetstack/cert-manager
https://letsencrypt.org/

If you are currently using kube-lego, see the following link:

kube-lego: Automatically provision TLS certificates in Kubernetes
https://vinta.ws/code/kube-lego-automatically-provision-tls-certificates-in-kubernetes.html

Install

This assumes you already have Helm set up. If not, see the following link:

Helm: the package manager for Kubernetes
https://vinta.ws/code/helm-the-package-manager-for-kubernetes.html

$ helm install \
--name cert-manager \
--set rbac.create=false \
stable/cert-manager

$ helm ls --all cert-manager

$ kubectl logs deploy/cert-manager-cert-manager cert-manager -f
$ kubectl logs deploy/cert-manager-cert-manager ingress-shim -f

ref:
https://github.com/jetstack/cert-manager/blob/master/docs/user-guides/deploying.md
https://docs.helm.sh/helm/#helm-install

Create Cluster Issuers

An Issuer represents a certificate authority that provisions TLS certificates for your domains, for instance, Let's Encrypt.

spec.acme.privateKeySecretRef is the Secret used to store the ACME account private key; cert-manager creates it for you.

# cert-manager/issuer.yaml
kind: ClusterIssuer
apiVersion: certmanager.k8s.io/v1alpha1
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v01.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-prod-private-key
    http01: {}
---
kind: ClusterIssuer
apiVersion: certmanager.k8s.io/v1alpha1
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging.api.letsencrypt.org/directory
    email: [email protected]
    privateKeySecretRef:
      name: letsencrypt-staging-private-key
    http01: {}

$ kubectl apply -f cert-manager/issuer.yaml

$ kubectl get clusterissuers
$ kubectl describe clusterissuer letsencrypt-staging

$ kubectl get secrets --all-namespaces
NAMESPACE     NAME                                    TYPE                                  DATA      AGE
default       cert-manager-cert-manager-token-5j4gw   kubernetes.io/service-account-token   3         6m
kube-system   letsencrypt-prod-private-key            Opaque                                1         40s
kube-system   letsencrypt-staging-private-key         Opaque                                1         40s
...

ref:
https://github.com/jetstack/cert-manager/blob/master/docs/user-guides/cluster-issuers.md
https://github.com/jetstack/cert-manager/tree/master/docs/api-types/issuer

Create the Ingress

Assuming you already have an Ingress like this:

# ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: simple-project
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-frontend
          servicePort: http
  - host: api.kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-api
          servicePort: http

Before you test certificate provisioning, you must add DNS A records for all your domains that point to the "Address" of the Ingress.

$ kubectl apply -f ingress.yaml

$ kubectl describe ing simple-project
Name:             simple-project
Namespace:        default
Address:          12.34.56.78
Default backend:  default-http-backend:80 (10.44.2.5:8080)

$ dig kittenphile.com

Create a Staging Certificate

The Let's Encrypt production API has a rate limit of 20 requests per domain per week, so it is strongly recommended to test your configuration against the staging API first.

A Certificate contains the information required to make a certificate signing request for a given Issuer.

# cert-manager/certificate.yaml
kind: Certificate
apiVersion: certmanager.k8s.io/v1alpha1
metadata:
  name: kittenphile-com
spec:
  secretName: kittenphile-com-tls
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: kittenphile.com
  dnsNames:
  - kittenphile.com
  - api.kittenphile.com
  acme:
    config:
    - http01:
        ingress: simple-project
      domains:
      - kittenphile.com
      - api.kittenphile.com

ref:
https://github.com/jetstack/cert-manager/blob/master/docs/user-guides/acme-http-validation.md
https://blog.n1analytics.com/free-automated-tls-certificates-on-k8s/

Configure the Ingress

Add the domains you want TLS certificates for to spec.tls.hosts.

spec.tls.secretName is the Secret used to store the certificate received from Let's Encrypt, i.e., tls.key and tls.crt.

# ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: simple-project
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: kittenphile-com-tls
    hosts:
    - kittenphile.com
    - api.kittenphile.com
  rules:
  - host: kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-frontend
          servicePort: http
  - host: api.kittenphile.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: simple-api
          servicePort: http

ref:
https://kubernetes.io/docs/concepts/services-networking/ingress/#tls

cert-manager watches for new domain entries in any Certificate resource, requests certificates from Let's Encrypt for those domains, and automatically creates ACME HTTP-01 challenge endpoints attached to the Ingress.

You can see the issuing progress in the "Events" section of the kittenphile-com Certificate.

$ kubectl logs deploy/cert-manager-cert-manager cert-manager -f

$ kubectl apply -f ingress.yaml
$ kubectl apply -f cert-manager/certificate.yaml

$ kubectl describe certificate kittenphile-com
...
Events:
  Type    Reason               Age                From                     Message
  ----    ------               ----               ----                     -------
  Normal  PresentChallenge     5m                 cert-manager-controller  Presenting http-01 challenge for domain kittenphile.com
  Normal  PresentChallenge     5m                 cert-manager-controller  Presenting http-01 challenge for domain api.kittenphile.com
  Normal  SelfCheck            5m                 cert-manager-controller  Performing self-check for domain kittenphile.com
  Normal  SelfCheck            5m                 cert-manager-controller  Performing self-check for domain api.kittenphile.com
  Normal  ObtainAuthorization  25s                cert-manager-controller  Obtained authorization for domain kittenphile.com
  Normal  ObtainAuthorization  36s                cert-manager-controller  Obtained authorization for domain api.kittenphile.com
  Normal  RenewalScheduled     19s (x3 over 23s)  cert-manager-controller  Certificate scheduled for renewal in 1438 hours
  Normal  CeritifcateIssued    19s (x3 over 24s)  cert-manager-controller  Certificated issued successfully
...

You could also find the exact ACME challenge path by inspecting your Ingress resource.

$ kubectl describe ing simple-project
...
TLS:
  kittenphile-com-tls terminates kittenphile.com,api.kittenphile.com
Rules:
  Host                Path  Backends
  ----                ----  --------
kittenphile.com
                      /*                                                  simple-frontend:http (<none>)
                      /.well-known/acme-challenge/ltvlVWEXTup5BqEsztirs   cm-kittenphile-com-gikjk:8089 (<none>)
api.kittenphile.com
                      /*                                                  simple-api:http (<none>)
                      /.well-known/acme-challenge/kd08LK93Fkdf653h9dfjj   cm-kittenphile-com-hgdkd:8090 (<none>)
...

It's also worth noting that when using Google Cloud's Ingress controller (kubernetes.io/ingress.class: "gce"), changes to load balancers might take up to 10 minutes to propagate. cert-manager sets a timeout of 15 minutes on HTTP validations to allow for this.

ref:
https://github.com/jetstack/cert-manager/issues/285

Create a Production Certificate

After you make sure all configurations are correct, just change the Certificate manifest's spec.issuerRef.name to letsencrypt-prod. Also, delete the staging Certificate and TLS Secret.

$ kubectl delete certificate kittenphile-com && \
  kubectl delete secret kittenphile-com-tls

$ kubectl apply -f cert-manager/certificate.yaml

$ kubectl describe certificate kittenphile-com
$ kubectl describe ing simple-project

cert-manager attaches temporarily generated Services to the Ingress to present ACME HTTP-01 challenges for each domain, which changes the configuration of the Ingress. Don't forget that Google Cloud's Ingress controller might take a long time to propagate the settings.
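
Once the production certificate has been issued and the load balancer has picked it up, you can check what is actually being served; this is just a sanity check with openssl against the example domain used above.

$ echo | openssl s_client -connect kittenphile.com:443 -servername kittenphile.com 2>/dev/null | \
  openssl x509 -noout -subject -issuer -dates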

Provision automatically with ingress-shim

As of cert-manager v0.2.4, ingress-shim seems to have some issues; for instance, it cannot detect new domains added after the first issuance. The workaround is to create Certificate manifests manually, in other words, don't use ingress-shim.

$ helm upgrade \
cert-manager \
stable/cert-manager \
--set ingressShim.extraArgs='{--default-issuer-name=letsencrypt-prod,--default-issuer-kind=ClusterIssuer}'

ref:
https://github.com/jetstack/cert-manager/blob/master/docs/user-guides/ingress-shim.md

Migrate from kube-lego

Scale down and make sure kube-lego Pods are no longer running.

$ kubectl scale \
--namespace kube-lego \
--replicas=0 \
deployment kube-lego

$ kubectl get pods --namespace kube-lego

Download a copy of the ACME account private key that was created by kube-lego.

$ kubectl get secret \
--namespace kube-lego \
-o yaml \
--export kube-lego-account > cert-manager/secret.yaml

Change metadata.name to something more relevant to cert-manager.

# cert-manager/secret.yaml
kind: Secret
apiVersion: v1
metadata:
  name: letsencrypt-prod-private-key
type: Opaque
data:
  acme-registration-url: XXX
  tls.key: XXX

Deploy cert-manager's Issuers and Certificates. Make sure your Certificate matches the domains specified in the Ingress.

$ kubectl apply -f cert-manager/secret.yaml && \
  kubectl apply -f cert-manager/issuer.yaml && \
  kubectl apply -f cert-manager/certificate.yaml

ref:
https://github.com/jetstack/cert-manager/blob/master/docs/user-guides/migrating-from-kube-lego.md

Helm: the package manager for Kubernetes

Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources. Think of it like apt for Kubernetes.

ref:
https://github.com/kubernetes/helm

Install

Install the Helm client, helm.

$ brew install kubernetes-helm

ref:
https://docs.helm.sh/using_helm/#installing-helm

Install the Helm server, tiller.

On Kubernetes v1.8+, Role-Based Access Control (RBAC) is enabled by default, so you must create a ServiceAccount for tiller and a ClusterRoleBinding that grants it a ClusterRole.

# helm/rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

$ kubectl apply -f helm/rbac.yaml && \
  helm init --service-account tiller

$ kubectl get all --namespace kube-system

# uninstall tiller
$ helm reset

ref:
https://docs.helm.sh/using_helm/#role-based-access-control
https://docs.helm.sh/using_helm/#installing-tiller

Usage

# show both the client and server version to make sure installation is correct
$ helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}

# install a Chart
$ helm install \
--name cert-manager \
--namespace kube-system \
stable/cert-manager

# install a Chart without RBAC
$ helm install \
--name cert-manager \
--namespace kube-system \
--set rbac.create=false \
stable/cert-manager

# list Releases
$ helm ls
$ helm ls --all cert-manager

# delete a Release
$ helm del --purge cert-manager

ref:
https://docs.helm.sh/helm/#helm

Charts

In my humble opinion, it might be more flexible and realistic to just copy the YAML files out of a Helm chart and maintain them yourself, instead of running helm install.
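
If you go that route, Helm can still render the manifests for you without installing anything into the cluster; a minimal sketch, using the cert-manager chart from above as an example:

# download and unpack a chart into the current directory
$ helm fetch stable/cert-manager --untar

# render the chart's templates to plain YAML, which you can then maintain and kubectl apply yourself
$ helm template cert-manager \
--name cert-manager \
--namespace kube-system \
--set rbac.create=false > cert-manager.yaml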

ref:
https://github.com/kubernetes/charts