Deploy TICK stack on Kubernetes: Telegraf, InfluxDB, Chronograf, and Kapacitor

The TICK stack is a set of open source tools for building a monitoring and alerting system. Its components are:

  • Telegraf: Collect data
  • InfluxDB: Store data
  • Chronograf: Visualize data
  • Kapacitor: Raise alerts

You could consider TICK an alternative to the ELK stack (Elasticsearch, Logstash, and Kibana).

ref:
https://www.influxdata.com/time-series-platform/

InfluxDB

InfluxDB is a time series database optimized for time-stamped or time series data. Time series data could be server metrics, application performance metrics, network data, sensor data, events, clicks, trades in a market, and many other types of analytics data.

ref:
https://www.influxdata.com/time-series-platform/influxdb/

Key Concepts

  • Retention Policy: A database configuration that indicates how long the database keeps data and how many copies of those data are stored in the cluster.
  • Measurement: Conceptually similar to an RDBMS table.
  • Point: Conceptually similar to an RDBMS row.
  • Timestamp: The primary key of every point.
  • Field: Conceptually similar to an RDBMS column (without indexes).
    • A field set means a pair of field key and field value.
  • Tag: Basically an indexed field.
  • Series: Data that share the same measurement, retention policy, and tag set.

All timestamps in InfluxDB are in UTC.
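These concepts map directly onto InfluxDB's line protocol, where each point is written as `measurement,tag_set field_set timestamp`. A minimal sketch in Python (the measurement, tags, and values here are made up for illustration):

```python
# Build one point in InfluxDB line protocol:
#   measurement,tag_key=tag_value field_key=field_value timestamp
# Tags are indexed, fields are not, and the timestamp is in nanoseconds (UTC).
measurement = "cpu_usage"  # hypothetical measurement
tags = {"host": "node-1", "region": "tw"}
fields = {"user": 12.5, "system": 3.1}
timestamp_ns = 1525000000000000000

tag_set = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
field_set = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
line = f"{measurement},{tag_set} {field_set} {timestamp_ns}"
print(line)
# → cpu_usage,host=node-1,region=tw system=3.1,user=12.5 1525000000000000000
```

All points sharing this measurement, retention policy, and tag set belong to the same series.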

ref:
https://docs.influxdata.com/influxdb/v1.5/concepts/key_concepts/
https://docs.influxdata.com/influxdb/v1.5/concepts/glossary/
https://docs.influxdata.com/influxdb/v1.5/concepts/crosswalk/
https://www.jianshu.com/p/a1344ca86e9b

Hardware Guideline

InfluxDB should be run on locally attached SSDs. Any other storage configuration will have lower performance characteristics and may not be able to recover from even small interruptions in normal processing.

ref:
https://docs.influxdata.com/influxdb/v1.5/guides/hardware_sizing/

Deployment

When you create a database, InfluxDB automatically creates a retention policy named autogen which has infinite retention. You may rename that retention policy or disable its auto-creation in the configuration file.
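Retention policies can also be managed by hand with InfluxQL; a sketch (the database and policy names mirror the init.iql used in the deployment, while the 180d duration is purely illustrative):

```sql
CREATE DATABASE "telegraf" WITH DURATION 90d REPLICATION 1 SHARD DURATION 1h NAME "rp_90d"

CREATE RETENTION POLICY "rp_180d" ON "telegraf" DURATION 180d REPLICATION 1 SHARD DURATION 1d
ALTER RETENTION POLICY "rp_90d" ON "telegraf" DEFAULT
SHOW RETENTION POLICIES ON "telegraf"
```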

# tick/influxdb/service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: influxdb
spec:
  clusterIP: None
  selector:
    app: influxdb
  ports:
  - name: api
    port: 8086
    targetPort: api
  - name: admin
    port: 8083
    targetPort: admin
# tick/influxdb/statefulset.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: tick
  name: influxdb
data:
  influxdb.conf: |+
    [meta]
      dir = "/var/lib/influxdb/meta"
      retention-autocreate = false
    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"
  init.iql: |+
    CREATE DATABASE "telegraf" WITH DURATION 90d REPLICATION 1 SHARD DURATION 1h NAME "rp_90d"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: tick
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb
  serviceName: influxdb
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ssd-ext4
      resources:
        requests:
          storage: 250Gi
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      volumes:
      - name: config
        configMap:
          name: influxdb
          items:
            - key: influxdb.conf
              path: influxdb.conf
      - name: init-iql
        configMap:
          name: influxdb
          items:
            - key: init.iql
              path: init.iql
      containers:
      - name: influxdb
        image: influxdb:1.5.2-alpine
        ports:
        - name: api
          containerPort: 8086
        - name: admin
          containerPort: 8083
        volumeMounts:
        - name: data
          mountPath: /var/lib/influxdb
        - name: config
          mountPath: /etc/influxdb
        - name: init-iql
          mountPath: /docker-entrypoint-initdb.d
        resources:
          requests:
            cpu: 500m
            memory: 10G
          limits:
            cpu: 4000m
            memory: 10G
        readinessProbe:
          httpGet:
            path: /ping
            port: api
          initialDelaySeconds: 5
          timeoutSeconds: 5

ref:
https://hub.docker.com/_/influxdb/
https://docs.influxdata.com/influxdb/v1.5/administration/config/

$ kubectl apply -f tick/influxdb/ -R

Usage

$ kubectl get all --namespace tick
$ kubectl exec -i -t influxdb-0 --namespace tick -- influx
SHOW DATABASES
USE telegraf
SHOW MEASUREMENTS
SELECT * FROM diskio LIMIT 2
DROP MEASUREMENT access_log

ref:
https://docs.influxdata.com/influxdb/v1.5/query_language/data_download/
https://docs.influxdata.com/influxdb/v1.5/query_language/data_exploration/
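The same queries can also be issued over InfluxDB's HTTP API on port 8086. A sketch that only assembles the request URL with the standard library (the host assumes the in-cluster Service name; from inside the cluster you would fetch it with e.g. urllib.request.urlopen):

```python
from urllib.parse import urlencode

# Build a request against InfluxDB's /query endpoint (port 8086).
base_url = "http://influxdb.tick.svc.cluster.local:8086/query"
params = {"db": "telegraf", "q": "SELECT * FROM diskio LIMIT 2"}
url = f"{base_url}?{urlencode(params)}"
print(url)
```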

Telegraf

Telegraf is a plugin-driven server agent for collecting metrics and writing them to multiple data stores, including InfluxDB, Elasticsearch, CloudWatch, and so on.

ref:
https://www.influxdata.com/time-series-platform/telegraf/

Deployment

Collect system metrics per node using DaemonSet:

# tick/telegraf/daemonset.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: tick
  name: telegraf-ds
data:
  telegraf.conf: |+
    [agent]
      interval = "10s"
      round_interval = true
      metric_batch_size = 1000
      metric_buffer_limit = 10000
      collection_jitter = "0s"
      flush_interval = "10s"
      flush_jitter = "0s"
      precision = ""
      debug = true
      quiet = false
      logfile = ""
      hostname = "$HOSTNAME"
      omit_hostname = false
    [[outputs.influxdb]]
      urls = ["http://influxdb.tick.svc.cluster.local:8086"]
      database = "telegraf"
      retention_policy = "rp_90d"
      write_consistency = "any"
      timeout = "5s"
      username = ""
      password = ""
      user_agent = "telegraf"
      insecure_skip_verify = false
    [[inputs.cpu]]
      percpu = true
      totalcpu = true
      collect_cpu_time = false
    [[inputs.disk]]
      ignore_fs = ["tmpfs", "devtmpfs"]
    [[inputs.diskio]]
    [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"
      container_names = []
      timeout = "5s"
      perdevice = true
      total = false
    [[inputs.kernel]]
    [[inputs.kubernetes]]
      url = "http://$HOSTNAME:10255"
      bearer_token = "/var/run/secrets/kubernetes.io/serviceaccount/token"
      insecure_skip_verify = true
    [[inputs.mem]]
    [[inputs.processes]]
    [[inputs.swap]]
    [[inputs.system]]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: tick
  name: telegraf-ds
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 3
  selector:
    matchLabels:
      app: telegraf
      type: ds
  template:
    metadata:
      labels:
        app: telegraf
        type: ds
    spec:
      containers:
      - name: telegraf
        image: telegraf:1.5.3-alpine
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: "HOST_PROC"
          value: "/rootfs/proc"
        - name: "HOST_SYS"
          value: "/rootfs/sys"
        volumeMounts:
        - name: sys
          mountPath: /rootfs/sys
          readOnly: true
        - name: proc
          mountPath: /rootfs/proc
          readOnly: true
        - name: docker-socket
          mountPath: /var/run/docker.sock
        - name: varrunutmp
          mountPath: /var/run/utmp
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config
          mountPath: /etc/telegraf
          readOnly: true
        resources:
          requests:
            cpu: 50m
            memory: 500Mi
          limits:
            cpu: 200m
            memory: 500Mi
      volumes:
      - name: sys
        hostPath:
          path: /sys
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: proc
        hostPath:
          path: /proc
      - name: varrunutmp
        hostPath:
          path: /var/run/utmp
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config
        configMap:
          name: telegraf-ds

ref:
https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/docker/README.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/kubernetes/README.md

Collect arbitrary metrics:

# tick/telegraf/deployment.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: tick
  name: telegraf-infra
data:
  telegraf.conf: |+
    [agent]
      interval = "10s"
      round_interval = true
      metric_batch_size = 1000
      metric_buffer_limit = 10000
      collection_jitter = "0s"
      flush_interval = "10s"
      flush_jitter = "0s"
      precision = ""
      debug = true
      quiet = false
      logfile = ""
      hostname = "telegraf-infra"
      omit_hostname = false
    [[outputs.influxdb]]
      urls = ["http://influxdb.tick.svc.cluster.local:8086"]
      database = "telegraf"
      retention_policy = "rp_90d"
      write_consistency = "any"
      timeout = "5s"
      username = ""
      password = ""
      user_agent = "telegraf"
      insecure_skip_verify = false
    [[inputs.http_listener]]
      service_address = ":8186"
    [[inputs.socket_listener]]
      service_address = "udp://:8092"
      data_format = "influx"
    [[inputs.redis]]
      servers = ["tcp://redis-cache.default.svc.cluster.local", "tcp://redis-broker.default.svc.cluster.local"]
    [[inputs.mongodb]]
      servers = ["mongodb://mongodb.default.svc.cluster.local"]
      gather_perdb_stats = true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tick
  name: telegraf-infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: telegraf
      type: infra
  template:
    metadata:
      labels:
        app: telegraf
        type: infra
    spec:
      containers:
      - name: telegraf
        image: telegraf:1.5.3-alpine
        ports:
        - name: udp
          protocol: UDP
          containerPort: 8092
        - name: http
          containerPort: 8186
        volumeMounts:
        - name: config
          mountPath: /etc/telegraf
        resources:
          requests:
            cpu: 50m
            memory: 500Mi
          limits:
            cpu: 500m
            memory: 500Mi
      volumes:
      - name: config
        configMap:
          name: telegraf-infra

ref:
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/exec/README.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/mongodb/README.md
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis/README.md

$ kubectl apply -f tick/telegraf/ -R

Furthermore, you might also want to parse the stdout of all containers; in that case, you could use the logparser input plugin with a DaemonSet:

# telegraf.conf
[agent]
    interval = "1s"
    round_interval = true
    metric_batch_size = 1000
    metric_buffer_limit = 10000
    collection_jitter = "0s"
    flush_interval = "10s"
    flush_jitter = "0s"
    precision = ""
    debug = true
    quiet = false
    logfile = ""
    hostname = "$HOSTNAME"
    omit_hostname = false
[[outputs.file]]
  files = ["stdout"]
[[outputs.influxdb]]
    urls = ["http://influxdb.tick.svc.cluster.local:8086"]
    database = "your_db"
    retention_policy = "rp_90d"
    write_consistency = "any"
    timeout = "5s"
    username = ""
    password = ""
    user_agent = "telegraf"
    insecure_skip_verify = false
    namepass = ["logparser_*"]
[[inputs.logparser]]
    name_override = "logparser_api"
    files = ["/var/log/containers/api*.log"]
    from_beginning = false
    [inputs.logparser.grok]
    measurement = "api_access_log"
    patterns = ["bytes\\} \\[%{DATA:timestamp:ts-ansic}\\] %{WORD:request_method} %{URIPATH:request_path}%{DATA:request_params:drop} =\\\\u003e generated %{NUMBER:response_bytes:int} bytes in %{NUMBER:response_time_ms:int} msecs \\(HTTP/1.1 %{RESPONSE_CODE}"]
[[inputs.logparser]]
    name_override = "logparser_worker"
    files = ["/var/log/containers/worker*.log"]
    from_beginning = false
    [inputs.logparser.grok]
    measurement = "worker_task_log"
    patterns = ['''\[%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"},%{WORD:value1:drop}: %{LOGLEVEL:loglevel:tag}\/MainProcess\] Task %{PROG:task_name:tag}\[%{UUID:task_id:drop}\] %{WORD:execution_status:tag} in %{DURATION:execution_time:duration}''']

ref:
https://github.com/influxdata/telegraf/tree/master/plugins/inputs/logparser
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/logparser/grok/patterns/influx-patterns
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
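If you want to sanity-check a pattern outside Telegraf, a plain regular expression over a sample line is often enough. A sketch, where the sample log line and field names are hypothetical stand-ins loosely shaped like the worker log the grok pattern above targets:

```python
import re

# Hypothetical worker log line: timestamp, log level, task name,
# task id, execution status, and duration.
sample = "[2018-05-01 12:00:00,123: INFO/MainProcess] Task app.tasks.sync[3f2d] succeeded in 1.234s"

pattern = re.compile(
    r"\[(?P<timestamp>[\d\- :]+),\d+: (?P<loglevel>\w+)/MainProcess\] "
    r"Task (?P<task_name>[\w.]+)\[(?P<task_id>[\w-]+)\] "
    r"(?P<status>\w+) in (?P<duration>[\d.]+)s"
)
match = pattern.match(sample)
print(match.groupdict())
```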

Debug telegraf.conf

$ docker run \
-v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf \
-v $PWD/api-1.log:/var/log/containers/api-1.log \
-v $PWD/worker-1.log:/var/log/containers/worker-1.log \
telegraf

ref:
https://grokdebug.herokuapp.com/

Usage

from telegraf.client import TelegrafClient

client = TelegrafClient(host='telegraf.tick.svc.cluster.local', port=8092)
client.metric('some_measurement', {'value_a': 100, 'value_b': 0, 'value_c': True}, tags={'country': 'taiwan'})

ref:
https://github.com/paksu/pytelegraf
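pytelegraf is optional: since the socket_listener input accepts raw line protocol over UDP, any client can write metrics with a plain socket. A sketch (the measurement and values are made up, and 127.0.0.1 stands in for whatever Service fronts telegraf-infra in your cluster):

```python
import socket

# Encode one hypothetical measurement in InfluxDB line protocol and send it
# to Telegraf's socket_listener input (udp://:8092, data_format = "influx").
line = "some_measurement,country=taiwan value_a=100i,value_b=0i,value_c=true"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(line.encode("utf-8"), ("127.0.0.1", 8092))
sock.close()
```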

Kapacitor

Kapacitor is a real-time streaming data processing engine; most of the time, you would use it to trigger alerts.

ref:
https://www.influxdata.com/time-series-platform/kapacitor/
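As a concrete example, Kapacitor alerts are written in TICKscript. The following sketch alerts on low CPU idle from the telegraf database; the threshold is illustrative, and the .slack() handler assumes a [slack] section is configured in kapacitor.conf:

```
stream
    |from()
        .database('telegraf')
        .retentionPolicy('rp_90d')
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        .slack()
```

You would then register it with something like `kapacitor define cpu_alert -type stream -tick cpu_alert.tick -dbrp telegraf.rp_90d` followed by `kapacitor enable cpu_alert`.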

Deployment

# tick/kapacitor/service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: kapacitor-ss
spec:
  clusterIP: None
  selector:
    app: kapacitor
  ports:
  - name: api
    port: 9092
    targetPort: api
---
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: kapacitor
spec:
  type: ClusterIP
  selector:
    app: kapacitor
  ports:
  - name: api
    port: 9092
    targetPort: api
# tick/kapacitor/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: tick
  name: kapacitor
spec:
  replicas: 1
  serviceName: kapacitor-ss
  selector:
    matchLabels:
      app: kapacitor
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: hdd-ext4
      resources:
        requests:
          storage: 10Gi
  template:
    metadata:
      labels:
        app: kapacitor
    spec:
      containers:
      - name: kapacitor
        image: kapacitor:1.4.1-alpine
        env:
        - name: KAPACITOR_HOSTNAME
          value: kapacitor
        - name: KAPACITOR_INFLUXDB_0_URLS_0
          value: http://influxdb.tick.svc.cluster.local:8086
        ports:
        - name: api
          containerPort: 9092
        volumeMounts:
        - name: data
          mountPath: /var/lib/kapacitor
        resources:
          requests:
            cpu: 50m
            memory: 500Mi
          limits:
            cpu: 500m
            memory: 500Mi

ref:
https://docs.influxdata.com/kapacitor/v1.4/

$ kubectl apply -f tick/kapacitor/ -R

Chronograf

Chronograf is the web UI for the TICK stack.

ref:
https://www.influxdata.com/time-series-platform/chronograf/

Deployment

# tick/chronograf/service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: chronograf-ss
spec:
  clusterIP: None
  selector:
    app: chronograf
---
apiVersion: v1
kind: Service
metadata:
  namespace: tick
  name: chronograf
spec:
  selector:
    app: chronograf
  ports:
  - name: api
    port: 80
    targetPort: api
# tick/chronograf/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: tick
  name: chronograf
spec:
  replicas: 1
  serviceName: chronograf-ss
  selector:
    matchLabels:
      app: chronograf
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: hdd-ext4
      resources:
        requests:
          storage: 10Gi
  template:
    metadata:
      labels:
        app: chronograf
    spec:
      containers:
      - name: chronograf
        image: chronograf:1.4.4.0-alpine
        command: ["chronograf"]
        args: ["--influxdb-url=http://influxdb.tick.svc.cluster.local:8086", "--kapacitor-url=http://kapacitor.tick.svc.cluster.local:9092"]
        ports:
        - name: api
          containerPort: 8888
        livenessProbe:
          httpGet:
            path: /ping
            port: api
        readinessProbe:
          httpGet:
            path: /ping
            port: api
        volumeMounts:
        - name: data
          mountPath: /var/lib/chronograf
        resources:
          requests:
            cpu: 100m
            memory: 1000Mi
          limits:
            cpu: 2000m
            memory: 1000Mi

ref:
https://docs.influxdata.com/chronograf/v1.4/

$ kubectl apply -f tick/chronograf/ -R
$ kubectl port-forward svc/chronograf 8888:80 --namespace tick

ref:
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/