{"id":609,"date":"2020-07-26T03:03:25","date_gmt":"2020-07-25T19:03:25","guid":{"rendered":"https:\/\/vinta.ws\/code\/?p=609"},"modified":"2026-03-17T01:19:52","modified_gmt":"2026-03-16T17:19:52","slug":"the-complete-guide-to-google-kubernetes-engine-gke","status":"publish","type":"post","link":"https:\/\/vinta.ws\/code\/the-complete-guide-to-google-kubernetes-engine-gke.html","title":{"rendered":"The Incomplete Guide to Google Kubernetes Engine"},"content":{"rendered":"<p>Kubernetes is the de facto standard of container orchestration (deploying workloads on distributed systems). Google Kubernetes Engine (GKE) is the managed Kubernetes as a Service provided by Google Cloud Platform.<\/p>\n<p>Currently, GKE is still your best choice compared to other managed Kubernetes services, i.e., Azure Container Service (AKS) and Amazon Elastic Container Service for Kubernetes (EKS).<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/\">https:\/\/kubernetes.io\/<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/\">https:\/\/cloud.google.com\/kubernetes-engine\/<\/a><\/p>\n<p>You could find the sample project on GitHub.<br \/>\n<a href=\"https:\/\/github.com\/vinta\/simple-project-on-k8s\">https:\/\/github.com\/vinta\/simple-project-on-k8s<\/a><\/p>\n<h2>Installation<\/h2>\n<p>Install <code>gcloud<\/code> to create Kubernetes clusters on Google Cloud Platform.<\/p>\n<p>Install <code>kubectl<\/code> to interact with any Kubernetes cluster.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ brew install kubernetes-cli\n# or\n$ gcloud components install kubectl\n$ gcloud components update<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/sdk\/docs\/\">https:\/\/cloud.google.com\/sdk\/docs\/<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/\">https:\/\/kubernetes.io\/docs\/tasks\/tools\/install-kubectl\/<\/a><\/p>\n<p>Some useful tools:<\/p>\n<ul>\n<li><code>fubectl<\/code>: <a 
href=\"https:\/\/github.com\/kubermatic\/fubectl\">https:\/\/github.com\/kubermatic\/fubectl<\/a><\/li>\n<li><code>k9s<\/code>: <a href=\"https:\/\/github.com\/derailed\/k9s\">https:\/\/github.com\/derailed\/k9s<\/a><\/li>\n<li><code>stern<\/code>: <a href=\"https:\/\/github.com\/wercker\/stern\">https:\/\/github.com\/wercker\/stern<\/a><\/li>\n<li><code>zsh-kubectl-prompt<\/code>: <a href=\"https:\/\/github.com\/superbrothers\/zsh-kubectl-prompt\">https:\/\/github.com\/superbrothers\/zsh-kubectl-prompt<\/a><\/li>\n<\/ul>\n<h2>Concepts<\/h2>\n<h3>Nodes<\/h3>\n<ul>\n<li>Cluster: A set of machines, called nodes, that run containerized applications.<\/li>\n<li>Node: A single virtual or physical machine that provides hardware resources.<\/li>\n<li>Edge Node: The node which is exposed to the Internet.<\/li>\n<li>Master Node: The node which is responsible for managing the whole cluster.<\/li>\n<\/ul>\n<h3>Objects<\/h3>\n<ul>\n<li>Pod: A group of tightly related containers. Each pod is like a logical host that has its own IP, hostname, and storage.<\/li>\n<li>PodPreset: A set of pre-defined configurations that can be injected into Pods automatically.<\/li>\n<li>Service: A load balancer of a set of Pods which selected by labels, also called Service Discovery.<\/li>\n<li>Ingress: A revered proxy acts as an entry point to the cluster, which allows domain-based and path-based routing to different Services.<\/li>\n<li>ConfigMap: Key-value configuration data that can be mounted into containers or consumed as environment variables.<\/li>\n<li>Secret: Similar to ConfigMap but for storing sensitive data only.<\/li>\n<li>Volume: An ephemeral file system whose lifetime is the same as the Pod.<\/li>\n<li>PersistentVolume: A persistent file system that can be mounted to the cluster, without being associated with any particular node.<\/li>\n<li>PersistentVolumeClaim: A binding between a Pod and a PersistentVolume.<\/li>\n<li>StorageClass: A storage provisioner which allows users to 
request storage dynamically.<\/li>\n<li>Namespace: The way to partition a single cluster into multiple virtual groups.<\/li>\n<\/ul>\n<h3>Controllers<\/h3>\n<ul>\n<li>ReplicationController: Ensures that a specified number of Pods are always running.<\/li>\n<li>ReplicaSet: The next-generation ReplicationController.<\/li>\n<li>Deployment: The recommended way to deploy stateless Pods.<\/li>\n<li>StatefulSet: Similar to Deployment but provides guarantees about the ordering and unique names of Pods.<\/li>\n<li>DaemonSet: Ensures a copy of a Pod is running on every node.<\/li>\n<li>Job: Creates Pods that run to completion (exit with 0).<\/li>\n<li>CronJob: A Job that runs at a specific time or on a regular schedule.<\/li>\n<li>HorizontalPodAutoscaler: Automatically scales the number of Pods based on CPU and memory utilization or custom metric targets.<\/li>\n<\/ul>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/\">https:\/\/kubernetes.io\/docs\/concepts\/<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/glossary\/?all=true\">https:\/\/kubernetes.io\/docs\/reference\/glossary\/?all=true<\/a><\/p>\n<h2>Set Up Google Cloud Accounts<\/h2>\n<p>Make sure you use the right Google Cloud Platform account.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ gcloud init\n# or\n$ gcloud config configurations list\n$ gcloud config configurations activate default\n$ gcloud config set project simple-project-198818\n$ gcloud config set compute\/region asia-east1\n$ gcloud config set compute\/zone asia-east1-a\n$ gcloud config list<\/code><\/pre>\n<h2>Create Clusters<\/h2>\n<p>Create a regional cluster in the <code>asia-east1<\/code> region which has 1 node in each of the <code>asia-east1<\/code> zones using <code>--region=asia-east1 --num-nodes=1<\/code>. 
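To see exactly which zones that covers, you can list them first (a plain <code>gcloud<\/code> listing command; a regional cluster with <code>--num-nodes=1<\/code> will place one node in each zone shown):<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ gcloud compute zones list --filter=\"region:asia-east1\"<\/code><\/pre>\n<p>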
By default, a cluster only creates its cluster master and nodes in a single compute zone.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\"># show available OSs and versions of Kubernetes\n$ gcloud container get-server-config\n\n# show available CPU platforms in the desired zone\n$ gcloud compute zones describe asia-east1-a\navailableCpuPlatforms:\n- Intel Skylake\n- Intel Broadwell\n- Intel Haswell\n- Intel Ivy Bridge\n\n$ gcloud container clusters create demo \\\n--cluster-version=1.11.6-gke.6 \\\n--node-version=1.11.6-gke.6 \\\n--scopes=gke-default,cloud-platform,storage-full,compute-ro,pubsub,https:\/\/www.googleapis.com\/auth\/cloud_debugger \\\n--region=asia-east1 \\\n--num-nodes=1 \\\n--enable-autoscaling --min-nodes=1 --max-nodes=10 \\\n--maintenance-window=20:00 \\\n--machine-type=n1-standard-4 \\\n--min-cpu-platform=\"Intel Skylake\" \\\n--enable-ip-alias \\\n--create-subnetwork=\"\" \\\n--image-type=UBUNTU \\\n--node-labels=custom.kubernetes.io\/fs-type=xfs\n\n$ gcloud container clusters describe demo --region=asia-east1\n\n$ kubectl version\nClient Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.3\", GitCommit:\"721bfa751924da8d1680787490c54b9179b1fed0\", GitTreeState:\"clean\", BuildDate:\"2019-02-04T04:48:55Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"darwin\/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"11+\", GitVersion:\"v1.11.5-gke.5\", GitCommit:\"9aba9c1237d9d2347bef28652b93b1cba3aca6d8\", GitTreeState:\"clean\", BuildDate:\"2018-12-11T02:36:50Z\", GoVersion:\"go1.10.3b4\", Compiler:\"gc\", Platform:\"linux\/amd64\"}\n\n$ kubectl get nodes -o wide<\/code><\/pre>\n<p>You can only get a regional cluster by creating a whole new cluster; Google currently won't allow you to turn an existing cluster into a regional one.<\/p>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/container\/clusters\/create\">https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/container\/clusters\/create<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/compute\/docs\/machine-types\">https:\/\/cloud.google.com\/compute\/docs\/machine-types<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/regional-clusters\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/regional-clusters<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/min-cpu-platform\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/min-cpu-platform<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/alias-ips\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/alias-ips<\/a><\/p>\n<p>Google Kubernetes Engine clusters running Kubernetes version 1.8+ enable Role-Based Access Control (RBAC) by default. Therefore, you must explicitly provide <code>--enable-legacy-authorization<\/code> option to disable RBAC.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/role-based-access-control\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/role-based-access-control<\/a><\/p>\n<p>Delete the cluster. 
After you delete the cluster, you might also need to <strong>manually<\/strong> delete persistent disks (under Compute Engine), load balancers (under Network services), and static IPs (under VPC network) that belong to the cluster in the Google Cloud Platform Console.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ gcloud container clusters delete demo --region=asia-east1<\/code><\/pre>\n<h2>Create Node Pools<\/h2>\n<p>Create a node pool of preemptible VMs, which are much cheaper than regular instances, using <code>--preemptible<\/code>.<\/p>\n<p>You might receive a <code>The connection to the server x.x.x.x was refused - did you specify the right host or port?<\/code> error while the cluster is upgrading, which includes while new node pools are being added.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ gcloud container node-pools create n1-standard-4-pre \\\n--cluster=demo \\\n--node-version=1.11.6-gke.6 \\\n--scopes=gke-default,storage-full,compute-ro,pubsub,https:\/\/www.googleapis.com\/auth\/cloud_debugger \\\n--region=asia-east1 \\\n--num-nodes=1 \\\n--enable-autoscaling --min-nodes=1 --max-nodes=10 \\\n--machine-type=n1-standard-4 \\\n--min-cpu-platform=\"Intel Skylake\" \\\n--node-labels=custom.kubernetes.io\/scopes-storage-full=true \\\n--enable-autorepair \\\n--preemptible\n\n$ gcloud container node-pools list --cluster=demo --region=asia-east1\n\n$ gcloud container operations list<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/container\/node-pools\/create\">https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/container\/node-pools\/create<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/preemptible-vm\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/preemptible-vm<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/compute\/docs\/regions-zones\/\">https:\/\/cloud.google.com\/compute\/docs\/regions-zones\/<\/a><\/p>\n<h2>Build Docker Images<\/h2>\n<p>You could use 
Google Cloud Build or any Continuous Integration (CI) service to automatically build Docker images and push them to Google Container Registry.<\/p>\n<p>Furthermore, you need to tag your Docker images appropriately with the registry name format: <code>region_name.gcr.io\/your_project_id\/your_image_name:version<\/code>.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/container-builder\/\">https:\/\/cloud.google.com\/container-builder\/<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/container-registry\/\">https:\/\/cloud.google.com\/container-registry\/<\/a><\/p>\n<p>An example of <code>cloudbuild.yaml<\/code>:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">substitutions:\n  _REPO_NAME: simple-api\nsteps:\n- id: pull-image\n  name: gcr.io\/cloud-builders\/docker\n  entrypoint: \"\/bin\/sh\"\n  args: [\n    \"-c\",\n    \"docker pull asia.gcr.io\/$PROJECT_ID\/$_REPO_NAME:$BRANCH_NAME || true\"\n  ]\n  waitFor: [\n    \"-\"\n  ]\n- id: build-image\n  name: gcr.io\/cloud-builders\/docker\n  args: [\n    \"build\",\n    \"--cache-from\", \"asia.gcr.io\/$PROJECT_ID\/$_REPO_NAME:$BRANCH_NAME\",\n    \"--label\", \"git.commit=$SHORT_SHA\",\n    \"--label\", \"git.branch=$BRANCH_NAME\",\n    \"--label\", \"ci.build-id=$BUILD_ID\",\n    \"-t\", \"asia.gcr.io\/$PROJECT_ID\/$_REPO_NAME:$SHORT_SHA\",\n    \"simple-api\/\"\n  ]\n  waitFor: [\n    \"pull-image\",\n  ]\nimages:\n  - asia.gcr.io\/$PROJECT_ID\/$_REPO_NAME:$SHORT_SHA<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/container-builder\/docs\/build-config\">https:\/\/cloud.google.com\/container-builder\/docs\/build-config<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/container-builder\/docs\/create-custom-build-steps\">https:\/\/cloud.google.com\/container-builder\/docs\/create-custom-build-steps<\/a><\/p>\n<p>Of course, you could also manually push Docker images to Google Container Registry.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ 
gcloud auth configure-docker &amp;&amp; \\\ngcloud config set project simple-project-198818 &amp;&amp; \\\nexport PROJECT_ID=\"$(gcloud config get-value project -q)\"\n\n$ docker build --rm -t asia.gcr.io\/${PROJECT_ID}\/simple-api:v1 simple-api\/\n\n$ gcloud docker -- push asia.gcr.io\/${PROJECT_ID}\/simple-api:v1\n\n$ gcloud container images list --repository=asia.gcr.io\/${PROJECT_ID}<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/container-registry\/docs\/pushing-and-pulling\">https:\/\/cloud.google.com\/container-registry\/docs\/pushing-and-pulling<\/a><\/p>\n<p>Moreover, you should always adopt multi-stage builds for your Dockerfiles.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-Dockerfile\">FROM python:3.6.8-alpine3.7 AS builder\n\nENV PATH=$PATH:\/root\/.local\/bin\nENV PIP_DISABLE_PIP_VERSION_CHECK=1\n\nWORKDIR \/usr\/src\/app\/\n\nRUN apk add --no-cache --virtual .build-deps \\\n        build-base \\\n        linux-headers \\\n        openssl-dev \\\n        zlib-dev\n\nCOPY requirements.txt .\n\nRUN pip install --user -r requirements.txt &amp;&amp; \\\n    find $(python -m site --user-base) -type f -name \"*.pyc\" -delete &amp;&amp; \\\n    find $(python -m site --user-base) -type f -name \"*.pyo\" -delete &amp;&amp; \\\n    find $(python -m site --user-base) -type d -name \"__pycache__\" -delete\n\n###\n\nFROM python:3.6.8-alpine3.7\n\nENV PATH=$PATH:\/root\/.local\/bin\nENV FLASK_APP=app.py\n\nWORKDIR \/usr\/src\/app\/\n\nRUN apk add --no-cache --virtual .run-deps \\\n    ca-certificates \\\n    curl \\\n    openssl \\\n    zlib\n\nCOPY --from=builder \/root\/.local\/ \/root\/.local\/\nCOPY . 
.\n\nEXPOSE 8000\n\nCMD [\"uwsgi\", \"--ini\", \"config\/uwsgi.ini\", \"--single-interpreter\", \"--enable-threads\", \"--http\", \":8000\"]<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/medium.com\/@tonistiigi\/advanced-multi-stage-build-patterns-6f741b852fae\">https:\/\/medium.com\/@tonistiigi\/advanced-multi-stage-build-patterns-6f741b852fae<\/a><\/p>\n<h2>Create Pods<\/h2>\n<p>No. You should never create Pods directly; these are so-called naked Pods. Use a Deployment instead.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-overview\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-overview\/<\/a><\/p>\n<p>Pods have the following lifecycle phases:<\/p>\n<ul>\n<li>Pending<\/li>\n<li>Running<\/li>\n<li>Succeeded<\/li>\n<li>Failed<\/li>\n<li>Unknown<\/li>\n<\/ul>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/pod-lifecycle\/<\/a><\/p>\n<h2>Inspect Pods<\/h2>\n<p>Show information about Pods.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl get all\n\n$ kubectl get deploy\n\n$ kubectl get pods\n$ kubectl get pods -l app=simple-api\n\n$ kubectl describe pod simple-api-5bbf4dd4f9-8b4c9\n$ kubectl get pod simple-api-5bbf4dd4f9-8b4c9 -o yaml<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#describe\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#describe<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#get\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#get<\/a><\/p>\n<p>Execute a command in a container.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl exec -i -t simple-api-5bbf4dd4f9-8b4c9 -- sh<\/code><\/pre>\n<p>ref:<br 
\/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#exec\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#exec<\/a><\/p>\n<p>Tail Pod logs. It is also recommended to use <code>kubetail<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl logs simple-api-5bbf4dd4f9-8b4c9 -f\n$ kubectl logs deploy\/simple-api -f\n$ kubectl logs statefulset\/mongodb-rs0 -f\n\n$ kubetail simple-api\n$ kubetail simple-worker\n$ kubetail mongodb-rs0 -c db<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#logs\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#logs<\/a><br \/>\n<a href=\"https:\/\/github.com\/johanhaleby\/kubetail\">https:\/\/github.com\/johanhaleby\/kubetail<\/a><\/p>\n<p>List all Pods on a certain node.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl describe node gke-demo-default-pool-fb33ac26-frkw\n...\nNon-terminated Pods:         (7 in total)\n  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits\n  ---------                  ----                                              ------------  ----------  ---------------  -------------\n  default                    mongodb-rs0-1                                     2100m (53%)   4 (102%)    4G (30%)         4G (30%)\n  default                    simple-api-84554476df-w5b5g                       500m (25%)    1 (51%)     1G (16%)         1G (16%)\n  default                    simple-worker-6495b6b74b-rqplv                    500m (25%)    1 (51%)     1G (16%)         1G (16%)\n  kube-system                fluentd-gcp-v3.0.0-848nq                          100m (2%)     0 (0%)      200Mi (1%)       300Mi (2%)\n  kube-system                heapster-v1.5.3-6447d67f78-7psb2                  138m (3%)     138m (3%)  
 301856Ki (2%)    301856Ki (2%)\n  kube-system                kube-dns-788979dc8f-5zvfk                         260m (6%)     0 (0%)      110Mi (0%)       170Mi (1%)\n  kube-system                kube-proxy-gke-demo-default-pool-3c058fcf-x7cv    100m (2%)     0 (0%)      0 (0%)           0 (0%)\n...\n\n$ kubectl get pods --all-namespaces -o wide --sort-by=\"{.spec.nodeName}\"<\/code><\/pre>\n<p>Check resource usage.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl top pods\n$ kubectl top nodes<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#top\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#top<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/\">https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/<\/a><\/p>\n<p>Restart Pods.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\"># you could simply kill Pods which would restart automatically if your Pods are managed by any Deployment\n$ kubectl delete pods -l app=simple-worker\n\n# you could replace a resource by providing a manifest\n$ kubectl replace --force -f simple-api\/<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/stackoverflow.com\/questions\/40259178\/how-to-restart-kubernetes-pods\">https:\/\/stackoverflow.com\/questions\/40259178\/how-to-restart-kubernetes-pods<\/a><\/p>\n<p>Completely delete resources.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl delete -f simple-api\/ -R\n$ kubectl delete deploy simple-api\n$ kubectl delete deploy -l app=simple,role=worker\n\n# delete a Pod forcefully\n$ kubectl delete pod simple-api-668d465985-886h5 --grace-period=0 --force\n$ kubectl delete deploy simple-api --grace-period=0 --force\n\n# delete all resources under a namespace\n$ kubectl delete daemonsets,deployments,services,statefulset,pvc,pv --all --namespace 
tick<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#delete\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#delete<\/a><\/p>\n<h2>Create ConfigMaps<\/h2>\n<p>Create an environment-variable-like ConfigMap.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: ConfigMap\napiVersion: v1\nmetadata:\n  name: simple-api\ndata:\n  FLASK_ENV: production\n  MONGODB_URL: mongodb:\/\/mongodb-rs0-0.mongodb-rs0.default.svc.cluster.local,mongodb-rs0-1.mongodb-rs0.default.svc.cluster.local,mongodb-rs0-2.mongodb-rs0.default.svc.cluster.local\/demo?readPreference=secondaryPreferred&amp;maxPoolSize=10\n  CACHE_URL: redis:\/\/redis-cache.default.svc.cluster.local\/0\n  CELERY_BROKER_URL: redis:\/\/redis-broker.default.svc.cluster.local\/0\n  CELERY_RESULT_BACKEND: redis:\/\/redis-broker.default.svc.cluster.local\/1<\/code><\/pre>\n<p>Load environment variables from a ConfigMap:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Deployment\napiVersion: apps\/v1\nmetadata:\n  name: simple-api\n  labels:\n    app: simple-api\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: simple-api\n  template:\n    metadata:\n      labels:\n        app: simple-api\n    spec:\n      containers:\n      - name: simple-api\n        image: asia.gcr.io\/simple-project-198818\/simple-api:4fc4199\n        command: [\"uwsgi\", \"--ini\", \"config\/uwsgi.ini\", \"--single-interpreter\", \"--enable-threads\", \"--http\", \":8000\"]\n        envFrom:\n        - configMapRef:\n            name: simple-api\n        ports:\n        - containerPort: 8000<\/code><\/pre>\n<p>Create a file-like ConfigMap.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: ConfigMap\napiVersion: v1\nmetadata:\n  name: redis-cache\ndata:\n  redis.conf: |-\n    maxmemory-policy allkeys-lfu\n    appendonly no\n    save \"\"<\/code><\/pre>\n<p>Mount files from a 
ConfigMap:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Deployment\napiVersion: apps\/v1\nmetadata:\n  name: redis-cache\n  labels:\n    app: redis-cache\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: redis-cache\n  template:\n    metadata:\n      labels:\n        app: redis-cache\n    spec:\n      volumes:\n      - name: config\n        configMap:\n          name: redis-cache\n      containers:\n      - name: redis\n        image: redis:4.0.10-alpine\n        command: [\"redis-server\"]\n        args: [\"\/etc\/redis\/redis.conf\", \"--loglevel\", \"verbose\", \"--maxmemory\", \"1g\"]\n        volumeMounts:\n        - name: config\n          mountPath: \/etc\/redis\n        ports:\n        - containerPort: 6379<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-pod-configmap\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-pod-configmap\/<\/a><\/p>\n<p>Only mount a single file with <code>subPath<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Deployment\napiVersion: apps\/v1\nmetadata:\n  name: redis-cache\n  labels:\n    app: redis-cache\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: redis-cache\n  template:\n    metadata:\n      labels:\n        app: redis-cache\n    spec:\n      volumes:\n      - name: config\n        configMap:\n          name: redis-cache\n      containers:\n      - name: redis\n        image: redis:4.0.10-alpine\n        command: [\"redis-server\"]\n        args: [\"\/etc\/redis\/redis.conf\", \"--loglevel\", \"verbose\", \"--maxmemory\", \"1g\"]\n        volumeMounts:\n        - name: config\n          mountPath: \/etc\/redis\/redis.conf\n          subPath: redis.conf\n        ports:\n        - containerPort: 6379<\/code><\/pre>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/github.com\/kubernetes\/kubernetes\/issues\/44815#issuecomment-297077509\">https:\/\/github.com\/kubernetes\/kubernetes\/issues\/44815#issuecomment-297077509<\/a><\/p>\n<p>It is worth noting that changing ConfigMap or Secret won't trigger re-deploying Deployment. A workaround might be changing the name of ConfigMap every time you change the content of ConfigMap. If you mount ConfigMap as environment variables, you must trigger a re-deployment explicitly.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/issues\/22368\">https:\/\/github.com\/kubernetes\/kubernetes\/issues\/22368<\/a><\/p>\n<h2>Create Secrets<\/h2>\n<p>First of all, Secrets are only base64 encoded, <strong>not encrypted<\/strong>.<\/p>\n<p>Encode and decode a Secret value.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ echo -n 'YOUR_SECRET_KEY' | base64\nWU9VUl9TRUNSRVRfS0VZ\n\n$ echo 'WU9VUl9TRUNSRVRfS0VZ' | base64 --decode\nYOUR_SECRET_KEY<\/code><\/pre>\n<p>Create an environment-variable-like Secret.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Secret\napiVersion: v1\nmetadata:\n  name: simple-api\ndata:\n  SECRET_KEY: WU9VUl9TRUNSRVRfS0VZ<\/code><\/pre>\n<p>Export data (base64-encoded) from a Secret.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl get secret simple-project-com --export=true -o yaml<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/<\/a><\/p>\n<h2>Create Deployments With Probes<\/h2>\n<p>Deployments are designed for stateless (or nearly stateless) services. 
A Deployment controls ReplicaSets, and a ReplicaSet controls Pods.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/<\/a><\/p>\n<p><code>livenessProbe<\/code> can be used to determine when an application must be restarted by Kubernetes, while <code>readinessProbe<\/code> can be used to determine when a container is ready to accept traffic.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-probes\/\">https:\/\/kubernetes.io\/docs\/tasks\/configure-pod-container\/configure-liveness-readiness-probes\/<\/a><\/p>\n<p>It is also a best practice to always specify resource requests and limits: <code>resources.requests<\/code> and <code>resources.limits<\/code>.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-compute-resources-container\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/manage-compute-resources-container\/<\/a><\/p>\n<p>Create a Deployment with probes.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Deployment\napiVersion: apps\/v1\nmetadata:\n  name: simple-api\n  labels:\n    app: simple-api\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: simple-api\n  template:\n    metadata:\n      labels:\n        app: simple-api\n    spec:\n      containers:\n      - name: simple-api\n        image: asia.gcr.io\/simple-project-198818\/simple-api:4fc4199\n        command: [\"uwsgi\", \"--ini\", \"config\/uwsgi.ini\", \"--single-interpreter\", \"--enable-threads\", \"--http\", \":8000\"]\n        envFrom:\n        - configMapRef:\n            name: simple-api\n        ports:\n        - containerPort: 8000\n        livenessProbe:\n          exec:\n            command: [\"curl\", \"-fsS\", \"-m\", \"0.1\", \"-H\", \"User-Agent: KubernetesHealthCheck\/1.0\", 
\"http:\/\/127.0.0.1:8000\/health\"]\n          initialDelaySeconds: 5\n          periodSeconds: 1\n          successThreshold: 1\n          failureThreshold: 5\n        readinessProbe:\n          exec:\n            command: [\"curl\", \"-fsS\", \"-m\", \"0.1\", \"-H\", \"User-Agent: KubernetesHealthCheck\/1.0\", \"http:\/\/127.0.0.1:8000\/health\"]\n          initialDelaySeconds: 3\n          periodSeconds: 1\n          successThreshold: 1\n          failureThreshold: 3\n        resources:\n          requests:\n            cpu: 500m\n            memory: 1G\n          limits:\n            cpu: 1000m\n            memory: 1G<\/code><\/pre>\n<p>Create another Deployment of Celery workers.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Deployment\napiVersion: apps\/v1\nmetadata:\n  name: simple-worker\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: simple-worker\n  template:\n    metadata:\n      labels:\n        app: simple-worker\n    spec:\n      terminationGracePeriodSeconds: 30\n      containers:\n      - name: simple-worker\n        image: asia.gcr.io\/simple-project-198818\/simple-api:4fc4199\n        command: [\"celery\", \"-A\", \"app:celery\", \"worker\", \"--without-gossip\", \"-Ofair\", \"-l\", \"info\"]\n        envFrom:\n        - configMapRef:\n            name: simple-api\n        readinessProbe:\n          exec:\n            command: [\"sh\", \"-c\", \"celery inspect -q -A app:celery -d celery@$(hostname) --timeout 10 ping\"]\n          initialDelaySeconds: 15\n          periodSeconds: 15\n          timeoutSeconds: 10\n          successThreshold: 1\n          failureThreshold: 3\n        resources:\n          requests:\n            cpu: 500m\n            memory: 1G\n          limits:\n            cpu: 1000m\n            memory: 1G<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f simple-api\/ -R\n$ kubectl get pods<\/code><\/pre>\n<p>The minimum value of 
<code>timeoutSeconds<\/code> is <code>1<\/code>, so you might need to use <code>exec.command<\/code> to run arbitrary shell commands with custom timeout settings.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloudplatform.googleblog.com\/2018\/05\/Kubernetes-best-practices-Setting-up-health-checks-with-readiness-and-liveness-probes.html\">https:\/\/cloudplatform.googleblog.com\/2018\/05\/Kubernetes-best-practices-Setting-up-health-checks-with-readiness-and-liveness-probes.html<\/a><\/p>\n<h2>Create Deployments With InitContainers<\/h2>\n<p>If multiple Init Containers are specified for a Pod, those Containers are run one at a time in sequential order. Each must succeed before the next can run. When all of the Init Containers have run to completion, Kubernetes initializes regular containers as usual.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Service\napiVersion: v1\nmetadata:\n  name: gcs-proxy-media-simple-project-com\nspec:\n  type: NodePort\n  selector:\n    app: gcs-proxy-media-simple-project-com\n  ports:\n    - name: http\n      port: 80\n      targetPort: 80\n---\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  name: google-cloud-storage-proxy\ndata:\n  nginx.conf: |-\n    worker_processes auto;\n\n    events {\n      worker_connections 1024;\n    }\n\n    http {\n      include mime.types;\n      default_type application\/octet-stream;\n\n      server {\n        listen 80;\n\n        if ( $http_user_agent ~* (GoogleHC|KubernetesHealthCheck) ) {\n          return 200;\n        }\n\n        root \/usr\/share\/nginx\/html;\n        open_file_cache max=10000 inactive=10m;\n        open_file_cache_valid 1m;\n        open_file_cache_min_uses 1;\n        open_file_cache_errors on;\n\n        include \/etc\/nginx\/conf.d\/*.conf;\n      }\n    }\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: gcs-proxy-media-simple-project-com\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: gcs-proxy-media-simple-project-com\n  template:\n    metadata:\n      labels:\n        
app: gcs-proxy-media-simple-project-com\n    spec:\n      volumes:\n      - name: nginx-config\n        configMap:\n          name: google-cloud-storage-proxy\n      - name: nginx-config-extra\n        emptyDir: {}\n      initContainers:\n      - name: create-robots-txt\n        image: busybox\n        command: [\"sh\", \"-c\"]\n        args:\n        - |\n            set -euo pipefail\n            cat &lt;&lt; 'EOF' &gt; \/etc\/nginx\/conf.d\/robots.txt\n            User-agent: *\n            Disallow: \/\n            EOF\n        volumeMounts:\n        - name: nginx-config-extra\n          mountPath: \/etc\/nginx\/conf.d\/\n      - name: create-nginx-extra-conf\n        image: busybox\n        command: [\"sh\", \"-c\"]\n        args:\n        - |\n            set -euo pipefail\n            cat &lt;&lt; 'EOF' &gt; \/etc\/nginx\/conf.d\/extra.conf\n            location \/robots.txt {\n              alias \/etc\/nginx\/conf.d\/robots.txt;\n            }\n            EOF\n        volumeMounts:\n        - name: nginx-config-extra\n          mountPath: \/etc\/nginx\/conf.d\/\n      containers:\n      - name: http\n        image: swaglive\/openresty:gcsfuse\n        imagePullPolicy: Always\n        args: [\"nginx\", \"-c\", \"\/usr\/local\/openresty\/nginx\/conf\/nginx.conf\", \"-g\", \"daemon off;\"]\n        ports:\n        - containerPort: 80\n        securityContext:\n          privileged: true\n          capabilities:\n            add: [\"CAP_SYS_ADMIN\"]\n        env:\n          - name: GCSFUSE_OPTIONS\n            value: \"--debug_gcs --implicit-dirs --stat-cache-ttl 1s --type-cache-ttl 24h --limit-bytes-per-sec -1 --limit-ops-per-sec -1 -o ro,allow_other\"\n          - name: GOOGLE_CLOUD_STORAGE_BUCKET\n            value: asia.contents.simple-project.com\n        volumeMounts:\n        - name: nginx-config\n          mountPath: \/usr\/local\/openresty\/nginx\/conf\/nginx.conf\n          subPath: nginx.conf\n          readOnly: true\n        - name: 
nginx-config-extra\n          mountPath: \/etc\/nginx\/conf.d\/\n          readOnly: true\n        readinessProbe:\n          httpGet:\n            port: 80\n            path: \/\n            httpHeaders:\n            - name: User-Agent\n              value: \"KubernetesHealthCheck\/1.0\"\n          timeoutSeconds: 1\n          initialDelaySeconds: 5\n          periodSeconds: 5\n          failureThreshold: 1\n          successThreshold: 1\n        resources:\n          requests:\n            cpu: 0m\n            memory: 500Mi\n          limits:\n            cpu: 1000m\n            memory: 500Mi<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl exec -i -t simple-api-5968cfc48d-8g755 -- sh                                                                                  (gke_simple-project-198818_asia-east1_demo\/default)\n&gt; curl http:\/\/gcs-proxy-media-simple-project-com\/robots.txt\nUser-agent: *\nDisallow: \/<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/init-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/init-containers\/<\/a><br \/>\n<a href=\"https:\/\/blog.percy.io\/tuning-nginx-behind-google-cloud-platform-http-s-load-balancer-305982ddb340\">https:\/\/blog.percy.io\/tuning-nginx-behind-google-cloud-platform-http-s-load-balancer-305982ddb340<\/a><\/p>\n<h2>Create Deployments With Canary Deployment<\/h2>\n<p>TODO<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/cluster-administration\/manage-deployment\/#canary-deployments\">https:\/\/kubernetes.io\/docs\/concepts\/cluster-administration\/manage-deployment\/#canary-deployments<\/a><br \/>\n<a href=\"https:\/\/medium.com\/google-cloud\/kubernetes-canary-deployments-for-mere-mortals-13728ce032fe\">https:\/\/medium.com\/google-cloud\/kubernetes-canary-deployments-for-mere-mortals-13728ce032fe<\/a><\/p>\n<h2>Rollback A Deployment<\/h2>\n<p>Yes, you could publish a 
deployment with <code>kubectl apply --record<\/code> and roll it back with <code>kubectl rollout undo<\/code>. However, the simplest way might be to just <code>git checkout<\/code> the previous commit and deploy again with <code>kubectl apply<\/code>.<\/p>\n<p>The formal way.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f simple-api\/ -R --record\n$ kubectl rollout history deploy\/simple-api\n$ kubectl rollout undo deploy\/simple-api --to-revision=2<\/code><\/pre>\n<p>The git way.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ git checkout b7ed8d5\n$ kubectl apply -f simple-api\/ -R\n$ kubectl get pods<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#rolling-back-a-deployment\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#rolling-back-a-deployment<\/a><\/p>\n<h2>Scale A Deployment<\/h2>\n<p>Simply increase the number of <code>spec.replicas<\/code> and deploy again.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f simple-api\/ -R\n# or\n$ kubectl scale --replicas=10 deploy\/simple-api\n\n$ kubectl get pods<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#scale\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#scale<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#scaling-a-deployment\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#scaling-a-deployment<\/a><\/p>\n<h2>Create HorizontalPodAutoscalers (HPA)<\/h2>\n<p>The Horizontal Pod Autoscaler automatically scales the number of Pods in a Deployment based on observed CPU utilization, memory usage, or custom metrics. 
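An HPA can also be created imperatively with `kubectl autoscale`; a sketch using the same replica bounds and CPU target as the manifests that follow (the memory metric cannot be expressed this way):

```console
$ kubectl autoscale deployment simple-api --min=2 --max=20 --cpu-percent=80
```

The declarative manifest is still preferable for anything you want to keep in version control.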
Note that HPA applies to scalable controllers such as Deployments, ReplicaSets, and StatefulSets, but not to DaemonSets.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: HorizontalPodAutoscaler\napiVersion: autoscaling\/v2beta1\nmetadata:\n  name: simple-api\nspec:\n  scaleTargetRef:\n    apiVersion: apps\/v1\n    kind: Deployment\n    name: simple-api\n  minReplicas: 2\n  maxReplicas: 20\n  metrics:\n  - type: Resource\n    resource:\n      name: cpu\n      targetAverageUtilization: 80\n  - type: Resource\n    resource:\n      name: memory\n      targetAverageValue: 800M\n---\nkind: HorizontalPodAutoscaler\napiVersion: autoscaling\/v2beta1\nmetadata:\n  name: simple-worker\nspec:\n  scaleTargetRef:\n    apiVersion: apps\/v1\n    kind: Deployment\n    name: simple-worker\n  minReplicas: 2\n  maxReplicas: 10\n  metrics:\n  - type: Resource\n    resource:\n      name: cpu\n      targetAverageUtilization: 80\n  - type: Resource\n    resource:\n      name: memory\n      targetAverageValue: 500M<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f simple-api\/hpa.yaml\n\n$ kubectl get hpa --watch\nNAME            REFERENCE                  TARGETS                   MINPODS   MAXPODS   REPLICAS   AGE\nsimple-api      Deployment\/simple-api      18685952\/800M, 4%\/80%     2         20        3          10m\nsimple-worker   Deployment\/simple-worker   122834944\/500M, 11%\/80%   2         10        3          10m<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale\/\">https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale\/<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale-walkthrough\/\">https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale-walkthrough\/<\/a><\/p>\n<p>You could run some load testing.<\/p>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/medium.com\/@jonbcampos\/kubernetes-horizontal-pod-scaling-190e95c258f5\">https:\/\/medium.com\/@jonbcampos\/kubernetes-horizontal-pod-scaling-190e95c258f5<\/a><\/p>\n<p>There is also Cluster Autoscaler in Google Kubernetes Engine.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ gcloud container clusters update demo \n--enable-autoscaling --min-nodes=1 --max-nodes=10 \n--node-pool=default-pool<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/cluster-autoscaler\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/cluster-autoscaler<\/a><\/p>\n<h2>Create VerticalPodsAutoscalers (VPA)<\/h2>\n<p>TODO<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/medium.com\/@Mohamed.ahmed\/kubernetes-autoscaling-101-cluster-autoscaler-horizontal-pod-autoscaler-and-vertical-pod-2a441d9ad231\">https:\/\/medium.com\/@Mohamed.ahmed\/kubernetes-autoscaling-101-cluster-autoscaler-horizontal-pod-autoscaler-and-vertical-pod-2a441d9ad231<\/a><\/p>\n<h2>Create PodDisruptionBudget (PDB)<\/h2>\n<ul>\n<li>Voluntary disruptions: actions initiated by application owners or admins.<\/li>\n<li>Involuntary disruptions: unavoidable cases like hardware failures or system software error.<\/li>\n<\/ul>\n<p>PodDisruptionBudgets are only accounted for with voluntary disruptions, something like a hardware failure will not take PodDisruptionBudget into account. 
PDBs cannot prevent involuntary disruptions from occurring, but they do count against the budget.<\/p>\n<p>Create a PodDisruptionBudget for a stateless application.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: PodDisruptionBudget\napiVersion: policy\/v1beta1\nmetadata:\n  name: simple-api\nspec:\n  minAvailable: 90%\n  selector:\n    matchLabels:\n      app: simple-api<\/code><\/pre>\n<p>Create a PodDisruptionBudget for a multiple-instance stateful application.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: PodDisruptionBudget\napiVersion: policy\/v1beta1\nmetadata:\n  name: mongodb-rs0\nspec:\n  minAvailable: 2\n  selector:\n    matchLabels:\n      app: mongodb-rs0<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f simple-api\/pdb.yaml\n$ kubectl apply -f mongodb\/pdb.yaml\n\n$ kubectl get pdb\nNAME          MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE\nmongodb-rs0   2               N\/A               1                     48m\nsimple-api    90%             N\/A               0                     48m<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/disruptions\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/disruptions\/<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/run-application\/configure-pdb\/\">https:\/\/kubernetes.io\/docs\/tasks\/run-application\/configure-pdb\/<\/a><\/p>\n<p>Actually, you could achieve similar functionality using <code>.spec.strategy.rollingUpdate<\/code>.<\/p>\n<ul>\n<li><code>maxUnavailable<\/code>: The maximum number of Pods that can be unavailable during the update process.<\/li>\n<li><code>maxSurge<\/code>: The maximum number of Pods that can be created over the desired number of Pods.<\/li>\n<\/ul>\n<p>These settings make sure that <code>total ready Pods &gt;= total desired Pods - maxUnavailable<\/code> and <code>total Pods &lt;= total desired Pods + 
maxSurge<\/code>.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#writing-a-deployment-spec\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/deployment\/#writing-a-deployment-spec<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/updating-apps\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/updating-apps<\/a><\/p>\n<h2>Create Services<\/h2>\n<p>A Service is basically a load balancer of a set of Pods which are selected by labels. Since you can't rely on a Pod's IP, which changes every time the Pod is created and destroyed, you should always provide a Service as the entry point for your Pods, i.e., your microservice.<\/p>\n<p>Typically, containers you run in the cluster are not accessible from the Internet, because they do not have external IP addresses. You must explicitly expose your application by creating a Service or an Ingress.<\/p>\n<p>The following Service types are available:<\/p>\n<ul>\n<li><code>ClusterIP<\/code>: A virtual IP which is only reachable from within the cluster. 
It is also the default Service type.<\/li>\n<li><code>NodePort<\/code>: It opens a specific port on all Nodes, and any traffic sent to that port on any node is forwarded to the Service.<\/li>\n<li><code>LoadBalancer<\/code>: It builds on <code>NodePort<\/code> by additionally configuring the cloud provider to create an external load balancer.<\/li>\n<li><code>ExternalName<\/code>: It maps the Service to an external CNAME record, e.g., your MySQL RDS on AWS.<\/li>\n<\/ul>\n<p>Create a Service.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Service\napiVersion: v1\nmetadata:\n  name: simple-api\nspec:\n  type: NodePort\n  selector:\n    app: simple-api\n  ports:\n    - name: http\n      port: 80\n      targetPort: 8000<\/code><\/pre>\n<p><code>type: NodePort<\/code> is enough in most cases; <code>spec.selector<\/code> must match the labels defined in the corresponding Deployment, and <code>spec.ports.targetPort<\/code> and <code>spec.ports.protocol<\/code> must match what the containers expose.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f simple-api\/ -R\n\n$ kubectl get svc,endpoints\n\n$ kubespy trace service simple-api\n[ADDED v1\/Service] default\/simple-api\n[ADDED v1\/Endpoints] default\/simple-api\n    Directs traffic to the following live Pods:\n        - [Ready] simple-api-6b4b4c4bfb-g5dln @ 10.28.1.42\n        - [Ready] simple-api-6b4b4c4bfb-h66dg @ 10.28.8.24<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/\">https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/<\/a><br \/>\n<a href=\"https:\/\/medium.com\/google-cloud\/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0\">https:\/\/medium.com\/google-cloud\/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0<\/a><\/p>\n<p>After a Service is created, <code>kube-dns<\/code> creates a corresponding DNS A record named 
<code>your-service.your-namespace.svc.cluster.local<\/code> which resolves to an internal IP in the cluster. In this case: <code>simple-api.default.svc.cluster.local<\/code>. Headless Services (without a cluster IP) are also assigned a DNS A record which has the same form. Unlike normal Services, this A record directly resolves to <strong>a set of IPs of Pods<\/strong> selected by the Service. Clients are expected to consume the set of IPs or use round-robin selection from the set.<\/p>\n<p>You should always prefer the DNS name of a Service over injected environment variables, e.g., <code>FOO_SERVICE_HOST<\/code> and <code>FOO_SERVICE_PORT<\/code>.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dns-pod-service\/\">https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dns-pod-service\/<\/a><\/p>\n<p>For more detail about Kubernetes networking, go to:<br \/>\n<a href=\"https:\/\/github.com\/hackstoic\/kubernetes_practice\/blob\/master\/%E7%BD%91%E7%BB%9C.md\">https:\/\/github.com\/hackstoic\/kubernetes_practice\/blob\/master\/%E7%BD%91%E7%BB%9C.md<\/a><br \/>\n<a href=\"https:\/\/containerops.org\/2017\/01\/30\/kubernetes-services-and-ingress-under-x-ray\/\">https:\/\/containerops.org\/2017\/01\/30\/kubernetes-services-and-ingress-under-x-ray\/<\/a><br \/>\n<a href=\"https:\/\/www.safaribooksonline.com\/library\/view\/kubernetes-up-and\/9781491935668\/ch07.html\">https:\/\/www.safaribooksonline.com\/library\/view\/kubernetes-up-and\/9781491935668\/ch07.html<\/a><\/p>\n<h2>Configure Services With Google Cloud CDN<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: BackendConfig\napiVersion: cloud.google.com\/v1beta1\nmetadata:\n  name: cdn\nspec:\n  cdn:\n    enabled: true\n    cachePolicy:\n      includeHost: false\n      includeProtocol: false\n      includeQueryString: false\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: gcs-proxy-media-simple-project-com\n  annotations:\n    
beta.cloud.google.com\/backend-config: '{\"ports\": {\"http\":\"cdn\"}}'\n    cloud.google.com\/neg: '{\"ingress\": true}'\nspec:\n  selector:\n    app: gcs-proxy-media-simple-project-com\n  ports:\n    - name: http\n      port: 80\n      targetPort: 80<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/backendconfig\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/backendconfig<\/a><\/p>\n<h2>Configure Services With Network Endpoint Groups (NEGs)<\/h2>\n<p>To use container-native load balancing, you must create a cluster with <code>--enable-ip-alias<\/code> flag, and just add an annotation to your Services. However, the load balancer is not created until you create an Ingress for the Service.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Service\napiVersion: v1\nmetadata:\n  name: simple-api\n  annotations:\n    cloud.google.com\/neg: '{\"ingress\": true}'\nspec:\n  selector:\n    app: simple-api\n  ports:\n    - name: http\n      port: 80\n      targetPort: 8000<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/container-native-load-balancing\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/how-to\/container-native-load-balancing<\/a><\/p>\n<h2>Create An Internal Load Balancer<\/h2>\n<p>ref:<br \/>\n<a href=\"https:\/\/medium.com\/@johnjjung\/creating-an-inter-kubernetes-cluster-services-using-an-internal-loadbalancer-137f768bb3fc\">https:\/\/medium.com\/@johnjjung\/creating-an-inter-kubernetes-cluster-services-using-an-internal-loadbalancer-137f768bb3fc<\/a><\/p>\n<h2>Use Port Forwarding<\/h2>\n<p>Access a Service or a Pod on your local machine with port forwarding.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\"># 8080 is the local port and 80 is the remote port\n$ kubectl port-forward svc\/simple-api 8080:80\n\n# port forward to a Pod directly\n$ kubectl port-forward mongo-rs0-0 
27017:27017\n\n$ open http:\/\/127.0.0.1:8080\/<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/port-forward-access-application-cluster\/\">https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/port-forward-access-application-cluster\/<\/a><\/p>\n<h2>Create An Ingress<\/h2>\n<p>Pods in Kubernetes are not reachable from outside the cluster, so you need a way to expose your Pods to the Internet. Even though you could associate Pods with a Service of the right type, i.e., <code>NodePort<\/code> or <code>LoadBalancer<\/code>, the recommended way to expose services is using an Ingress. You can do a lot of different things with an Ingress, and there are many types of Ingress controllers that have different capabilities.<\/p>\n<p>There are some reasons to choose an Ingress over a Service:<\/p>\n<ul>\n<li>A Service is an internal load balancer, while an Ingress is a gateway for external access to Services<\/li>\n<li>A Service is an L4 load balancer, while an Ingress is an L7 load balancer<\/li>\n<li>An Ingress allows domain-based and path-based routing to different Services<\/li>\n<li>It is not efficient to create a cloud provider's load balancer for each Service you want to expose<\/li>\n<\/ul>\n<p>Create an Ingress which is implemented using Google Cloud Load Balancing (L7 HTTP load balancer). 
You should make sure Services exist before creating the Ingress.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Ingress\napiVersion: extensions\/v1beta1\nmetadata:\n  name: simple-project\n  annotations:\n    kubernetes.io\/ingress.class: \"gce\"\n    # kubernetes.io\/tls-acme: \"true\"\n    # ingress.kubernetes.io\/ssl-redirect: \"true\"\nspec:\n  # tls:\n  # - secretName: simple-project-com-tls\n  #   hosts:\n  #   - simple-project.com\n  #   - www.simple-project.com\n  #   - api.simple-project.com\n  rules:\n  - host: simple-project.com\n    http:\n      paths:\n      - path: \/*\n        backend:\n          serviceName: simple-frontend\n          servicePort: 80\n  - host: www.simple-project.com\n    http:\n      paths:\n      - path: \/*\n        backend:\n          serviceName: simple-frontend\n          servicePort: 80\n  - host: api.simple-project.com\n    http:\n      paths:\n      - path: \/*\n        backend:\n          serviceName: simple-api\n          servicePort: 80\n  - host: asia.contents.simple-project.com\n    http:\n      paths:\n      - path: \/*\n        backend:\n          serviceName: gcs-proxy-media-simple-project-com\n          servicePort: 80\n  backend:\n    serviceName: simple-api\n    servicePort: 80<\/code><\/pre>\n<p>It might take several minutes to spin up a Google HTTP load balancer (includes acquiring the public IP), and at least 5 minutes before the GCE API starts healthchecking backends. 
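You can watch the Ingress until the load balancer's public IP appears; a sketch, where the address shown is only a placeholder:

```console
$ kubectl get ing simple-project --watch
NAME             HOSTS                                 ADDRESS        PORTS   AGE
simple-project   simple-project.com,www.simple-p...    203.0.113.10   80      7m
```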
After getting your public IP, you could go to your domain provider and create new DNS records which point to the IP.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f ingress.yaml\n\n$ kubectl describe ing simple-project<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/\">https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/ingress\/<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/create-external-load-balancer\/\">https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/create-external-load-balancer\/<\/a><br \/>\n<a href=\"https:\/\/www.joyfulbikeshedding.com\/blog\/2018-03-26-studying-the-kubernetes-ingress-system.html\">https:\/\/www.joyfulbikeshedding.com\/blog\/2018-03-26-studying-the-kubernetes-ingress-system.html<\/a><\/p>\n<p>To read more about Google Load balancer, go to:<br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/tutorials\/http-balancer\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/tutorials\/http-balancer<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/compute\/docs\/load-balancing\/http\/backend-service\">https:\/\/cloud.google.com\/compute\/docs\/load-balancing\/http\/backend-service<\/a><\/p>\n<h2>Setup The Ingress With TLS Certificates<\/h2>\n<p>To automatically create HTTPS certificates for your domains:<\/p>\n<ul>\n<li><a href=\"https:\/\/vinta.ws\/code\/cert-manager-automatically-provision-tls-certificates-in-kubernetes.html\">https:\/\/vinta.ws\/code\/cert-manager-automatically-provision-tls-certificates-in-kubernetes.html<\/a><\/li>\n<li><a href=\"https:\/\/vinta.ws\/code\/kube-lego-automatically-provision-tls-certificates-in-kubernetes.html\">https:\/\/vinta.ws\/code\/kube-lego-automatically-provision-tls-certificates-in-kubernetes.html<\/a><\/li>\n<\/ul>\n<h2>Create Ingress Controllers<\/h2>\n<p>Kubernetes supports multiple Ingress 
controllers:<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/kubernetes\/ingress-gce\">https:\/\/github.com\/kubernetes\/ingress-gce<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/kubernetes\/ingress-nginx\">https:\/\/github.com\/kubernetes\/ingress-nginx<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/nginxinc\/kubernetes-ingress\">https:\/\/github.com\/nginxinc\/kubernetes-ingress<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/containous\/traefik\/\">https:\/\/github.com\/containous\/traefik\/<\/a><\/li>\n<\/ul>\n<p>ref:<br \/>\n<a href=\"https:\/\/container-solutions.com\/production-ready-ingress-kubernetes\/\">https:\/\/container-solutions.com\/production-ready-ingress-kubernetes\/<\/a><\/p>\n<h2>Create StorageClasses<\/h2>\n<p>StorageClass provides a way to define different available storage types, for instance, ext4 SSD, XFS SSD, CephFS, or NFS. You can then request the type you want in a PersistentVolumeClaim or a StatefulSet.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: StorageClass\napiVersion: storage.k8s.io\/v1\nmetadata:\n  name: ssd\nprovisioner: kubernetes.io\/gce-pd\nparameters:\n  type: pd-ssd\n---\nkind: StorageClass\napiVersion: storage.k8s.io\/v1\nmetadata:\n  name: ssd-xfs\nprovisioner: kubernetes.io\/gce-pd\nparameters:\n  type: pd-ssd\n  fsType: xfs\n---\nkind: StorageClass\napiVersion: storage.k8s.io\/v1\nmetadata:\n  name: ssd-regional\nprovisioner: kubernetes.io\/gce-pd\nparameters:\n  type: pd-ssd\n  zones: asia-east1-a, asia-east1-b, asia-east1-c\n  replication-type: regional-pd<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f storageclass.yaml\n$ kubectl get sc\nNAME                 PROVISIONER            AGE\nssd                  kubernetes.io\/gce-pd   5s\nssd-regional         kubernetes.io\/gce-pd   4s\nssd-xfs              kubernetes.io\/gce-pd   3s\nstandard (default)   kubernetes.io\/gce-pd   1h<\/code><\/pre>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/storage-classes\/#gce\">https:\/\/kubernetes.io\/docs\/concepts\/storage\/storage-classes\/#gce<\/a><\/p>\n<h2>Create PersistentVolumeClaims<\/h2>\n<p>A Volume is just a directory which you could mount into containers and it is shared by all containers inside the same Pod. Also, it has an explicit lifetime - the same as the Pod that encloses it. Sources of Volume are various, they could be a remote Git repo, a file path of the host machine, a folder from a PersistentVolumeClaim, or data from a ConfigMap and a Secret.<\/p>\n<p>PersistentVolumes are used to manage durable storage in a cluster. Unlike Volumes, PersistentVolumes have a lifecycle independent of any individual Pod. On Google Kubernetes Engine, PersistentVolumes are typically backed by Google Compute Engine Persistent Disks. Typically, you don't have to create PersistentVolumes explicitly. In Kubernetes 1.6 and later versions, you only need to create PersistentVolumeClaim, and the corresponding PersistentVolume would be dynamically provisioned with StorageClasses. Pods use PersistentVolumeClaims as Volumes.<\/p>\n<p>Be careful creating a Deployment with PersistentVolumeClaim. In most cases, you might not want multiple replicas of a Deployment to write data into the same PersistentVolumeClaim.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/\">https:\/\/kubernetes.io\/docs\/concepts\/storage\/volumes\/<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/\">https:\/\/kubernetes.io\/docs\/concepts\/storage\/persistent-volumes\/<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/persistent-volumes\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/concepts\/persistent-volumes<\/a><\/p>\n<p>Also, IOPS is based on the disk size and node size. 
You need to claim a large disk if you want high IOPS, even if you will only use a fraction of the space.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/compute\/docs\/disks\/performance\">https:\/\/cloud.google.com\/compute\/docs\/disks\/performance<\/a><\/p>\n<p>On Kubernetes v1.10+, it is possible to create local PersistentVolumes for your StatefulSets. Previously, PersistentVolumes only supported remote volume types, for instance, GCE's Persistent Disk and AWS's EBS. However, using local storage ties your applications to a specific node, making them harder to schedule.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/blog\/2018\/04\/13\/local-persistent-volumes-beta\/\">https:\/\/kubernetes.io\/blog\/2018\/04\/13\/local-persistent-volumes-beta\/<\/a><\/p>\n<h2>Create A StatefulSet<\/h2>\n<p>Pods created under a StatefulSet have a few unique attributes: the name of each Pod is not random; instead, each Pod gets an ordinal name. In addition, Pods are created one at a time instead of all at once, which can help when bootstrapping a stateful system. 
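The ordinal naming and one-at-a-time startup can be observed directly; a sketch assuming the three-replica `mongodb-rs0` StatefulSet created below, where each Pod runs two containers:

```console
$ kubectl get pods -l app=mongodb-rs0
NAME            READY   STATUS              RESTARTS   AGE
mongodb-rs0-0   2/2     Running             0          3m
mongodb-rs0-1   2/2     Running             0          2m
mongodb-rs0-2   0/2     ContainerCreating   0          20s
```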
StatefulSet also deletes\/updates one Pod at a time, in reverse order with respect to its ordinal index, and it waits for each to be completely shut down before deleting the next.<\/p>\n<p>Rule of thumb: once you find that a component needs a PersistentVolume, consider using a StatefulSet.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tutorials\/stateful-application\/basic-stateful-set\/\">https:\/\/kubernetes.io\/docs\/tutorials\/stateful-application\/basic-stateful-set\/<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/statefulset\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/statefulset\/<\/a><br \/>\n<a href=\"https:\/\/akomljen.com\/kubernetes-persistent-volumes-with-deployment-and-statefulset\/\">https:\/\/akomljen.com\/kubernetes-persistent-volumes-with-deployment-and-statefulset\/<\/a><\/p>\n<p>Create a StatefulSet of a three-node MongoDB replica set.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: ClusterRoleBinding\napiVersion: rbac.authorization.k8s.io\/v1\nmetadata:\n  name: default-view\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: view\nsubjects:\n  - kind: ServiceAccount\n    name: default\n    namespace: default\n---\nkind: Service\napiVersion: v1\nmetadata:\n  name: mongodb-rs0\nspec:\n  clusterIP: None\n  selector:\n    app: mongodb-rs0\n  ports:\n    - port: 27017\n      targetPort: 27017\n---\nkind: StatefulSet\napiVersion: apps\/v1\nmetadata:\n  name: mongodb-rs0\nspec:\n  replicas: 3\n  updateStrategy:\n    type: RollingUpdate\n  serviceName: mongodb-rs0\n  selector:\n    matchLabels:\n      app: mongodb-rs0\n  volumeClaimTemplates:\n  - metadata:\n      name: data\n    spec:\n      accessModes: [\"ReadWriteOnce\"]\n      storageClassName: ssd-xfs\n      resources:\n        requests:\n          storage: 100G\n  template:\n    metadata:\n      labels:\n        app: mongodb-rs0\n    spec:\n  
    affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n            - matchExpressions:\n              - key: custom.kubernetes.io\/fs-type\n                operator: In\n                values:\n                - \"xfs\"\n              - key: cloud.google.com\/gke-preemptible\n                operator: NotIn\n                values:\n                - \"true\"\n        podAntiAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            - topologyKey: \"kubernetes.io\/hostname\"\n              labelSelector:\n                matchExpressions:\n                  - key: \"app\"\n                    operator: In\n                    values:\n                    - mongodb-rs0\n      terminationGracePeriodSeconds: 10\n      containers:\n      - name: db\n        image: mongo:3.6.5\n        command: [\"mongod\"]\n        args: [\"--bind_ip_all\", \"--replSet\", \"rs0\"]\n        ports:\n        - containerPort: 27017\n        volumeMounts:\n        - name: data\n          mountPath: \/data\/db\n        readinessProbe:\n          exec:\n            command: [\"mongo\", --eval, \"db.adminCommand('ping')\"]\n        resources:\n          requests:\n            cpu: 2\n            memory: 4G\n          limits:\n            cpu: 4\n            memory: 4G\n      - name: sidecar\n        image: cvallance\/mongo-k8s-sidecar\n        env:\n          - name: MONGO_SIDECAR_POD_LABELS\n            value: app=mongodb-rs0\n          - name: KUBE_NAMESPACE\n            value: default\n          - name: KUBERNETES_MONGO_SERVICE_NAME\n            value: mongodb-rs0<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f storageclass.yaml\n$ kubectl apply -f mongodb\/ -R\n\n$ kubectl get pods\n\n$ kubetail mongodb -c db\n$ kubetail mongodb -c sidecar\n\n$ kubectl scale statefulset mongodb-rs0 --replicas=4<\/code><\/pre>\n<p>The purpose of 
<code>cvallance\/mongo-k8s-sidecar<\/code> is to automatically add new Pods to the replica set and remove Pods from the replica set while you scale the MongoDB StatefulSet up or down.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/cvallance\/mongo-k8s-sidecar\">https:\/\/github.com\/cvallance\/mongo-k8s-sidecar<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/blog\/2017\/01\/running-mongodb-on-kubernetes-with-statefulsets\/\">https:\/\/kubernetes.io\/blog\/2017\/01\/running-mongodb-on-kubernetes-with-statefulsets\/<\/a><br \/>\n<a href=\"https:\/\/medium.com\/@thakur.vaibhav23\/scaling-mongodb-on-kubernetes-32e446c16b82\">https:\/\/medium.com\/@thakur.vaibhav23\/scaling-mongodb-on-kubernetes-32e446c16b82<\/a><\/p>\n<h2>Create A Headless Service For A StatefulSet<\/h2>\n<p>Headless Services (<code>clusterIP: None<\/code>) are just like normal Kubernetes Services, except they don\u2019t do any load balancing for you. For a typical StatefulSet component, for instance, a database with master-slave replication, you don't want Kubernetes to load balance requests, which could accidentally send writes to the slaves.<\/p>\n<p>When headless Services are combined with StatefulSets, they can give you unique DNS addresses which return A records that point directly to Pods themselves. 
DNS names are in the format of <code>static-pod-name.headless-service-name.namespace.svc.cluster.local<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Service\napiVersion: v1\nmetadata:\n  name: redis-broker\nspec:\n  clusterIP: None\n  selector:\n    app: redis-broker\n  ports:\n  - port: 6379\n    targetPort: 6379\n---\nkind: StatefulSet\napiVersion: apps\/v1\nmetadata:\n  name: redis-broker\nspec:\n  replicas: 1\n  serviceName: redis-broker\n  selector:\n    matchLabels:\n      app: redis-broker\n  volumeClaimTemplates:\n  - metadata:\n      name: data\n    spec:\n      accessModes: [\"ReadWriteOnce\"]\n      storageClassName: ssd\n      resources:\n        requests:\n          storage: 32Gi\n  template:\n    metadata:\n      labels:\n        app: redis-broker\n    spec:\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n            - matchExpressions:\n              - key: cloud.google.com\/gke-preemptible\n                operator: NotIn\n                values:\n                - \"true\"\n      volumes:\n      - name: config\n        configMap:\n          name: redis-broker\n      containers:\n      - name: redis\n        image: redis:4.0.10-alpine\n        command: [\"redis-server\"]\n        args: [\"\/etc\/redis\/redis.conf\", \"--loglevel\", \"verbose\", \"--maxmemory\", \"1g\"]\n        ports:\n        - containerPort: 6379\n        volumeMounts:\n        - name: data\n          mountPath: \/data\n        - name: config\n          mountPath: \/etc\/redis\n        readinessProbe:\n          exec:\n            command: [\"sh\", \"-c\", \"redis-cli -h $(hostname) ping\"]\n          initialDelaySeconds: 5\n          timeoutSeconds: 1\n          periodSeconds: 1\n          successThreshold: 1\n          failureThreshold: 3\n        resources:\n          requests:\n            cpu: 250m\n            memory: 1G\n          limits:\n            cpu: 
1000m\n            memory: 1G<\/code><\/pre>\n<p>If <code>redis-broker<\/code> has 2 replicas, <code>nslookup redis-broker.default.svc.cluster.local<\/code> returns multiple A records. Returning multiple A records for a single DNS lookup is commonly known as round-robin DNS.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl run -i -t --image busybox dns-test --restart=Never --rm \/bin\/sh\n\n&gt; nslookup redis-broker.default.svc.cluster.local\nServer: 10.63.240.10\nAddress 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local\nName: redis-broker.default.svc.cluster.local\nAddress 1: 10.60.6.2 redis-broker-0.redis-broker.default.svc.cluster.local\nAddress 2: 10.60.6.7 redis-broker-1.redis-broker.default.svc.cluster.local\n\n&gt; nslookup redis-broker-0.redis-broker.default.svc.cluster.local\nServer: 10.63.240.10\nAddress 1: 10.63.240.10 kube-dns.kube-system.svc.cluster.local\nName: redis-broker-0.redis-broker.default\nAddress 1: 10.60.6.2 redis-broker-0.redis-broker.default.svc.cluster.local<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#headless-services\">https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/service\/#headless-services<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dns-pod-service\/#services\">https:\/\/kubernetes.io\/docs\/concepts\/services-networking\/dns-pod-service\/#services<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tutorials\/stateful-application\/basic-stateful-set\/#using-stable-network-identities\">https:\/\/kubernetes.io\/docs\/tutorials\/stateful-application\/basic-stateful-set\/#using-stable-network-identities<\/a><\/p>\n<p>Moreover, there is no port re-mapping for a headless Service: since the name resolves directly to Pod IPs, you must connect to the container's actual port rather than the Service's <code>port<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: Service\napiVersion: v1\nmetadata:\n  namespace: tick\n  name: influxdb\nspec:\n  clusterIP: None\n  selector:\n    app: 
influxdb\n  ports:\n  - name: api\n    port: 4444\n    targetPort: 8086\n  - name: admin\n    port: 8083\n    targetPort: 8083<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl apply -f tick\/ -R\n$ kubectl get svc --namespace tick\nNAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE\ninfluxdb   ClusterIP   None         &lt;none&gt;        4444\/TCP,8083\/TCP   1h\n\n$ curl http:\/\/influxdb.tick.svc.cluster.local:4444\/ping\ncurl: (7) Failed to connect to influxdb.tick.svc.cluster.local port 4444: Connection refused\n\n$ curl -I http:\/\/influxdb.tick.svc.cluster.local:8086\/ping\nHTTP\/1.1 204 No Content\nContent-Type: application\/json\nRequest-Id: 7fc09a56-8538-11e8-8d1d-000000000000<\/code><\/pre>\n<h2>Create A DaemonSet<\/h2>\n<p>Create a DaemonSet which changes OS kernel configurations on each node.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: DaemonSet\napiVersion: apps\/v1\nmetadata:\n  name: thp-disabler\nspec:\n  selector:\n    matchLabels:\n      app: thp-disabler\n  template:\n    metadata:\n      labels:\n        app: thp-disabler\n    spec:\n      hostPID: true\n      containers:\n      - name: configurer\n        image: gcr.io\/google-containers\/startup-script:v1\n        securityContext:\n          privileged: true\n        env:\n        - name: STARTUP_SCRIPT\n          value: |\n            #! 
\/bin\/bash\n            set -o errexit\n            set -o pipefail\n            set -o nounset\n\n            echo 'never' &gt; \/sys\/kernel\/mm\/transparent_hugepage\/enabled\n            echo 'never' &gt; \/sys\/kernel\/mm\/transparent_hugepage\/defrag<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/daemonset\/<\/a><\/p>\n<h2>Create A CronJob<\/h2>\n<p>Back up your MongoDB database every hour.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: CronJob\napiVersion: batch\/v1beta1\nmetadata:\n  name: backup-mongodb-rs0\nspec:\n  suspend: false\n  schedule: \"30 * * * *\"\n  startingDeadlineSeconds: 600\n  jobTemplate:\n    spec:\n      template:\n        spec:\n          restartPolicy: OnFailure\n          affinity:\n            nodeAffinity:\n              requiredDuringSchedulingIgnoredDuringExecution:\n                nodeSelectorTerms:\n                - matchExpressions:\n                  - key: custom.kubernetes.io\/scopes-storage-full\n                    operator: In\n                    values:\n                    - \"true\"\n          volumes:\n          - name: backups-dir\n            emptyDir: {}\n          initContainers:\n          - name: clean\n            image: busybox\n            command: [\"sh\", \"-c\", \"rm -rf \/backups\/*\"]\n            volumeMounts:\n            - name: backups-dir\n              mountPath: \/backups\n          - name: backup\n            image: vinta\/mongodb-tools:4.0.1\n            workingDir: \/backups\n            command: [\"sh\", \"-c\"]\n            args:\n            - mongodump --host=$MONGODB_URL --readPreference=secondaryPreferred --oplog --gzip --archive=$(date +%Y-%m-%dT%H-%M-%S).tar.gz\n            env:\n            - name: MONGODB_URL\n              value: 
mongodb-rs0-0.mongodb-rs0.default.svc.cluster.local,mongodb-rs0-1.mongodb-rs0.default.svc.cluster.local,mongodb-rs0-3.mongodb-rs0.default.svc.cluster.local\n            volumeMounts:\n            - name: backups-dir\n              mountPath: \/backups\n            resources:\n              requests:\n                cpu: 2\n                memory: 2G\n          containers:\n          - name: upload\n            image: google\/cloud-sdk:alpine\n            workingDir: \/backups\n            command: [\"sh\", \"-c\"]\n            args:\n            - gsutil -m cp -r . gs:\/\/$(GOOGLE_CLOUD_STORAGE_BUCKET)\n            env:\n            - name: GOOGLE_CLOUD_STORAGE_BUCKET\n              value: simple-project-backups\n            volumeMounts:\n            - name: backups-dir\n              mountPath: \/backups\n              readOnly: true<\/code><\/pre>\n<p>Note: the parentheses syntax, <code>$(VAR)<\/code>, is required for an environment variable to be expanded in the <code>command<\/code> or <code>args<\/code> field.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">apiVersion: batch\/v1beta1\nkind: CronJob\nmetadata:\n  name: simple-api-send-email\nspec:\n  schedule: \"*\/30 * * * *\"\n  concurrencyPolicy: Forbid\n  jobTemplate:\n    spec:\n      template:\n        spec:\n          restartPolicy: Never\n          containers:\n          - name: simple-api-send-email\n            image: asia.gcr.io\/simple-project-198818\/simple-api:4fc4199\n            command: [\"flask\", \"shell\", \"-c\"]\n            args:\n            - |\n              from bar.tasks import send_email\n              send_email.delay('Hey!', 'Stand up!', to=['vinta.chen@gmail.com'])\n            envFrom:\n            - configMapRef:\n                name: simple-api<\/code><\/pre>\n<p>You could just write a simple Python script as a CronJob since everything is containerized.<\/p>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/tasks\/job\/automated-tasks-with-cron-jobs\/\">https:\/\/kubernetes.io\/docs\/tasks\/job\/automated-tasks-with-cron-jobs\/<\/a><\/p>\n<h2>Define NodeAffinity And PodAntiAffinity<\/h2>\n<p>Prevent Pods from being scheduled on preemptible nodes. Also, you should generally prefer <code>nodeAffinity<\/code> over <code>nodeSelector<\/code> since it is more expressive.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: StatefulSet\napiVersion: apps\/v1\nspec:\n  template:\n    spec:\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n            - matchExpressions:\n              - key: cloud.google.com\/gke-preemptible\n                operator: NotIn\n                values:\n                - \"true\"<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/medium.com\/google-cloud\/using-preemptible-vms-to-cut-kubernetes-engine-bills-in-half-de2481b8e814\">https:\/\/medium.com\/google-cloud\/using-preemptible-vms-to-cut-kubernetes-engine-bills-in-half-de2481b8e814<\/a><\/p>\n<p><code>spec.affinity.podAntiAffinity<\/code> ensures that Pods of the same Deployment or StatefulSet are not co-located on a single node.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">kind: StatefulSet\napiVersion: apps\/v1\nspec:\n  template:\n    spec:\n      affinity:\n        podAntiAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            - topologyKey: \"kubernetes.io\/hostname\"\n              labelSelector:\n                matchExpressions:\n                  - key: \"app\"\n                    operator: In\n                    values:\n                    - mongodb-rs0<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/assign-pod-node\/<\/a><\/p>\n<h2>Migrate Pods from Old Nodes to New Nodes<\/h2>\n<ul>\n<li>Cordon marks old nodes as 
unschedulable<\/li>\n<li>Drain evicts all Pods on old nodes<\/li>\n<\/ul>\n<pre class=\"line-numbers\"><code class=\"language-console\">for node in $(kubectl get nodes -l cloud.google.com\/gke-nodepool=n1-standard-4-pre -o=name); do\n  kubectl cordon \"$node\";\ndone\n\nfor node in $(kubectl get nodes -l cloud.google.com\/gke-nodepool=n1-standard-4-pre -o=name); do\n  kubectl drain --ignore-daemonsets --delete-local-data --grace-period=2 \"$node\";\ndone\n\n$ kubectl get nodes\nNAME                                       STATUS                     ROLES     AGE       VERSION\ngke-demo-default-pool-3c058fcf-x7cv        Ready                      &lt;none&gt;    2h        v1.11.6-gke.6\ngke-demo-default-pool-58da1098-1h00        Ready                      &lt;none&gt;    2h        v1.11.6-gke.6\ngke-demo-default-pool-fc34abbf-9dwr        Ready                      &lt;none&gt;    2h        v1.11.6-gke.6\ngke-demo-n1-standard-4-pre-1a54e45a-0m7p   Ready,SchedulingDisabled   &lt;none&gt;    58m       v1.11.6-gke.6\ngke-demo-n1-standard-4-pre-1a54e45a-mx3h   Ready,SchedulingDisabled   &lt;none&gt;    58m       v1.11.6-gke.6\ngke-demo-n1-standard-4-pre-1a54e45a-qhdz   Ready,SchedulingDisabled   &lt;none&gt;    58m       v1.11.6-gke.6<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#cordon\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#cordon<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#drain\">https:\/\/kubernetes.io\/docs\/reference\/generated\/kubectl\/kubectl-commands#drain<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/kubernetes-engine\/docs\/tutorials\/migrating-node-pool\">https:\/\/cloud.google.com\/kubernetes-engine\/docs\/tutorials\/migrating-node-pool<\/a><\/p>\n<h2>Show Objects' Events<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl get events -w\n$ kubectl get 
events -w --sort-by=.metadata.creationTimestamp\n$ kubectl get events -w --sort-by=.metadata.creationTimestamp | grep mongo<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/\">https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/<\/a><\/p>\n<p>You could find more comprehensive logs on <a href=\"https:\/\/cloud.google.com\/logging\/\">Google Cloud Stackdriver Logging<\/a> if you are using GKE.<\/p>\n<h2>View Pods' Logs on Stackdriver Logging<\/h2>\n<p>You could use the following search formats.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-properties\">textPayload:\"OBJECT_FINALIZE\"\n\nlogName=\"projects\/simple-project-198818\/logs\/worker\"\ntextPayload:\"Added media preset\"\n\nlogName=\"projects\/simple-project-198818\/logs\/beat\"\ntextPayload:\"backend_cleanup\"\n\nresource.labels.pod_id=\"simple-api-6744bf74db-529qf\"\ntextPayload:\"5adb2bd460d6487649fe82ea\"\ntimestamp&gt;=\"2018-04-21T12:00:00Z\"\ntimestamp&lt;=\"2018-04-21T16:00:00Z\"\n\nresource.type=\"k8s_container\"\nresource.labels.cluster_name=\"production\"\nresource.labels.namespace_id=\"default\"\nresource.labels.pod_id:\"simple-worker\"\ntextPayload:\"ConcurrentObjectUseError\"\n\nresource.type=\"k8s_node\"\nresource.labels.location=\"asia-east1\"\nresource.labels.cluster_name=\"production\"\nlogName=\"projects\/simple-project-198818\/logs\/node-problem-detector\"\n\n# see a Pod's logs\nresource.type=\"k8s_container\"\nresource.labels.cluster_name=\"production\"\nresource.labels.namespace_id=\"default\"\nresource.labels.pod_name=\"cache-redis-0\"\n\"start\"\n\n# see a Node's logs\nresource.type=\"k8s_node\"\nresource.labels.location=\"asia-east1\"\nresource.labels.cluster_name=\"production\"\nresource.labels.node_name=\"gke-production-n1-highmem-32-p0-2bd334ec-v4ng\"\n\"start\"<\/code><\/pre>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/logging-stackdriver\/\">https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/logging-stackdriver\/<\/a><br \/>\n<a href=\"https:\/\/cloud.google.com\/logging\/docs\/view\/advanced-filters\">https:\/\/cloud.google.com\/logging\/docs\/view\/advanced-filters<\/a><\/p>\n<h2>Best Practices<\/h2>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/solutions\/best-practices-for-building-containers\">https:\/\/cloud.google.com\/solutions\/best-practices-for-building-containers<\/a><br \/>\n<a href=\"https:\/\/medium.com\/@sachin.arote1\/kubernetes-best-practices-9b1435a4cb53\">https:\/\/medium.com\/@sachin.arote1\/kubernetes-best-practices-9b1435a4cb53<\/a><br \/>\n<a href=\"https:\/\/medium.com\/@brendanrius\/scaling-kubernetes-for-25m-users-a7937e3536a0\">https:\/\/medium.com\/@brendanrius\/scaling-kubernetes-for-25m-users-a7937e3536a0<\/a><\/p>\n<h2>Common Issues<\/h2>\n<h2>Switch Contexts<\/h2>\n<p>Get authentication credentials to allow your <code>kubectl<\/code> to interact with the cluster.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ gcloud container clusters get-credentials demo --project simple-project-198818<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/container\/clusters\/get-credentials\">https:\/\/cloud.google.com\/sdk\/gcloud\/reference\/container\/clusters\/get-credentials<\/a><br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/\">https:\/\/kubernetes.io\/docs\/tasks\/access-application-cluster\/configure-access-multiple-clusters\/<\/a><\/p>\n<p>A Context is roughly a configuration profile which indicates the cluster, the namespace, and the user you use. 
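A context entry in your kubeconfig ties those three together; as a rough sketch (the exact names vary per cluster, and <code>gcloud container clusters get-credentials<\/code> generates them for you):<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\"># an illustrative kubeconfig context entry (values are examples)\ncontexts:\n- name: gke_simple-project-198818_asia-east1_demo\n  context:\n    cluster: gke_simple-project-198818_asia-east1_demo\n    user: gke_simple-project-198818_asia-east1_demo\n    namespace: default<\/code><\/pre>\n<p>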
Contexts are stored in <code>~\/.kube\/config<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl config get-contexts\n$ kubectl config use-context gke_simple-project-198818_asia-east1_demo\n$ kubectl config view<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/\">https:\/\/kubernetes.io\/docs\/concepts\/configuration\/organize-cluster-access-kubeconfig\/<\/a><\/p>\n<p>The recommended way to switch contexts is using <code>fubectl<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kcs<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/kubermatic\/fubectl\">https:\/\/github.com\/kubermatic\/fubectl<\/a><\/p>\n<h2>Pending Pods<\/h2>\n<p>One of the most common reasons for Pending Pods is a lack of resources.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl describe pod mongodb-rs0-1\n...\nEvents:\nType       Reason              Age                  From                 Message\n----       ------              ----                 ----                 -------\nWarning    FailedScheduling    3m (x739 over 1d)    default-scheduler    0\/3 nodes are available: 1 ExistingPodsAntiAffinityRulesNotMatch, 1 MatchInterPodAffinity, 1 NodeNotReady, 2 NoVolumeZoneConflict, 3 Insufficient cpu, 3 Insufficient memory, 3 MatchNodeSelector.\n...<\/code><\/pre>\n<p>You could resize the node pool to add more capacity.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ gcloud container clusters resize demo --node-pool=n1-standard-4-pre --size=5 --region=asia-east1<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/run-application\/force-delete-stateful-set-pod\/\">https:\/\/kubernetes.io\/docs\/tasks\/run-application\/force-delete-stateful-set-pod\/<\/a><\/p>\n<h2>Init:Error Pods<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl describe pod 
mongodump-sh0-1543978800-bdkhl\n$ kubectl logs mongodump-sh0-1543978800-bdkhl -c mongodump<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/debug-init-containers\/#accessing-logs-from-init-containers\">https:\/\/kubernetes.io\/docs\/tasks\/debug-application-cluster\/debug-init-containers\/#accessing-logs-from-init-containers<\/a><\/p>\n<h2>CrashLoopBackOff Pods<\/h2>\n<p><code>CrashLoopBackOff<\/code> means the Pod is starting, then crashing, then starting again and crashing again.<\/p>\n<p>When in doubt, <code>kubectl describe<\/code>.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-console\">$ kubectl describe pod the-pod-name\n$ kubectl logs the-pod-name --previous<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/www.krenger.ch\/blog\/crashloopbackoff-and-how-to-fix-it\/\">https:\/\/www.krenger.ch\/blog\/crashloopbackoff-and-how-to-fix-it\/<\/a><br \/>\n<a href=\"https:\/\/sysdig.com\/blog\/debug-kubernetes-crashloopbackoff\/\">https:\/\/sysdig.com\/blog\/debug-kubernetes-crashloopbackoff\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Kubernetes is the de facto standard of container orchestration. 
Google Kubernetes Engine (GKE) is a managed Kubernetes as a Service provided by Google Cloud Platform.<\/p>\n","protected":false},"author":1,"featured_media":610,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[38,116],"tags":[37,114,123,119,31],"class_list":["post-609","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-about-devops","category-about-web-development","tag-celery","tag-google-cloud-platform","tag-kubernetes","tag-mongodb","tag-redis"],"_links":{"self":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/609","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/comments?post=609"}],"version-history":[{"count":0,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/609\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media\/610"}],"wp:attachment":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media?parent=609"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/categories?post=609"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/tags?post=609"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}