`sysctl -w` only modifies parameters at runtime; they revert to their default values after the system is restarted. You must write those settings to `/etc/sysctl.conf` to make them persistent.
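For example, the following changes a value for the current boot only, then reloads `/etc/sysctl.conf` after you edit it:

$ sudo sysctl -w vm.swappiness=10
$ sudo sysctl -p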
# Do less swapping
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
# Prevents SYN flood DoS attacks. Applies to IPv6 as well, despite the name.
net.ipv4.tcp_syncookies = 1
# Prevents IP spoofing.
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
# Only groups whose GID falls within this range can use ping.
net.ipv4.ping_group_range = 999 59999
# Redirects can potentially be used to maliciously alter a host's routing table.
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 1
net.ipv6.conf.all.accept_redirects = 0
# The source routing feature has known vulnerabilities.
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
# See RFC 1337
net.ipv4.tcp_rfc1337 = 1
# Enable IPv6 Privacy Extensions (see RFC4941 and RFC3041)
net.ipv6.conf.default.use_tempaddr = 2
net.ipv6.conf.all.use_tempaddr = 2
# Reboot the machine 120 seconds after a kernel panic
kernel.panic = 120
# Users should not be able to create soft or hard links to files which they do not own. This mitigates several privilege escalation vulnerabilities.
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
$ sudo vim /etc/sysctl.conf
fs.file-max = 601017
$ sudo sysctl -p
$ sudo vim /etc/security/limits.d/nofile.conf
* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
$ ulimit -n 65535
OS error code 99: Cannot assign requested address
For MySQL: this happens when there are no local network (ephemeral) ports left. You might need to set `net.ipv4.tcp_tw_reuse = 1` instead of `net.ipv4.tcp_tw_recycle = 1`.
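As a quick sketch, you can inspect and widen the ephemeral port range and enable reuse of TIME_WAIT sockets at runtime; the range below is illustrative, not a recommendation:

$ sysctl net.ipv4.ip_local_port_range
$ sudo sysctl -w net.ipv4.ip_local_port_range='15000 65000'
$ sudo sysctl -w net.ipv4.tcp_tw_reuse=1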
$ vim /etc/rc.local
#!/bin/sh
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
fi
$ systemctl enable rc-local
If /etc/rc.local doesn't exist, create it and run chmod +x /etc/rc.local; the file must be executable for the rc-local service to run it.
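To verify that transparent huge pages stay disabled after a reboot (the active value is shown in brackets):

$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]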
This article is about how to deploy a scalable WordPress site on Google Kubernetes Engine.
Using the container version of the popular LEMP stack:
Linux (Docker containers)
NGINX
MySQL (Google Cloud SQL)
PHP (PHP-FPM)
Google Cloud Platform Pricing
Deploying a personal blog on Kubernetes sounds like overkill (I must admit, it does). Still, it is fun and excellent practice to containerize a traditional application like WordPress, which is harder than you might think. More importantly, the financial cost of running a Kubernetes cluster on GKE can be pretty low if you use preemptible VMs, which also gives you native Chaos Engineering!
Cloud SQL is the fully managed relational database service on Google Cloud, though it currently only supports MySQL 5.6 and 5.7.
You can simply create a MySQL instance with a few clicks in the Google Cloud Platform Console or via the CLI. It is recommended to enable Private IP, which allows connections over VPC networking and is never exposed to the public Internet. Nevertheless, you have to turn on Public IP if you would like to connect to the instance from your local machine; otherwise, you might see something like couldn't connect to "xxx": dial tcp 10.x.x.x:3307: connect: network is unreachable. Remember to set IP whitelists for Public IP.
Connect to a Cloud SQL instance from your local machine:
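For instance, with gcloud and a placeholder instance name:

$ gcloud sql connect my-instance --user=root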
The master of your Google Kubernetes Engine cluster is managed by GKE itself; as a result, you only need to provision and pay for worker nodes. There are no cluster management fees.
You can create a Kubernetes cluster in the Google Cloud Platform Console or via the CLI, and there are some useful settings you might like to turn on.
Over-provisioning is human nature, so don't spend too much time choosing the right machine type for your Kubernetes cluster at the beginning; you are very likely to overprovision without real usage data at hand. Instead, after deploying your workloads, find out the actual resource usage from Stackdriver Monitoring or GKE usage metering, then adjust your node pools.
Some useful node pool configurations:
Enable preemptible nodes
Access scopes > Set access for each API:
Enable Cloud SQL
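A sketch of creating such a node pool with gcloud; the cluster and pool names are placeholders, and the scope aliases may vary with your gcloud version:

$ gcloud container node-pools create preemptible-pool \
    --cluster=my-cluster \
    --preemptible \
    --scopes=gke-default,sql-admin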
After the cluster is created, you can now configure your kubectl:
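For example, with a placeholder cluster name and zone:

$ gcloud container clusters get-credentials my-cluster --zone=asia-east1-a
$ kubectl get nodes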
Here comes the tricky part: containerizing a WordPress site is not as simple as pulling a Docker image and setting `replicas: 10`, since WordPress is a totally stateful application. Especially:
MySQL Database
The wp-content folder
The dependency on MySQL is relatively easy to solve since it is an external service. Your MySQL database could be managed, self-hosted, single machine, master-slave, or multi-master. However, horizontally scaling a database is another story, so here we focus only on WordPress.
Next up is the notorious wp-content folder, which contains plugins, themes, and uploads.
Users (site owners, editors, or any logged-in users) can upload images or even videos to a WordPress site if you allow them to. For those uploaded files, it is best to copy them to Amazon S3 or Google Cloud Storage automatically after a user uploads one. Also, don't forget to configure a CDN to point at your bucket. Luckily, there are already plugins for such tasks.
Both storage services support direct uploads: the uploading file goes to S3 or GCS directly without touching your servers, but you might need to write some code to achieve that.
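For instance, on GCS you could sign a short-lived URL that lets the client PUT a file straight into the bucket; the key file, bucket, and object path here are placeholders:

$ gsutil signurl -m PUT -d 10m service-account.json gs://my-bucket/uploads/photo.jpg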
Pre-installed Plugins and Themes
You would usually deploy multiple WordPress Pods in Kubernetes, and each Pod has its own resources: CPU, memory, and storage. Anything written to a local volume is ephemeral and only exists within the Pod's lifecycle. When you install a new plugin through the WordPress admin dashboard, it is only installed on the local disk of one Pod, the one serving your request at the time. Subsequent requests inevitably go to other Pods because of the nature of Service load balancing, and those Pods do not have the plugin files, even though the plugin is marked as activated in the database, which causes inconsistency.
There are two solutions for plugins and themes:
A shared writable network filesystem mounted by each Pod
An immutable Docker image which pre-installs every needed plugin and theme
For the first solution, you can set up an NFS server, a Ceph cluster, or any other network-attached filesystem. An NFS server might be the simplest way, although it could also easily become a single point of failure in your architecture. Fortunately, managed network filesystem services are available from the major cloud providers, like Amazon EFS and Google Cloud Filestore. In fact, Kubernetes provides the ReadWriteMany access mode for PersistentVolumes (the volume can be mounted as read-write by many nodes). Still, only a few volume types support it, and they don't include gcePersistentDisk and awsElasticBlockStore.
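As a sketch, a shared wp-content claim might look like this, assuming your cluster has a StorageClass backed by NFS or Filestore (the class name here is hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-content
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi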
However, I personally adopt the second solution: creating Docker images that contain pre-installed plugins and themes through CI, since it is more immutable and has no network latency issue as NFS does. Besides, I don't install new plugins frequently. Regretfully, some plugins might still write data to the local disk directly, and most of the time we cannot prevent that.
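A minimal sketch of such a Dockerfile, assuming a hypothetical plugin URL; the official wordpress image keeps its source tree in /usr/src/wordpress and copies it into the web root on first startup:

FROM wordpress:fpm
# bake a plugin into the image; the plugin name and URL are placeholders
RUN apt-get update && apt-get install -y --no-install-recommends unzip \
    && rm -rf /var/lib/apt/lists/* \
    && curl -fsSL -o /tmp/some-plugin.zip https://downloads.wordpress.org/plugin/some-plugin.zip \
    && unzip /tmp/some-plugin.zip -d /usr/src/wordpress/wp-content/plugins/ \
    && rm /tmp/some-plugin.zip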
Just put the Dockerfile into the root directory of your GitHub repository. Don't forget to store Docker images near your servers' location, in my case, asia.gcr.io.
Moreover, the official documentation recommends using --cache-from to speed up Docker builds.
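For example, pulling the previously pushed image first so Docker can reuse its layers (the project name is a placeholder):

$ docker pull asia.gcr.io/my-project/wordpress:latest || true
$ docker build --cache-from asia.gcr.io/my-project/wordpress:latest \
    -t asia.gcr.io/my-project/wordpress:latest .
$ docker push asia.gcr.io/my-project/wordpress:latest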
The wordpress image supports setting configurations through environment variables, though I prefer to store the whole wp-config.php in a ConfigMap, which is more convenient. It is also worth noting that you need to use the same set of WordPress secret keys (AUTH_KEY, LOGGED_IN_KEY, etc.) for all of your WordPress replicas; otherwise, you might encounter login failures due to mismatched login cookies.
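For instance, creating the ConfigMap straight from the file (the ConfigMap name is arbitrary):

$ kubectl create configmap wordpress-config --from-file=wp-config.php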
Of course, you can use a base64-encoded (NOT ENCRYPTED!) Secret to store sensitive data.
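For example, with a hypothetical key and value:

$ kubectl create secret generic wordpress-secrets --from-literal=WORDPRESS_DB_PASSWORD='s3cr3t'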
WP-Cron is the way WordPress handles scheduling time-based tasks. The problem is how WP-Cron works: on every page load, a list of scheduled tasks is checked to see what needs to be run. Therefore, you might consider replacing WP-Cron with a regular Kubernetes CronJob.
// in wp-config.php
define('DISABLE_WP_CRON', true);
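A sketch of such a CronJob that requests wp-cron.php every five minutes; the image, domain, and schedule are assumptions, and the batch API version depends on your cluster:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: wordpress-cron
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: wp-cron
            image: curlimages/curl  # any image with curl works
            args: ["-fsS", "https://example.com/wp-cron.php?doing_wp_cron"]
          restartPolicy: OnFailure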
If a picture is worth a thousand words, then a video is worth a million. This video accurately describes how we ultimately deploy a WordPress site on Kubernetes.
mitmproxy is your swiss-army knife for interactive HTTP/HTTPS proxying. In fact, it can be used to intercept, inspect, modify, and replay web traffic such as HTTP/1, HTTP/2, WebSockets, or any other SSL/TLS-protected protocol.
Moreover, mitmproxy has a powerful Python API that offers full control over any intercepted request and response.
You can use your own certificate by passing the --certs example.com=/path/to/example.com.pem option to mitmproxy. Mitmproxy then uses the provided certificate for interception of the specified domain.
The certificate file is expected to be in the PEM format, which roughly looks like this (a skeleton with the private key first, followed by the certificate chain):
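-----BEGIN PRIVATE KEY-----
<private key>
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
<cert, plus any intermediate certs>
-----END CERTIFICATE-----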
You could use a negative regex with --ignore-hosts to only watch specific domains. Of course, you are still able to blacklist any domains you don't want: --ignore-hosts 'apple.com|icloud.com|itunes.com|facebook.com|googleapis.com|crashlytics.com'.
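A sketch with a negative lookahead, so only example.com traffic is intercepted (mitmproxy matches the pattern against host:port):

$ mitmproxy --ignore-hosts '^(?!.*example\.com)'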
Currently, changing the Host for HTTP/2 connections is not allowed, but you could simply disable the HTTP/2 proxy to work around the issue if you don't need HTTP/2 for local development.
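For example:

$ mitmproxy --no-http2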
A replica set is a group of servers (mongod processes, actually) that maintain the same data set, with one primary that takes client requests, and multiple secondaries that keep copies of the primary's data. If the primary crashes, the secondaries can elect a new primary from among themselves.
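For instance, a sketch of initiating a three-member replica set from the mongo shell, with placeholder hostnames:

> rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongodb-0:27017" },
      { _id: 1, host: "mongodb-1:27017" },
      { _id: 2, host: "mongodb-2:27017" }
    ]
  })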
Replication from primary to secondaries is asynchronous.
If your replica set has an even number of members, add an arbiter to obtain a majority of votes in an election for primary. Arbiters do not require dedicated hardware.
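For example, from the primary (the hostname is a placeholder):

rs0:PRIMARY> rs.addArb("mongodb-arbiter:27017")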
InvalidReplicaSetConfig: Our replica set configuration is invalid or does not include us
$ kubectl logs -f mongodb-rs0-0
REPL_HB [replexec-10] Error in heartbeat (requestId: 20048) to mongodb-rs0-2.mongodb-rs0:27017, response status: InvalidReplicaSetConfig: Our replica set configuration is invalid or does not include us
The faulty member's state is REMOVED (it was once in a replica set but was subsequently removed), and it shows Our replica set config is invalid or we are not a member of it. In fact, the real issue is that the removed node is still in the list of replica set members.
You could just manually remove the broken node from the replica set on the primary, restart the node, and re-add the node.
$ mongo mongodb-rs0-0.mongodb-rs0.default.svc.cluster.local
rs0:PRIMARY> rs.remove("mongodb-rs0-2.mongodb-rs0.default.svc.cluster.local:27017")
# restart the Pod
$ kubectl delete pod mongodb-rs0-2
$ mongo mongodb-rs0-0.mongodb-rs0.default.svc.cluster.local
rs0:PRIMARY> rs.add("mongodb-rs0-2.mongodb-rs0.default.svc.cluster.local:27017")
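Afterwards, verify that every member is back in the PRIMARY or SECONDARY state:

rs0:PRIMARY> rs.status()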
AWS Lambda lets you run code without provisioning or managing servers, which is so-called Serverless or Function as a Service (FaaS).
Apex is a Go command-line tool to manage and deploy your serverless functions on AWS Lambda. Apex is also integrated with Terraform to provide cloud infrastructure management, for instance, configuring your AWS Lambda functions with Amazon API Gateway.
After running apex init, Apex creates a Role and a Policy. You should be able to find them in the AWS IAM Management Console. If you want to access other AWS resources in your Lambda functions, for instance, S3 buckets, DynamoDB tables, or SNS, you must create a new Policy that grants the appropriate permissions and attach it to the Role that Apex created.
Here is an example Policy for operating certain DynamoDB tables (the account ID and table name below are placeholders):
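{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:ap-northeast-1:123456789012:table/my-table"
    }
  ]
}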
Your "Integration Request" configurations in API Gateway should be like:
Integration type: Lambda Function
Use Lambda Proxy integration: Yes
Lambda Region: ap-northeast-1
Lambda Function: panguspace_spacing_text
Invoke with caller credentials: No
Credentials cache: Do not add caller credentials to cache key
Use Default Timeout: Yes
It's also worth noting that the API response is mainly defined by the APIGatewayProxyResponse in your Lambda function code; configurations in API Gateway, i.e., "Integration Response" and "Method Response", do not matter.
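A minimal sketch of such a handler using aws-lambda-go:

package main

import (
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler alone decides the HTTP status code, headers, and body of the API response.
func handler(req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{
		StatusCode: 200,
		Headers:    map[string]string{"Content-Type": "application/json"},
		Body:       `{"ok": true}`,
	}, nil
}

func main() {
	lambda.Start(handler)
}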
# download provider plugins
$ apex infra init
# view the generated execution plan
$ apex infra plan
# deploy your infrastructure
$ apex infra apply
# skip the interactive approval prompt
$ apex infra apply -auto-approve