Expose a Local Service with Cloudflare Tunnel

Expose a service running on your local machine to a remote server without opening any ports. For instance, let your OpenClaw agent (the remote server) access qBittorrent Web UI on your Mac (the local machine), to download a movie for you.

The local machine makes an outbound-only connection to Cloudflare. The remote server hits your subdomain on Cloudflare's edge. Traffic flows:

OpenClaw on your remote server -> https://your-tunnel-name.example.com -> Cloudflare edge servers -> Cloudflare Tunnel -> qBittorrent Web UI on your local machine

You can probably do the same thing with Tailscale, but unfortunately the Tailscale app doesn't work well with Mullvad VPN on macOS (and I don't want to use Tailscale's Mullvad VPN add-on).

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/
https://tailscale.com/docs/features/exit-nodes/mullvad-exit-nodes

Setup

1. Create Cloudflare Tunnel

Do this from any device where you're logged into Cloudflare. No login needed on the local machine or the remote server.

  1. Go to Cloudflare Zero Trust dashboard
  2. Networks -> Connectors -> Create a tunnel -> Cloudflared
    • Name your tunnel: your-tunnel-name
  3. Copy the tunnel token
  4. Configure the tunnel you just created -> Published application routes -> Add a published application route
    • Subdomain: your-tunnel-name
    • Domain: select your domain from the dropdown (e.g., example.com)
    • Path: [leave empty]
    • Service:
      • Type: HTTP
      • URL: localhost:8080
  5. After you create the published application route, Cloudflare will automatically create the DNS record for your subdomain

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/get-started/tunnel-useful-terms/
https://developers.cloudflare.com/cloudflare-one/networks/routes/add-routes/

2. Access Controls for Cloudflare Tunnel

Still in the Cloudflare Zero Trust dashboard.

  1. Access controls -> Service credentials -> Service Tokens -> Create Service Token
    • Token name: your-token-name
    • Service Token Duration: Non-expiring
    • Save the CF-Access-Client-Id and CF-Access-Client-Secret (shown only once)
  2. Access controls -> Policies -> Add a policy
    • Policy name: your-policy-name
    • Action: Service Auth
    • Session duration: 24 hours
    • Configure rules -> Include:
      • Selector: Service Token
      • Value: select the service token you just created (e.g., your-token-name)
  3. Access controls -> Applications -> Add an application -> Self-hosted
    • Application name: your-tunnel-name
    • Session Duration: 24 hours
    • Add public hostname:
      • Input method: Default
      • Subdomain: your-tunnel-name (must match the subdomain in step 1.4)
      • Domain: select your domain from the dropdown (e.g., example.com)
      • Path: [leave empty]
    • Select existing policies (this text is a clickable button, not a label!)
      • Check the policy you created in step 2.2

ref:
https://developers.cloudflare.com/cloudflare-one/access-controls/service-credentials/service-tokens/
https://developers.cloudflare.com/cloudflare-one/access-controls/policies/
https://developers.cloudflare.com/cloudflare-one/access-controls/applications/http-apps/

3. Run cloudflared on Local Machine (macOS)

Make cloudflared run on boot, connecting outbound to Cloudflare. No browser auth ever needed.

brew install cloudflared

# install as a launchd service using the tunnel token from step 1
sudo cloudflared service install YOUR_TUNNEL_TOKEN

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/downloads/

To verify it's running:

sudo launchctl list | grep cloudflared

4. Access the Local Service on Remote Server

Test that the tunnel and access policy work. We're accessing qBittorrent Web UI here:

curl \
  -H "CF-Access-Client-Id: $YOUR_CF_ACCESS_CLIENT_ID" \
  -H "CF-Access-Client-Secret: $YOUR_CF_ACCESS_CLIENT_SECRET" \
  -d "username=YOUR_USERNAME&password=YOUR_PASSWORD" \
  https://your-tunnel-name.example.com/api/v2/auth/login

The CF-Access-XXX headers must be included on every request. Without them, Cloudflare returns a 302 redirect to a login page.
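If you're scripting the remote side in Python rather than curl, the same two headers can be attached to every request. A stdlib-only sketch; the hostname and API path are placeholders:

```python
import urllib.request

def cf_access_request(url: str, client_id: str, client_secret: str) -> urllib.request.Request:
    # Cloudflare Access checks these two headers on every request;
    # without them you get a 302 to a login page instead of your service
    return urllib.request.Request(
        url,
        headers={
            "CF-Access-Client-Id": client_id,
            "CF-Access-Client-Secret": client_secret,
        },
    )

req = cf_access_request(
    "https://your-tunnel-name.example.com/api/v2/app/version",
    "YOUR_CF_ACCESS_CLIENT_ID",
    "YOUR_CF_ACCESS_CLIENT_SECRET",
)
# urllib.request.urlopen(req) would perform the actual call
print(req.get_header("Cf-access-client-id"))  # → YOUR_CF_ACCESS_CLIENT_ID
```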

ref:
https://github.com/qbittorrent/qBittorrent/wiki/#webui

Why Cloudflare Tunnel Over Tailscale

  • No login on endpoints: The tunnel token is scoped to one tunnel, can't access your Cloudflare account
  • No VPN conflicts: cloudflared is just outbound HTTPS, Mullvad VPN doesn't care
  • Free: Cloudflare Zero Trust free tier covers this

Claude Code: Things I Learned After Using It Every Day

I've used Claude Code daily since it came out. Here are the best practices, tools, and configuration patterns I've picked up. Most of this applies to other coding agents (Codex, Gemini CLI) too.

TL;DR
My dotfiles, configs, and skills for Claude Code:
https://github.com/vinta/hal-9000

CLAUDE.md (or AGENTS.md)

The Global CLAUDE.md

Your ~/.claude/CLAUDE.md should only contain your preferences and nudges to correct agent behaviors. You probably don't need to tell it YAGNI or KISS; those principles are already built in.

Pro tip: before adding something to CLAUDE.md, ask it, "Is this already covered in your system prompt?"

Here are some key parts of my CLAUDE.md:

<prefer_online_sources>
Use the find-docs skill or WebSearch to verify before relying on pre-trained knowledge. Look things up when:
- Writing code that uses libraries, APIs, or CLI tools
- Configuring tools, services, or environment variables
- Checking if a stdlib replacement exists for a third-party package
- Pinning dependency versions — always check the latest
- Unsure about exact syntax, flags, or config format
- Making confident assertions about external tool behavior
</prefer_online_sources>

<auto_commit if="you have completed the user's requested change">
Use the commit skill to commit. Don't batch unrelated changes into one commit.
</auto_commit>

The Project CLAUDE.md

For project-specific instructions, put them in the project-level CLAUDE.md.

The highest-signal content in your project CLAUDE.md (or any skill) is the Gotchas section. Build these from the failure points Claude Code actually runs into.

Per File Type Rules

For language-specific or per-file rules, put them in ~/.claude/rules/, so Claude Code only loads them when editing those file types.

For instance, ~/.claude/rules/python.md:

---
paths:
  - "**/*.py"
---

# Python

- When choosing a Python library or tool, search online and check https://awesome-python.com/llms.txt for curated alternatives before picking one
- Before adding a dependency, search PyPI or the web for the latest version
- Pin exact dependency versions in pyproject.toml — no >=, ~=, or ^ specifiers
- Target Python >=3.13 by default — if a project sets an explicit version (e.g. requires-python in pyproject.toml), follow that instead
- Use modern syntax: X | Y unions, match/case, tomllib
- Scripts run by system python3 must work on Python 3.9 — add from __future__ import annotations and avoid 3.10+ stdlib APIs
- Use uv for project and environment management
  - uv run instead of python3 — picks up the project venv and dependencies automatically
- Use ruff for linting and formatting
- Use pytest for testing
  - assert is fine in tests but use # noqa: S101 assert elsewhere
- Use pathlib.Path over os.path
- Use TypedDict for structured dicts (hook inputs, configs) — not plain dicts or dataclasses
- Use keyword-only args (*) for optional/config parameters: def run(cmd, *, shell=True)
- All # noqa comments must include the rule name: # noqa: S603 subprocess-without-shell-equals-true or # noqa: S603 PLW1510 subprocess-without-shell-equals-true subprocess-run-without-check if multiple rules
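A few of these rules combined in one sketch (HookInput and run are made-up names for illustration):

```python
from __future__ import annotations  # keeps the file importable on older system pythons

from typing import TypedDict

class HookInput(TypedDict):
    # TypedDict for structured dicts such as hook inputs, per the rules above
    tool_name: str
    command: str

def run(cmd: str, *, shell: bool = False, timeout: int = 15) -> str:
    # optional/config parameters are keyword-only, per the rules above
    return f"{cmd} (shell={shell}, timeout={timeout})"

payload: HookInput = {"tool_name": "Bash", "command": "ls"}
print(run(payload["command"], timeout=5))  # → ls (shell=False, timeout=5)
```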

Settings

If you're not using a sandbox or devcontainer, you may want to block some evil commands in your ~/.claude/settings.json.

{
  "cleanupPeriodDays": 365,
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1",
  },
  "permissions": {
    "deny": [
      "Read(~/.aws/**)",
      "Read(~/.config/**)",
      "Read(~/.docker/**)",
      "Read(~/.dropbox/**)",
      "Read(~/.gnupg/**)",
      "Read(~/.gsutil/**)",
      "Read(~/.kube/**)",
      "Read(~/.npmrc)",
      "Read(~/.orbstack/**)",
      "Read(~/.pypirc)",
      "Read(~/.ssh/**)",
      "Read(~/*history*)",
      "Read(~/**/*credential*)",
      "Read(~/Library/**)",
      "Write(~/Library/**)",
      "Edit(~/Library/**)",
      "Read(~/Dropbox/**)",
      "Write(~/Dropbox/**)",
      "Edit(~/Dropbox/**)",
      "Read(//etc/**)",
      "Write(//etc/**)",
      "Edit(//etc/**)",
      "Bash(su:*)",
      "Bash(sudo:*)",
      "Bash(passwd:*)",
      "Bash(env:*)",
      "Bash(printenv:*)",
      "Bash(history:*)",
      "Bash(fc:*)",
      "Bash(eval:*)",
      "Bash(exec:*)",
      "Bash(rsync:*)",
      "Bash(sftp:*)",
      "Bash(telnet:*)",
      "Bash(socat:*)",
      "Bash(nc:*)",
      "Bash(ncat:*)",
      "Bash(netcat:*)",
      "Bash(nmap:*)",
      "Bash(kill:*)",
      "Bash(killall:*)",
      "Bash(pkill:*)",
      "Bash(chmod:*)",
      "Bash(chown:*)",
      "Bash(chflags:*)",
      "Bash(xattr:*)",
      "Bash(diskutil:*)",
      "Bash(mkfs:*)",
      "Bash(security:*)",
      "Bash(defaults:*)",
      "Bash(launchctl:*)",
      "Bash(osascript:*)",
      "Bash(dscl:*)",
      "Bash(networksetup:*)",
      "Bash(scutil:*)",
      "Bash(systemsetup:*)",
      "Bash(pmset:*)"
    ],
    "ask": [
      "Bash(curl:*)",
      "Bash(wget:*)"
    ]
  },
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 ~/.claude/hooks/guard-bash-paths.py"
          }
        ]
      }
    ]
  },
  "statusLine": {
    "type": "command",
    "command": "python3 ~/.claude/statusline/run.py"
  },
  "enabledPlugins": {
    "gh-cli@trailofbits": true,
    "hal-voice@hal-9000": true,
    "skill-creator@claude-plugins-official": true,
    "superpowers@claude-plugins-official": true
  }
}

However, "deny": ["Read(~/.aws/**)", "Read(~/.kube/**)", ...] alone is not enough, since Claude Code can still read sensitive files through the Bash tool. You can write a simple hook to intercept Bash commands that access blocked files, like this guard-bash-paths.py hook.

Plugins

Claude Code Plugins are simply a way to package skills, commands, agents, hooks, and MCP servers. Distributing them as a plugin has the following advantages:

  • Auto update (versioned releases)
  • Auto hooks configuration (users don't need to edit their ~/.claude/settings.json manually)
  • Skills have a /plugin-name:your-skill-name prefix (no more conflicts)

To install a plugin, you need to add a marketplace first. A marketplace is usually just a GitHub repo. Think of it as a namespace.

claude plugin marketplace add anthropics/claude-plugins-official
claude plugin marketplace add trailofbits/skills
claude plugin marketplace add vinta/hal-9000

# then enter Claude Code to browse plugins
/plugins

Skills

Skills can contain executable scripts and hooks, not just Markdown. Use with caution! When in doubt, have your agent review them first.

Here are skills I use, mostly installed per project:

# my skills
npx skills add https://github.com/vinta/hal-9000 --skill commit magi-ex second-opinions -g
npx skills add https://github.com/vinta/dear-ai

# writing skills
npx skills add https://github.com/softaworks/agent-toolkit --skill writing-clearly-and-concisely humanizer naming-analyzer
npx skills add https://github.com/hardikpandya/stop-slop
npx skills add https://github.com/shyuan/writing-humanizer

# doc skills
npx skills add https://github.com/upstash/context7 --skill find-docs -g

# backend skills
npx skills add https://github.com/trailofbits/skills --skill modern-python
npx skills add https://github.com/vintasoftware/django-ai-plugins --skill django-expert
npx skills add https://github.com/supabase/agent-skills
npx skills add https://github.com/planetscale/database-skills

# frontend skills
npx skills add https://github.com/vercel-labs/agent-skills
npx skills add https://github.com/vercel-labs/next-skills

# design skills
npx skills add https://github.com/openai/skills --skill frontend-skill
npx skills add https://github.com/pbakaus/impeccable
npx skills add https://github.com/Leonxlnx/taste-skill
npx skills add https://github.com/ibelick/ui-skills
npx skills add https://github.com/raphaelsalaja/userinterface-wiki

# video skills
npx skills add https://github.com/remotion-dev/skills

# browser skills
npx skills add https://github.com/microsoft/playwright-cli
npx skills add https://github.com/vercel-labs/agent-browser

npx skills list -g
npx skills update -g
npx skills remove --all -g

Highlights:

  • /brainstorming from superpowers: When in doubt, start with this skill
  • /writing-skills from superpowers: Use this skill to improve your skills
  • /skill-creator from claude-plugins-official: Use this skill to evaluate your skills
  • /frontend-design from impeccable: The better version of the official /frontend-design skill
  • /office-hours from gstack: The heavy version of /brainstorming! (it collects usage telemetry, so remember to say no)
  • /simplify: Run it often, you will like it
  • /insights: Analyze your Claude Code sessions

You can find more skills on skills.sh.

MCP Servers

You probably don't need any MCP servers if you can do the same thing with CLI + skills.

Playwright MCP

No, you should use the playwright-cli or agent-browser skill instead.

npm install -g @playwright/cli@latest

npm install -g agent-browser
agent-browser install

GitHub MCP

No, you should use the gh command instead.

brew install gh

Trail of Bits' gh-cli plugin is also worth a look, though you should check how it uses hooks to intercept GitHub fetch requests. Quite controversial for a security company.

Codex MCP

Yes, ironically. Other coding agents like Claude Code can use Codex via MCP, which is slightly more stable than invoking it with codex exec via CLI.

# Codex reads your local .codex/config.toml by default
claude mcp add codex --scope user -- codex mcp-server

# You can still override some configs
claude mcp add codex --scope user -- codex -m gpt-5.3-codex-spark -c model_reasoning_effort="medium" mcp-server

Some Other Tips

Command Aliases

# in ~/.zshrc
alias cc="claude --teammate-mode tmux"
alias ccc="claude --continue --teammate-mode tmux"
alias cct='tmux -CC new-session -s "claude-$(date +%s)" claude --teammate-mode tmux'
alias ccy="claude --teammate-mode tmux --dangerously-skip-permissions"
ccp() { claude --no-chrome --no-session-persistence -p "$*"; }

Use ccp for ad-hoc prompts:

ccp "commit"
ccp "list all .md in this repo"

Customize Your Statusline

Claude Code has a customizable statusline at the bottom of the terminal. You can run any script that outputs text.

Mine shows the current model, the current working folder, the git branch, and a grammar-corrected version of my last prompt (because my English needs all the help it can get). The grammar correction runs an ad-hoc claude command inside the statusline script.

[Screenshot: Claude Code statusline with English grammar check]
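A stripped-down statusline script looks like this. Claude Code pipes session info as JSON on stdin; the field names here match the statusline docs but treat them as assumptions, and the grammar-check call is omitted:

```python
#!/usr/bin/env python3
import json
import sys

def render(payload: dict) -> str:
    # pick a few fields out of the JSON Claude Code provides on stdin
    model = payload.get("model", {}).get("display_name", "?")
    cwd = payload.get("workspace", {}).get("current_dir", "?")
    return f"[{model}] {cwd}"

# demo with a hand-written payload; the real script does: print(render(json.load(sys.stdin)))
print(render({"model": {"display_name": "Opus"}, "workspace": {"current_dir": "~/code"}}))
# → [Opus] ~/code
```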

Run Ad-Hoc Claude Commands Inside Scripts

You can invoke claude as a one-shot CLI tool from hooks, statusline scripts, CI, or anywhere else. The trick is using the right flags to get a clean, isolated call with zero side effects:

cmd = """
    claude
    --model haiku
    --max-turns 1
    --setting-sources ""
    --tools ""
    --disable-slash-commands
    --no-session-persistence
    --no-chrome
    --print
"""

result = subprocess.run(
    [*shlex.split(cmd), your_prompt],
    capture_output=True,
    text=True,
    timeout=15,
    cwd="/tmp",
)

What each flag does:

  • --setting-sources "": don't load hooks (avoids infinite recursion if called from a hook)
  • --no-session-persistence and cwd="/tmp": avoid polluting your current context
  • --tools "": no file access, no bash, pure text in/out
  • --no-chrome: skip the Chrome integration

Multi-Model Second Opinions

You can get independent code reviews or brainstorming input from other model families (Codex, Gemini) without leaving Claude Code. I have two skills for this:

  • magi-ex: Evangelion's MAGI system as a brainstorming panel. Three personas (Scientist/Opus, Mother/Codex, Woman/Gemini) deliberate in parallel
  • second-opinions: Asks Codex and/or Gemini to review code, plans, or docs, then synthesizes their feedback

This works because each model family has different training biases. Claude might miss something Codex catches, and vice versa. It's especially useful for architecture decisions and "what should I build next" brainstorming.

Cloudflare Quick Tunnel (TryCloudflare)

Expose your local server to the Internet with one cloudflared command (just like ngrok). No account registration needed, no installation required (via docker run), and free.

# assume your local server is at http://localhost:3000 on the host machine;
# inside the cloudflared container, host.docker.internal points back to the host
docker run --rm -it cloudflare/cloudflared tunnel --url http://host.docker.internal:3000

# if you have cloudflared installed locally instead, plain localhost works
cloudflared tunnel --url http://localhost:3000

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/do-more-with-tunnels/trycloudflare/

You will see something like this in console:

+--------------------------------------------------------------------------------------------+
|  Your quick Tunnel has been created! Visit it at (it may take some time to be reachable):  |
|  https://YOUR_RANDOM_QUICK_TUNNEL_NAME.trycloudflare.com                                   |
+--------------------------------------------------------------------------------------------+
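If you're scripting around Quick Tunnel, you can scrape that URL out of cloudflared's output. A sketch; the log format is not a stable API:

```python
import re

# the assigned hostname is always a *.trycloudflare.com URL in the startup banner
QUICK_TUNNEL_URL = re.compile(r"https://[a-z0-9-]+\.trycloudflare\.com")

def extract_quick_tunnel_url(log_text: str) -> "str | None":
    match = QUICK_TUNNEL_URL.search(log_text)
    return match.group(0) if match else None

sample = "Visit it at (it may take some time to be reachable): https://random-words-here.trycloudflare.com"
print(extract_quick_tunnel_url(sample))  # → https://random-words-here.trycloudflare.com
```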

Then you're all set.

1Password CLI: How NOT to Store Plaintext AWS Credentials or .env on Localhost

No More ~/.aws/credentials

According to AWS security best practices, human users should access AWS services using short-term credentials provided by IAM Identity Center. Long-term credentials ("Access Key ID" and "Secret Access Key") created by IAM users should be avoided, especially since they are often stored in plaintext on disk: ~/.aws/credentials.

However, if you somehow have to use AWS access keys but want an extra layer of protection, 1Password CLI can help.

ref:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
https://developer.1password.com/docs/cli/get-started

First, delete your local plaintext AWS credentials. Don't worry, you can generate new ones any time in the AWS Management Console.

rm ~/.aws/credentials

Re-create the aws-cli configuration file, but DO NOT provide any credentials.

aws configure

AWS Access Key ID [None]: JUST PRESS ENTER, DO NOT TYPE ANYTHING
AWS Secret Access Key [None]: JUST PRESS ENTER, DO NOT TYPE ANYTHING
Default region name [None]: ap-northeast-1
Default output format [None]: json

Edit ~/.aws/credentials:

[your-profile-name]
credential_process = sh -c "op item get 'AWS Access Key' --account=my.1password.com --vault=Private --format=json --fields label=AccessKeyId,label=SecretAccessKey | jq 'map({key: .label, value: .value}) | from_entries + {Version: 1}'"

The magic is credential_process, which sources AWS credentials from an external process: 1Password CLI's op item get command.

The one-liner script assumes you have an item named AWS Access Key in a vault named Private in 1Password, and the item has the following fields:

  • AccessKeyId
  • SecretAccessKey
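If the jq one-liner feels opaque, here's the same transform sketched in Python; you could point credential_process at a script like this instead. The input shape is an assumption based on op's --format=json output (a list of label/value objects):

```python
import json

def to_credential_process_output(op_fields: list) -> dict:
    # `op ... --format=json --fields label=AccessKeyId,label=SecretAccessKey`
    # returns a list of {"label": ..., "value": ...}; aws-cli wants this object instead
    creds = {field["label"]: field["value"] for field in op_fields}
    return {
        "Version": 1,
        "AccessKeyId": creds["AccessKeyId"],
        "SecretAccessKey": creds["SecretAccessKey"],
    }

# demo with a hand-written payload; the real script would json.loads() op's stdout
sample = [
    {"label": "AccessKeyId", "value": "AKIAEXAMPLE"},
    {"label": "SecretAccessKey", "value": "example-secret"},
]
print(json.dumps(to_credential_process_output(sample)))
```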

ref:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html
https://developer.1password.com/docs/cli/reference/management-commands/item#item-get

That's it.

When you run aws-cli commands or access AWS services from your code via aws-sdk, your terminal will prompt you to unlock 1Password with biometrics to source AWS credentials (once per terminal session). No more plaintext AWS access keys on localhost!

# aws-cli
aws s3 ls --profile=perp
aws logs tail --profile=perp --region=ap-northeast-1 /aws/containerinsights/perp-staging/application --follow

# aws-sdk
AWS_PROFILE=perp OTHER_ENV=123 ts-node src/index.ts

# serverless v4 supports credential_process by default
# serverless v3 requires installing a plugin: serverless-better-credentials
# https://github.com/thomasmichaelwallace/serverless-better-credentials
sls deploy --stage=staging --aws-profile=perp

# if you're using serverless-offline, you might need to add the following configs to serverless.yml
custom:
  serverless-offline:
    useInProcess: true

It's worth noting that if you prefer not to use 1Password, there is also a tool called aws-vault which can achieve a similar goal.

ref:
https://github.com/99designs/aws-vault

No More .env

If you would like to store .env file entirely in 1Password, try 1Password Environments.

ref:
https://developer.1password.com/docs/environments
https://developer.1password.com/docs/environments/local-env-file

sysctl: Linux System Tweaking

sysctl is a command-line tool to modify kernel parameters at runtime in Linux.

ref:
http://man7.org/linux/man-pages/man8/sysctl.8.html

Usage

List All Parameters

$ sudo sysctl -a
$ sudo sysctl -a | grep tcp

The parameters available are those listed under /proc/sys/.

$ cat /proc/sys/net/core/somaxconn
1024
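That name-to-path mapping is mechanical: dots in the parameter name become slashes under /proc/sys. A sketch (keys whose last component itself contains a dot are an edge case this ignores):

```python
from pathlib import Path

def sysctl_path(name: str) -> Path:
    # dotted sysctl names map directly onto /proc/sys paths
    return Path("/proc/sys") / name.replace(".", "/")

print(sysctl_path("net.core.somaxconn"))  # → /proc/sys/net/core/somaxconn
# on Linux you could then read it: sysctl_path("net.core.somaxconn").read_text().strip()
```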

Show the Entry of a Specified Parameter

$ sudo sysctl net.core.somaxconn
net.core.somaxconn = 1024

Show the Value of a Specified Parameter

$ sysctl -n net.core.somaxconn
1024

Change a Specified Parameter

# Elasticsearch (requires at least 262144)
$ sudo sysctl -w vm.max_map_count=262144

# Redis
$ sudo sysctl -w vm.overcommit_memory=1

ref:
https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
https://redis.io/topics/admin

Persistence

sysctl -w only modifies parameters at runtime; the changes revert to their defaults after a reboot. Write those settings in /etc/sysctl.conf to persist them.

# Do less swapping
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2

# Prevents SYN DOS attacks. Applies to ipv6 as well, despite name.
net.ipv4.tcp_syncookies = 1

# Prevents ip spoofing.
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1

# Only groups within this id range can use ping.
net.ipv4.ping_group_range = 999 59999

# Redirects can potentially be used to maliciously alter hosts routing tables.
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 1
net.ipv6.conf.all.accept_redirects = 0

# The source routing feature includes some known vulnerabilities.
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0

# See RFC 1337
net.ipv4.tcp_rfc1337 = 1

# Enable IPv6 Privacy Extensions (see RFC4941 and RFC3041)
net.ipv6.conf.default.use_tempaddr = 2
net.ipv6.conf.all.use_tempaddr = 2

# Restarts computer after 120 seconds after kernel panic
kernel.panic = 120

# Users should not be able to create soft or hard links to files which they do not own. This mitigates several privilege escalation vulnerabilities.
fs.protected_hardlinks = 1
fs.protected_symlinks = 1

ref:
https://blog.runcloud.io/how-to-secure-your-linux-server/
https://www.percona.com/blog/2019/02/25/mysql-challenge-100k-connections/
https://www.nginx.com/blog/tuning-nginx/

Activate parameters from the configuration file.

$ sudo sysctl -p

Troubleshooting

OS error code 24: Too many open files

$ sudo vim /etc/sysctl.conf
fs.file-max = 601017

$ sudo sysctl -p

$ sudo vim /etc/security/limits.d/nofile.conf
* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535

$ ulimit -n 65535

OS error code 99: Cannot assign requested address

Seen with MySQL, when no local ports are left for new outbound connections. You might need to set net.ipv4.tcp_tw_reuse = 1 instead of net.ipv4.tcp_tw_recycle = 1 (tcp_tw_recycle is unsafe behind NAT and was removed in Linux 4.12).

$ sudo vim /etc/sysctl.conf
net.ipv4.tcp_tw_reuse = 1

$ sudo sysctl -p

ref:
https://www.percona.com/blog/2014/12/08/what-happens-when-your-application-cannot-open-yet-another-connection-to-mysql/
https://stackoverflow.com/questions/6426253/tcp-tw-reuse-vs-tcp-tw-recycle-which-to-use-or-both

Parameters are missing from sysctl -a or /proc/sys

Sometimes you might find some parameters are not in sysctl -a or /proc/sys.

You can find them in /sys:

$ echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
$ echo "never" > /sys/kernel/mm/transparent_hugepage/defrag

$ cat /sys/kernel/mm/transparent_hugepage/enabled

To persist them:

$ vim /etc/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
   echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
fi

$ systemctl enable rc-local

If /etc/rc.local doesn't exist, create one and run chmod +x /etc/rc.local (systemd's rc-local service requires it to be executable).

ref:
https://redis.io/topics/admin
https://unix.stackexchange.com/questions/99154/disable-transparent-hugepages