Cloudflare Quick Tunnel

Expose your local server to the Internet with a single cloudflared command, just like ngrok: no installation, no account registration, and it's free.

# assume your local server is at http://localhost:3000
docker run --rm -it cloudflare/cloudflared tunnel --url http://host.docker.internal:3000
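
cloudflared prints a randomly generated *.trycloudflare.com URL that forwards to your local server. If you already have cloudflared installed locally (for example via a package manager), you can skip Docker; a minimal sketch, assuming the same server on port 3000:

# no Docker: run the quick tunnel directly against localhost
cloudflared tunnel --url http://localhost:3000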

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/do-more-with-tunnels/trycloudflare/

GKE Autopilot Cluster: Pay for Pods, Not Nodes

If you're already on Google Cloud, a GKE Autopilot cluster is highly recommended: you only pay for the resources requested by your pods (system pods and unused node capacity are free in Autopilot clusters). No need to pay for surplus node pools anymore! Plus, the entire cluster applies Google's best practices by default.

ref:
https://cloud.google.com/kubernetes-engine/pricing#compute

Create an Autopilot Cluster

DO NOT enable Private Nodes; otherwise you MUST pay for a Cloud NAT gateway (~$32/month) so the nodes can access the internet (to pull images, etc.).

# create
gcloud container clusters create-auto my-auto-cluster \
  --project YOUR_PROJECT_ID \
  --region us-west1

# connect
gcloud container clusters get-credentials my-auto-cluster \
  --project YOUR_PROJECT_ID \
  --region us-west1

You can update some of these configurations later in the Google Cloud Console.

ref:
https://docs.cloud.google.com/sdk/gcloud/reference/container/clusters/create-auto

Autopilot mode works in both Autopilot and Standard clusters. You don't necessarily need to create a new Autopilot cluster; you can simply deploy your pods in Autopilot mode as long as your Standard cluster meets the requirements:

gcloud container clusters check-autopilot-compatibility my-cluster \
  --project YOUR_PROJECT_ID \
  --region us-west1

ref:
https://cloud.google.com/kubernetes-engine/docs/concepts/about-autopilot-mode-standard-clusters
https://cloud.google.com/kubernetes-engine/docs/how-to/autopilot-classes-standard-clusters

Deploy Workloads in Autopilot Mode

The only thing you need to do is add one magical config: nodeSelector: cloud.google.com/compute-class: "autopilot". That's it. You don't need to create or manage any node pools beforehand, just write some YAMLs and kubectl apply. All workloads with cloud.google.com/compute-class: "autopilot" will run in Autopilot mode.

More importantly, you are only billed for the CPU/memory resources your pods request, not for nodes that may have unused capacity or system pods (those running under the kube-system namespace). Autopilot mode is both cost-efficient and developer-friendly.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "autopilot"
      containers:
        - name: nginx
          image: nginx:1.29.3
          ports:
            - name: http
              containerPort: 80
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi

If your workloads are fault-tolerant (stateless), you can use Spot instances to save a significant amount of money. Just change the nodeSelector to autopilot-spot:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/compute-class: "autopilot-spot"
      terminationGracePeriodSeconds: 25 # spot instances have a 25s warning before preemption

ref:
https://docs.cloud.google.com/kubernetes-engine/docs/how-to/autopilot-classes-standard-clusters

You will see something like this in your Autopilot cluster:

kubectl get nodes
NAME                             STATUS   ROLES    AGE     VERSION
gk3-my-auto-cluster-nap-xxx      Ready    <none>   2d18h   v1.33.5-gke.1201000
gk3-my-auto-cluster-nap-xxx      Ready    <none>   1d13h   v1.33.5-gke.1201000
gk3-my-auto-cluster-pool-1-xxx   Ready    <none>   86m     v1.33.5-gke.1201000

The nap nodes are auto-provisioned by Autopilot for your workloads, while pool-1 is a default node pool created during cluster creation. System pods may run on either, but in an Autopilot cluster you are never billed for the nodes themselves (neither nap nor pool-1), nor for the system pods. You only pay for the resources requested by your application pods.
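
To see which compute class each node was provisioned for, you can print the label that the nodeSelector above targets; a quick sketch:

kubectl get nodes -L cloud.google.com/compute-class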

FYI, the minimum resources for Autopilot workloads are:

  • CPU: 50m
  • Memory: 52Mi

Additionally, Autopilot applies the following default resource requests if not specified:

  • Containers in DaemonSets
    • CPU: 50m
    • Memory: 100Mi
    • Ephemeral storage: 100Mi
  • All other containers
    • Ephemeral storage: 1Gi
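
As a reference point, the smallest billable pod looks roughly like this; a minimal sketch (the name is a placeholder and the image is reused from the example above):

apiVersion: v1
kind: Pod
metadata:
  name: tiny-app
spec:
  nodeSelector:
    cloud.google.com/compute-class: "autopilot"
  containers:
    - name: app
      image: nginx:1.29.3
      resources:
        requests:
          cpu: 50m     # Autopilot minimum CPU
          memory: 52Mi # Autopilot minimum memory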

ref:
https://docs.cloud.google.com/kubernetes-engine/docs/concepts/autopilot-resource-requests

Exclude Prometheus Metrics

You may see Prometheus Samples Ingested in your billing. If you don't need (or don't care about) Prometheus metrics for observability, you can exclude them:

  • Go to Google Cloud Console -> Monitoring -> Metrics Management -> Excluded Metrics -> Metrics Exclusion
  • If you want to exclude all:
    • prometheus.googleapis.com/.*
  • If you only want to exclude some:
    • prometheus.googleapis.com/container_.*
    • prometheus.googleapis.com/kubelet_.*

It's worth noting that excluding Prometheus metrics won't affect your HorizontalPodAutoscaler (HPA), which uses the Metrics Server instead.

ref:
https://console.cloud.google.com/monitoring/metrics-management/excluded-metrics
https://docs.cloud.google.com/stackdriver/docs/managed-prometheus/cost-controls

Stop Paying for Kubernetes Load Balancers: Use Cloudflare Tunnel Instead

To expose services in a Kubernetes cluster, you typically need an Ingress backed by a cloud provider's load balancer, and often a NAT gateway as well. For small projects, these costs add up fast (though some may argue small projects shouldn't use Kubernetes at all).

What if you could ditch the Ingress, Load Balancer, and Public IP entirely? Enter Cloudflare Tunnel (by the way, it costs $0).

How Cloudflare Tunnel Works

Cloudflare Tunnel relies on a lightweight daemon called cloudflared that runs inside your cluster and establishes secure, persistent outbound connections to Cloudflare's global network (edge servers). Instead of accepting inbound connections, your origin runs cloudflared to dial out to Cloudflare, creating a bidirectional tunnel that lets Cloudflare route requests to your private services while all direct inbound access to your origin servers stays blocked.

So basically Cloudflare Tunnel acts as a reverse proxy that routes traffic from Cloudflare edge servers to your private services: Internet -> Cloudflare Edge Server -> Tunnel -> cloudflared -> Service -> Pod.

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/

Create a Tunnel

A tunnel is a logical connection that links your origin to Cloudflare's global network (Cloudflare edge servers) through secure, persistent outbound connections.

  • Go to https://one.dash.cloudflare.com/ -> Networks -> Connectors -> Create a tunnel -> Select cloudflared
  • Tunnel name: your-tunnel-name
  • Choose an operating system: Docker

Instead of running any of the installation commands, simply copy the token (it starts with eyJ...). We will use it later.
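
If you'd rather not paste the token into the YAML manifest shown in the deployment step below, you can create the Kubernetes Secret from it right away; a sketch (the secret name must match what the Deployment references, and you would then drop the Secret object from the manifest):

kubectl create secret generic cloudflared-tunnel-token \
  --from-literal=token=eyJ...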

ref:
https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/deployment-guides/kubernetes/

Configure Published Application Routes

First of all, make sure you host your domains on Cloudflare, so the following setup can update your domain's DNS records automatically.

Assume you have the following Services in your Kubernetes cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-blog
spec:
  selector:
    app: my-blog
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: http

You need to configure your published application routes based on your Services, for instance:

  • Route 1:
    • Domain: example.com
    • Path: blog
    • Type: HTTP
    • URL: my-blog.default:80 => format: your-service.your-namespace:your-service-port
  • Route 2:
    • Domain: example.com
    • Path: (leave it blank)
    • Type: HTTP
    • URL: frontend.default:80 => format: your-service.your-namespace:your-service-port
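
The URL field is just the Service's in-cluster DNS name (your-service.your-namespace) plus the Service port, which is exactly what cloudflared will dial. You can sanity-check a route target from inside the cluster with a throwaway pod; a sketch:

kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
  wget -qO- http://my-blog.default:80/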

Deploy cloudflared to Kubernetes

We will deploy cloudflared as a Deployment in Kubernetes. It acts as a connector that routes traffic from Cloudflare's global network directly to your private services. You don't need to expose any of your services to the public Internet.

apiVersion: v1
kind: Secret
metadata:
  name: cloudflared-tunnel-token
stringData:
  token: YOUR_TUNNEL_TOKEN
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tunnel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tunnel
  template:
    metadata:
      labels:
        app: tunnel
    spec:
      terminationGracePeriodSeconds: 25
      nodeSelector:
        cloud.google.com/compute-class: "autopilot-spot"
      securityContext:
        sysctls:
          # Allows ICMP traffic (ping, traceroute) to resources behind cloudflared
          - name: net.ipv4.ping_group_range
            value: "65532 65532"
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          command:
            - cloudflared
            - tunnel
            - --no-autoupdate
            - --loglevel
            - debug
            - --metrics
            - 0.0.0.0:2000
            - run
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflared-tunnel-token
                  key: token
          livenessProbe:
            httpGet:
              # Cloudflared has a /ready endpoint which returns 200 if and only if it has an active connection to Cloudflare's network
              path: /ready
              port: 2000
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/cloudflared-parameters/run-parameters/

kubectl apply -f cloudflared/deployment.yml

That's it! Check the Cloudflare dashboard, and you should see your tunnel status as HEALTHY.
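
Before (or instead of) opening the dashboard, you can verify the rollout from the cluster side; a quick sketch:

kubectl rollout status deployment/tunnel
kubectl get pods -l app=tunnel
kubectl logs deployment/tunnel --tail=50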

You can now safely delete your Ingress and the underlying load balancer. You don't need them anymore. Enjoy your secure, cost-effective cluster!
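
For example, assuming your old Ingress was named my-ingress (a hypothetical name), deleting it should also cause GKE to tear down the load balancer it provisioned for it:

kubectl delete ingress my-ingress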

1Password CLI: How NOT to Store Plaintext AWS Credentials or .env on Localhost

No More ~/.aws/credentials

According to AWS security best practices, human users should access AWS services using short-term credentials provided by IAM Identity Center. Long-term credentials (an "Access Key ID" and "Secret Access Key") created for IAM users should be avoided, especially since they are often stored in plaintext on disk at ~/.aws/credentials.

However, if you somehow have to use AWS access keys but want an extra layer of protection, 1Password CLI can help.

ref:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
https://developer.1password.com/docs/cli/get-started

First, delete your local plaintext AWS credentials. Don't worry, you can generate new ones at any time in the AWS Management Console.

rm -f ~/.aws/credentials

Re-create the aws-cli configuration file, but DO NOT provide any credentials:

aws configure

AWS Access Key ID [None]: JUST PRESS ENTER, DO NOT TYPE ANYTHING
AWS Secret Access Key [None]: JUST PRESS ENTER, DO NOT TYPE ANYTHING
Default region name [None]: ap-northeast-1
Default output format [None]: json

Edit ~/.aws/credentials:

[your-profile-name]
credential_process = sh -c "op item get 'AWS Access Key' --account=my.1password.com --vault=Private --format=json --fields label=AccessKeyId,label=SecretAccessKey | jq 'map({key: .label, value: .value}) | from_entries + {Version: 1}'"

The magic is credential_process, which sources AWS credentials from an external process: 1Password CLI's op item get command.

The one-liner script assumes you have an item named AWS Access Key in a vault named Private in 1Password, and that the item has the following fields:

  • AccessKeyId
  • SecretAccessKey
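
You can sanity-check both halves before relying on them: run the op/jq pipeline on its own, then ask AWS who you are through the new profile; a sketch, assuming the profile is named your-profile-name:

# should print {"AccessKeyId": "...", "SecretAccessKey": "...", "Version": 1}
op item get 'AWS Access Key' --account=my.1password.com --vault=Private --format=json --fields label=AccessKeyId,label=SecretAccessKey | jq 'map({key: .label, value: .value}) | from_entries + {Version: 1}'

# should print your account and IAM identity (after a 1Password unlock prompt)
aws sts get-caller-identity --profile your-profile-name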

ref:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html
https://developer.1password.com/docs/cli/reference/management-commands/item#item-get

That's it.

When you run aws-cli commands or access AWS services from your code via aws-sdk, your terminal will prompt you to unlock 1Password with biometrics to source AWS credentials (once per terminal session). No more plaintext AWS access keys on localhost!

# aws-cli
aws s3 ls --profile=perp
aws logs tail --profile=perp --region=ap-northeast-1 /aws/containerinsights/perp-staging/application --follow

# aws-sdk
AWS_PROFILE=perp OTHER_ENV=123 ts-node src/index.ts

# serverless v4 supports credential_process by default
# serverless v3 requires installing a plugin: serverless-better-credentials
# https://github.com/thomasmichaelwallace/serverless-better-credentials
sls deploy --stage=staging --aws-profile=perp

# if you're using serverless-offline, you might need to add the following configs to serverless.yml
custom:
  serverless-offline:
    useInProcess: true

It's worth noting that if you prefer not to use 1Password, there is also a tool called aws-vault which can achieve a similar goal.

ref:
https://github.com/99designs/aws-vault

No More .env

If you would like to store your .env files entirely in 1Password, try 1Password Environments.
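
If you're on the CLI rather than the desktop app's Environments feature, a closely related approach is op run with secret references; a sketch (the vault/item/field names here are hypothetical):

# .env contains references instead of real values, e.g.
# DATABASE_URL=op://Private/my-app/database-url

# resolve the references at runtime and inject them as environment variables
op run --env-file=.env -- ts-node src/index.ts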

ref:
https://developer.1password.com/docs/environments
https://developer.1password.com/docs/environments/local-env-file

Solidity: call() vs delegatecall()

tl;dr: delegatecall runs in the context of the caller contract.

The difference between call and delegatecall in Solidity relates to the execution context:

  • target.call(funcData):
    • the function reads/modifies target contract's storage
    • msg.sender is the caller contract
  • target.delegatecall(funcData):
    • the function reads/modifies caller contract's storage
    • msg.sender is the original sender == caller contract's msg.sender

// SPDX-License-Identifier: GPL-3.0-or-later
pragma solidity 0.8.24;

import "forge-std/Test.sol";

contract Target {
    address public owner;
    uint256 public value;

    function setOwnerAndValue(uint256 valueArg) public {
        owner = msg.sender;
        value = valueArg;
    }
}

contract Caller {
    address public owner;
    uint256 public value;

    function callSetOwnerAndValue(address target, uint256 valueArg) public {
        (bool success, ) = target.call(abi.encodeWithSignature("setOwnerAndValue(uint256)", valueArg));
        require(success, "call failed");
    }

    function delegatecallSetOwnerAndValue(address target, uint256 valueArg) public {
        (bool success, ) = target.delegatecall(abi.encodeWithSignature("setOwnerAndValue(uint256)", valueArg));
        require(success, "delegatecall failed");
    }
}

contract MyTest is Test {
    address sender = makeAddr("sender");
    Target target;
    Caller caller;

    function setUp() public {
        target = new Target();
        caller = new Caller();

        assertEq(target.owner(), address(0));
        assertEq(target.value(), 0);
        assertEq(caller.owner(), address(0));
        assertEq(caller.value(), 0);
    }

    function test_callSetOwnerAndValue() public {
        vm.prank(sender);
        caller.callSetOwnerAndValue(address(target), 100);

        // call modifies target contract's state, and target contract's msg.sender is caller contract
        assertEq(target.owner(), address(caller));
        assertEq(target.value(), 100);

        // caller contract's state didn't change
        assertEq(caller.owner(), address(0));
        assertEq(caller.value(), 0);
    }

    function test_delegatecallSetOwnerAndValue() public {
        vm.prank(sender);
        caller.delegatecallSetOwnerAndValue(address(target), 200);

        // target contract's state didn't change
        assertEq(target.owner(), address(0));
        assertEq(target.value(), 0);

        // delegatecall runs in the context of caller contract, so msg.sender is sender
        assertEq(caller.owner(), sender);
        assertEq(caller.value(), 200);
    }
}

ref:
https://medium.com/0xmantle/solidity-series-part-3-call-vs-delegatecall-8113b3c76855