Stop Paying for Kubernetes Load Balancers: Use Cloudflare Tunnel Instead

To expose services in a Kubernetes cluster, you typically need an Ingress backed by a cloud provider's load balancer, and often a NAT Gateway as well. For small projects, these costs add up fast (though some may argue that small projects shouldn't use Kubernetes at all).

What if you could ditch the Ingress, load balancer, and public IP entirely? Enter Cloudflare Tunnel (which, by the way, costs $0).

How Cloudflare Tunnel Works

Cloudflare Tunnel relies on a lightweight daemon called cloudflared that runs within your cluster to establish secure, persistent outbound connections to Cloudflare's global network (edge servers). Instead of opening inbound firewall ports or configuring public IP addresses, cloudflared initiates traffic from inside your cluster to the Cloudflare edge servers. This outbound-only model creates a bidirectional tunnel that allows Cloudflare to route requests to your private services while blocking all direct inbound access to your origin servers.

So basically Cloudflare Tunnel acts as a reverse proxy that routes traffic from Cloudflare edge servers to your private services: Internet -> Cloudflare Edge Server -> Tunnel -> cloudflared -> Service -> Pod.
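To make the outbound-only model concrete, here is a toy sketch in Python (this is NOT Cloudflare's actual protocol, just an illustration of the idea): a "connector" dials out to an "edge" server, and the edge relays a public client's request down that pre-established connection, so the private service never accepts an inbound connection from outside.

```python
import socket
import threading

def run_demo() -> bytes:
    """Toy model of the outbound-only tunnel: the connector dials OUT to the
    edge; the edge never opens a connection into the origin network."""
    svc = socket.socket()   # stands in for a private Service/Pod
    svc.bind(("127.0.0.1", 0)); svc.listen(1)
    edge = socket.socket()  # stands in for a Cloudflare edge server
    edge.bind(("127.0.0.1", 0)); edge.listen(2)
    svc_port, edge_port = svc.getsockname()[1], edge.getsockname()[1]
    dialed = threading.Event()

    def service():
        conn, _ = svc.accept()             # only the connector ever reaches us
        conn.recv(1024)                    # the forwarded request arrives
        conn.sendall(b"hello from pod")
        conn.close()

    def connector():                       # plays the role of cloudflared
        tunnel = socket.socket()
        tunnel.connect(("127.0.0.1", edge_port))   # outbound-only dial
        dialed.set()
        req = tunnel.recv(1024)            # edge pushes a request through
        origin = socket.socket()
        origin.connect(("127.0.0.1", svc_port))
        origin.sendall(req)
        tunnel.sendall(origin.recv(1024))  # relay the origin's response back
        origin.close(); tunnel.close()

    def edge_server():
        tunnel, _ = edge.accept()          # 1) the connector's outbound dial
        client, _ = edge.accept()          # 2) a public client arrives
        tunnel.sendall(client.recv(1024))  # Internet -> edge -> tunnel
        client.sendall(tunnel.recv(1024))  # tunnel -> edge -> Internet
        tunnel.close(); client.close()

    for fn in (service, edge_server, connector):
        threading.Thread(target=fn, daemon=True).start()
    dialed.wait()                          # tunnel is up before clients arrive
    client = socket.socket()
    client.connect(("127.0.0.1", edge_port))
    client.sendall(b"GET /")
    resp = client.recv(1024)
    client.close(); svc.close(); edge.close()
    return resp

print(run_demo().decode())  # hello from pod
```

The point of the sketch: the service socket is bound to loopback and is only ever contacted by the connector, yet the client still gets a response, because the connector initiated the tunnel from the inside.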

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/

Create a Tunnel

A tunnel is a logical link between your origin and Cloudflare's global network (the edge servers), carried over secure, persistent outbound connections.

  • Go to https://one.dash.cloudflare.com/ -> Networks -> Connectors -> Create a tunnel -> Select cloudflared
  • Tunnel name: your-tunnel-name
  • Choose an operating system: Docker

Instead of running any installation command, simply copy the token (starts with eyJ...). We will use it later.

ref:
https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/deployment-guides/kubernetes/

Configure Published Application Routes

First of all, make sure you host your domains on Cloudflare, so the following setup can update your domain's DNS records automatically.

Assume you have the following Services in your Kubernetes cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-blog
spec:
  selector:
    app: my-blog
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http

You need to configure your published application routes based on your Services, for instance:

  • Route 1:
    • Domain: example.com
    • Path: blog
    • Type: HTTP
    • URL: my-blog.default:80 => format: your-service.your-namespace:your-service-port
  • Route 2:
    • Domain: example.com
    • Path: (leave it blank)
    • Type: HTTP
    • URL: frontend.default:80 => format: your-service.your-namespace:your-service-port

Deploy cloudflared to Kubernetes

We will deploy cloudflared as a Deployment in Kubernetes. It acts as a connector that routes traffic from Cloudflare's global network directly to your private services. You don't need to expose any of your services to the public Internet.

apiVersion: v1
kind: Secret
metadata:
  name: cloudflared-tunnel-token
stringData:
  token: YOUR_TUNNEL_TOKEN
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tunnel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tunnel
  template:
    metadata:
      labels:
        app: tunnel
    spec:
      terminationGracePeriodSeconds: 25
      nodeSelector:
        cloud.google.com/compute-class: "autopilot-spot"
      securityContext:
        sysctls:
          # Allows ICMP traffic (ping, traceroute) to resources behind cloudflared
          - name: net.ipv4.ping_group_range
            value: "65532 65532"
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          command:
            - cloudflared
            - tunnel
            - --no-autoupdate
            - --loglevel
            - debug
            - --metrics
            - 0.0.0.0:2000
            - run
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflared-tunnel-token
                  key: token
          livenessProbe:
            httpGet:
              # Cloudflared has a /ready endpoint which returns 200 if and only if it has an active connection to Cloudflare's network
              path: /ready
              port: 2000
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi

ref:
https://developers.cloudflare.com/cloudflare-one/networks/connectors/cloudflare-tunnel/configure-tunnels/cloudflared-parameters/run-parameters/

kubectl apply -f cloudflared/deployment.yml

That's it! Check the Cloudflare dashboard, and you should see your tunnel status as HEALTHY.

You can now safely delete your Ingress and the underlying load balancer. You don't need them anymore. Enjoy your secure, cost-effective cluster!

1Password CLI: How NOT to Store Plaintext AWS Credentials or .env on Localhost

No More ~/.aws/credentials

According to AWS security best practices, human users should access AWS services using short-term credentials provided by IAM Identity Center. Long-term credentials (an Access Key ID and Secret Access Key) created for IAM users should be avoided, especially since they are often stored in plaintext on disk: ~/.aws/credentials.

However, if you somehow have to use AWS access keys but want an extra layer of protection, 1Password CLI can help.

ref:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
https://developer.1password.com/docs/cli/get-started

First, delete your local plaintext AWS credentials. Don't worry, you can generate new ones at any time in the AWS Management Console.

rm -f ~/.aws/credentials

Re-create the aws-cli configuration file, but DO NOT provide any credentials.

aws configure

AWS Access Key ID [None]: JUST PRESS ENTER, DO NOT TYPE ANYTHING
AWS Secret Access Key [None]: JUST PRESS ENTER, DO NOT TYPE ANYTHING
Default region name [None]: ap-northeast-1
Default output format [None]: json

Edit ~/.aws/credentials:

[your-profile-name]
credential_process = sh -c "op item get \"AWS Access Key\" --account=my.1password.com --vault=Private --format=json --fields label=AccessKeyId,label=SecretAccessKey | jq 'map({key: .label, value: .value}) | from_entries + {Version: 1}'"

The magic is credential_process, which sources AWS credentials from an external process: 1Password CLI's op item get command.

The one-liner script assumes you have an item named AWS Access Key in a vault named Private in 1Password, and that the item has the following fields:

  • AccessKeyId
  • SecretAccessKey
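To see what the jq pipeline produces, here is a Python sketch of the same transformation (with fake values; in reality the input comes from op). The op command with --format=json --fields returns an array of field objects, simplified here to just label and value, and credential_process expects a single JSON object carrying the credentials plus a Version key:

```python
import json

# Simplified stand-in for the output of:
#   op item get "AWS Access Key" --format=json --fields label=AccessKeyId,label=SecretAccessKey
# (fake values, for illustration only)
op_fields = [
    {"label": "AccessKeyId", "value": "AKIAFAKEKEYID"},
    {"label": "SecretAccessKey", "value": "fake/secret/key"},
]

def to_credential_process_output(fields: list) -> str:
    # Same shape as the jq step:
    #   map({key: .label, value: .value}) | from_entries + {Version: 1}
    creds = {f["label"]: f["value"] for f in fields}
    return json.dumps({**creds, "Version": 1})

print(to_credential_process_output(op_fields))
```

aws-cli parses this JSON object from the process's stdout and uses the AccessKeyId/SecretAccessKey pair as if it had been read from the credentials file.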

ref:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html
https://developer.1password.com/docs/cli/reference/management-commands/item#item-get

That's it.

When you run aws-cli commands or access AWS services from your code via aws-sdk, your terminal will prompt you to unlock 1Password with biometrics to source AWS credentials (once per terminal session). No more plaintext AWS access keys on localhost!

# aws-cli
aws s3 ls --profile=perp
aws logs tail --profile=perp --region=ap-northeast-1 /aws/containerinsights/perp-staging/application --follow

# aws-sdk
AWS_PROFILE=perp OTHER_ENV=123 ts-node src/index.ts

# serverless v4 supports credential_process by default
# serverless v3 requires installing a plugin: serverless-better-credentials
# https://github.com/thomasmichaelwallace/serverless-better-credentials
sls deploy --stage=staging --aws-profile=perp

# if you're using serverless-offline, you might need to add the following configs to serverless.yml
custom:
  serverless-offline:
    useInProcess: true

It's worth noting that if you prefer not to use 1Password, there is also a tool called aws-vault that achieves a similar goal.

ref:
https://github.com/99designs/aws-vault

No More .env

If you would like to store your .env files entirely in 1Password, try 1Password Environments.

ref:
https://developer.1password.com/docs/environments
https://developer.1password.com/docs/environments/local-env-file

Solidity: call() vs delegatecall()

tl;dr: delegatecall runs in the context of the caller contract.

The difference between call and delegatecall in Solidity relates to the execution context:

  • target.call(funcData):
    • the function reads/modifies target contract's storage
    • msg.sender is the caller contract
  • target.delegatecall(funcData)
    • the function reads/modifies caller contract's storage
    • msg.sender is the original sender == caller contract's msg.sender

![[Attachments/call.webp]]

![[Attachments/delegatecall.webp]]

// SPDX-License-Identifier: GPL-3.0-or-later
pragma solidity 0.8.24;

import "forge-std/Test.sol";

contract Target {
    address public owner;
    uint256 public value;

    function setOwnerAndValue(uint256 valueArg) public {
        owner = msg.sender;
        value = valueArg;
    }
}

contract Caller {
    address public owner;
    uint256 public value;

    function callSetOwnerAndValue(address target, uint256 valueArg) public {
        (bool success, ) = target.call(abi.encodeWithSignature("setOwnerAndValue(uint256)", valueArg));
        require(success, "call failed");
    }

    function delegatecallSetOwnerAndValue(address target, uint256 valueArg) public {
        (bool success, ) = target.delegatecall(abi.encodeWithSignature("setOwnerAndValue(uint256)", valueArg));
        require(success, "delegatecall failed");
    }
}

contract MyTest is Test {
    address sender = makeAddr("sender");
    Target target;
    Caller caller;

    function setUp() public {
        target = new Target();
        caller = new Caller();

        assertEq(target.owner(), address(0));
        assertEq(target.value(), 0);
        assertEq(caller.owner(), address(0));
        assertEq(caller.value(), 0);
    }

    function test_callSetOwnerAndValue() public {
        vm.prank(sender);
        caller.callSetOwnerAndValue(address(target), 100);

        // call modifies target contract's state, and target contract's msg.sender is caller contract
        assertEq(target.owner(), address(caller));
        assertEq(target.value(), 100);

        // caller contract's state didn't change
        assertEq(caller.owner(), address(0));
        assertEq(caller.value(), 0);
    }

    function test_delegatecallSetOwnerAndValue() public {
        vm.prank(sender);
        caller.delegatecallSetOwnerAndValue(address(target), 200);

        // target contract's state didn't change
        assertEq(target.owner(), address(0));
        assertEq(target.value(), 0);

        // delegatecall runs in the context of caller contract, so msg.sender is sender
        assertEq(caller.owner(), sender);
        assertEq(caller.value(), 200);
    }
}

ref:
https://medium.com/0xmantle/solidity-series-part-3-call-vs-delegatecall-8113b3c76855

Solidity: Multicall - Aggregate Multiple Contract Calls

There are several implementations of the multicall pattern, for example, Multicall3 and Vectorized's Multicaller.

In the following sections, we will use Multicaller as an example to illustrate the process.

The main idea of Multicaller is to aggregate multiple contract function calls into a single one. It's usually used to batch contract reads from off-chain apps, but it can also be used to batch contract writes.

Multiple Contract Reads

import { defaultAbiCoder } from "ethers/lib/utils"

class Liquidator {
    async fetchIsLiquidatableResults(
        marketId: number,
        positions: Position[],
    ) {
        const price = await this.pythService.fetchPythOraclePrice(marketId)

        const targets = new Array(positions.length).fill(this.exchange.address)
        const data = positions.map(position =>
            this.exchange.interface.encodeFunctionData("isLiquidatable", [
                marketId,
                position.account,
                price,
            ]),
        )
        const values = new Array(positions.length).fill(0)

        return await this.multicaller.callStatic.aggregate(targets, data, values)
    }

    async start(marketId: number) {
        const positions = await this.fetchPositions(marketId)
        const results = await this.fetchIsLiquidatableResults(marketId, positions)

        for (const [i, result] of results.entries()) {
            const isLiquidatable = defaultAbiCoder.decode(["bool"], result)[0]
            const position = positions[i]
            console.log(`${position.account} isLiquidatable: ${isLiquidatable}`)
        }
    }
}

ref:
https://github.com/Vectorized/multicaller/blob/main/API.md#aggregate

Multiple Contract Writes

If the target contract needs to read msg.sender, it must be compatible with Multicaller.

// SPDX-License-Identifier: MIT
pragma solidity >=0.8.0;

import { LibMulticaller } from "multicaller/LibMulticaller.sol";

contract MulticallerSenderCompatible {
    function _sender() internal view virtual returns (address) {
        return LibMulticaller.sender();
    }
}

contract Exchange is MulticallerSenderCompatible {
    function openPosition(OpenPositionParams calldata params) external returns (int256, int256) {
        address taker = _sender();
        return _openPositionFor(taker, params);
    }
}

class Bot {
    async openPosition() {
        const targets = [
            this.oracleAdapter.address,
            this.exchange.address,
        ]
        const data = [
            this.oracleAdapter.interface.encodeFunctionData("updatePrice", [priceId, priceData]),
            this.exchange.interface.encodeFunctionData("openPosition", [params]),
        ]
        const values = [
            BigNumber.from(0),
            BigNumber.from(0),
        ]

        // update oracle price first, then open position
        const tx = await this.multicaller.connect(taker).aggregateWithSender(targets, data, values)
        await tx.wait()
    }
}

ref:
https://github.com/Vectorized/multicaller/blob/main/API.md#aggregatewithsender

Solidity: abi.encode() vs abi.encodePacked() vs abi.encodeWithSignature() vs abi.encodeCall()

There are some encode/decode functions in Solidity, for instance:

  • abi.encode() will concatenate all values, padding each value to 32 bytes.
    • To integrate with other contracts, you should use abi.encode().
  • abi.encodePacked() will concatenate all values in the exact byte representations without padding.
    • If you only need to store it, you should use abi.encodePacked() since it's smaller.
  • abi.encodeWithSignature() is mainly used to call functions in another contract.
  • abi.encodeCall() is the type-safe version of abi.encodeWithSignature(); it requires Solidity 0.8.11+.

pragma solidity >=0.8.19;

import { IERC20 } from "openzeppelin-contracts/contracts/token/ERC20/IERC20.sol";
import "forge-std/Test.sol";

contract MyTest is Test {
    function test_abi_encode() public {
        bytes memory result = abi.encode(uint8(1), uint16(2), uint24(3));
        console.logBytes(result);
        // 0x000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000003
        // total 32 bytes * 3 = 96 bytes
    }

    function test_abi_encodePacked() public {
        bytes memory resultPacked = abi.encodePacked(uint8(1), uint16(2), uint24(3));
        console.logBytes(resultPacked);
        // 0x010002000003
        // total 1 byte + 2 bytes + 3 bytes = 6 bytes
    }

    function test_abi_encodeWithSignature() public {
        address weth = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2;
        address vitalik = 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045;
        bytes memory data = abi.encodeWithSignature("balanceOf(address)", vitalik);
        console.logBytes(data);
        (bool success, bytes memory result) = weth.call(data);
        console.logBool(success);
        console.logUint(abi.decode(result, (uint256)));
    }

    function test_abi_encodeCall() public {
        address weth = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2;
        address vitalik = 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045;
        bytes memory data = abi.encodeCall(IERC20.balanceOf, (vitalik));
        console.logBytes(data);
        (bool success, bytes memory result) = weth.call(data);
        console.logBool(success);
        console.logUint(abi.decode(result, (uint256)));
    }
}

forge test --mc "MyTest" -vv --fork-url https://rpc.flashbots.net

ref:
https://github.com/AmazingAng/WTF-Solidity/tree/main/27_ABIEncode
https://trustchain.medium.com/abi-functions-explained-in-solidity-bd93cf88bdf2