Solidity: CREATE vs CREATE2

In the EVM, there are two opcodes for creating contracts: CREATE and CREATE2, and the deployed contract address can be precomputed:

  • the last 20 bytes of keccak256(rlp([deployerAddress, deployerNonce])) if you're using the CREATE opcode
  • the last 20 bytes of keccak256(0xFF ++ deployerAddress ++ salt ++ keccak256(bytecode)) if you're using the CREATE2 opcode

ref:
https://ethereum.stackexchange.com/questions/101336/what-is-the-benefit-of-using-create2-to-create-a-smart-contract

CREATE

CREATE is the default opcode used when deploying smart contracts. If you're deploying a contract using new YourContract() without a salt, you're using CREATE.

The following code, written in TypeScript with ethers.js and hardhat-deploy, shows how to deploy a contract that uses CREATE under the hood, and how to precompute its address:

import { ethers } from "hardhat" // hardhat-ethers with the hardhat-deploy extension (getNamedSigner)

const deployer = await ethers.getNamedSigner("deployer")
const nonce = await deployer.getTransactionCount()
const computedAddress = ethers.utils.getContractAddress({
    from: deployer.address,
    nonce: nonce,
})
console.log(`computed address: ${computedAddress}`)

const oinchAggregationRouterV5 = "0x1111111254EEB25477B68fb85Ed929f73A960582" // 1inch AggregationRouterV5
const ktbArbitrageurFactory = await ethers.getContractFactory("KtbArbitrageur", deployer)
const ktbArbitrageur = await ktbArbitrageurFactory.deploy(oinchAggregationRouterV5)
console.log(`deployed address: ${ktbArbitrageur.address}`)

Though it's pretty inefficient, you can influence the deployed address (to some extent) by increasing the nonce until the computed address meets some conditions you set:

async increaseNonceToDeployUpgradeable(condition: string, targetAddr: string) {
    const { ethers } = this._hre
    const deployer = await ethers.getNamedSigner("deployer")

    // We use deployer's address and nonce to compute a contract's address which deployed with that nonce,
    // to find the nonce that matches the condition
    let nonce = await deployer.getTransactionCount()
    console.log(`Next nonce: ${nonce}`)

    let computedAddress = "0x0"
    let count = 0
    while (
        count < 2 ||
        (condition == "GREATER_THAN"
            ? computedAddress.toLowerCase() <= targetAddr.toLowerCase()
            : computedAddress.toLowerCase() >= targetAddr.toLowerCase())
    ) {
        // Increase the nonce until we find a contract address that matches the condition
        computedAddress = ethers.utils.getContractAddress({
            from: deployer.address,
            nonce: nonce,
        })
        console.log(`Computed address: ${nonce}, ${computedAddress}`)
        nonce += 1
        count += 1
    }

    // When deploying an upgradeable contract,
    // it will deploy the implementation contract first, then deploy the proxy
    // so we need to increase the nonce to "the expected nonce - 1"
    let nextNonce = await deployer.getTransactionCount()
    for (let i = 0; i < count - 1 - 1; i++) {
        nextNonce += 1
        console.log(`Increasing nonce to ${nextNonce}`)
        const tx = await deployer.sendTransaction({
            to: deployer.address,
            value: ethers.utils.parseEther("0"),
        })
        await tx.wait()
    }

    console.log(`Finalized nonce`)
}

ref:
https://docs.ethers.org/v5/api/utils/address/#utils-getContractAddress

CREATE2

The Solidity code below demonstrates how to deploy a contract using CREATE2, which was introduced in EIP-1014 to provide more flexible and predictable address generation:

bytes32 salt = bytes32("perp");
address oinchAggregationRouterV5 = 0x1111111254EEB25477B68fb85Ed929f73A960582;

address computedAddress = address(
    uint256( // in Solidity >= 0.8.0, cast through uint160 instead: address(uint160(uint256(...)))
        keccak256(
            abi.encodePacked(
                bytes1(0xff), // avoid conflict with CREATE
                address(this), // deployer
                salt,
                keccak256(
                    abi.encodePacked(
                        type(KtbArbitrageur).creationCode, // bytecode
                        abi.encode(oinchAggregationRouterV5) // constructor parameter
                    )
                )
            )
        )
    )
);

KtbArbitrageur ktbArbitrageur = new KtbArbitrageur{ salt: salt }(oinchAggregationRouterV5);
console.logAddress(address(ktbArbitrageur));
console.logAddress(computedAddress);

You can change salt to an arbitrary value to produce different contract addresses.

ref:
https://docs.soliditylang.org/en/v0.7.6/control-structures.html#salted-contract-creations-create2

Solidity: calldata, memory, and storage

Variables

There are three types of variables in Solidity:

  • Global variables
    • Provide information about the blockchain
    • For example, block.number, block.timestamp, or msg.sender
  • State variables
    • Declared outside a function
    • Stored on the blockchain
    • Also called "storage"
  • Local variables
    • Declared inside a function
    • Not stored on the blockchain, stored in memory instead
    • Erased between function calls

References:
https://docs.soliditylang.org/en/v0.7.6/units-and-global-variables.html
https://solidity-by-example.org/variables

Data Types

There are two types of data:

  • Value types: YourContract, address, bool, uint256, int256, enum, and bytes32 (fixed-size byte arrays)
  • Reference types: array, mapping, and struct

It's worth noting that bytes and string are dynamically-sized byte arrays and are considered reference types. However, byte (bytes1), bytes2, ..., bytes32 are value types since they're fixed-size byte arrays.

References:
https://docs.soliditylang.org/en/v0.7.6/types.html#value-types
https://docs.soliditylang.org/en/v0.7.6/types.html#reference-types

Data Location

When using a reference type, you must explicitly provide the data location where the type is stored. There are three data locations:

  • storage is where the state variables are stored, and its lifetime is the lifetime of the contract
  • memory means variables are temporary and erased between external function calls
  • calldata behaves mostly like memory, but is immutable

For reference types in function arguments, you must declare them as memory or calldata.

If possible, use calldata as the data location because it avoids copying, reduces gas usage, and ensures that the data cannot be modified. Arrays and structs with calldata data location can also be returned from functions, but it is not possible to allocate such types.

References:
https://docs.soliditylang.org/en/v0.7.6/types.html#data-location
https://medium.com/coinmonks/solidity-storage-vs-memory-vs-calldata-8c7e8c38bce
https://gist.github.com/hrkrshnn/ee8fabd532058307229d65dcd5836ddc

It is also a best practice to use external if you expect that the function will only ever be called externally, and public if you need to call the function internally. The difference between the two is that public function arguments are copied to memory, while external function arguments are read directly from calldata, which is cheaper than memory allocation. external functions are therefore sometimes more efficient when they receive large arrays.

References:
https://medium.com/newcryptoblock/best-practices-in-solidity-b324b65d33b1

Assignment Behaviour

  • Assignments between storage and memory (or from calldata) always create an independent copy.
  • Assignments from memory to memory only create references.
  • Assignments from storage to a local storage variable also only assign a reference.
  • All other assignments to storage always copy.

References:
https://docs.soliditylang.org/en/v0.7.6/types.html#data-location-and-assignment-behaviour

hardhat-deploy: Upgradeable Contracts with Linked Libraries

Library

Assume ContractA imports LibraryA. When deploying ContractA, LibraryA is embedded into ContractA if LibraryA contains only internal functions.

If LibraryA contains at least one external function, LibraryA must be deployed first, and then linked when deploying ContractA.

ref:
https://solidity-by-example.org/library/
https://docs.soliditylang.org/en/v0.7.6/using-the-compiler.html

Foundry

In Foundry tests, Foundry will automatically deploy libraries if they have external functions, so you don't need to explicitly link them.

hardhat-deploy

Whenever the library is changed, hardhat-deploy will deploy a new implementation and upgrade the proxy:

import { DeployFunction } from "hardhat-deploy/dist/types"
import { QuoteVault } from "../typechain-types"

const func: DeployFunction = async function (hre) {
    const { deployments, ethers } = hre

    // deploy library
    await deployments.deploy("PerpFacade", {
        from: deployerAddress,
        contract: "contracts/lib/PerpFacade.sol:PerpFacade",
    })
    const perpFacadeDeployment = await deployments.get("PerpFacade")

    // deploy upgradeable contract
    await deployments.deploy("QuoteVault", {
        from: deployerAddress,
        contract: "contracts/QuoteVault.sol:QuoteVault",
        proxy: {
            owner: multisigOwnerAddress, // ProxyAdmin.owner
            proxyContract: "OpenZeppelinTransparentProxy",
            viaAdminContract: "DefaultProxyAdmin",
            execute: {
                init: {
                    methodName: "initialize",
                    args: [
                        "Kantaban USDC-ETH QuoteVault",
                        "kUSDC-ETH",
                        usdcDecimals,
                        USDC,
                        WETH,
                    ],
                },
            },
        },
        libraries: {
            PerpFacade: perpFacadeDeployment.address,
        },
    })
    const quoteVaultDeployment = await deployments.get("QuoteVault")

    // must specify library address when instantiating the contract:
    const quoteVaultFactory = await ethers.getContractFactory("contracts/QuoteVault.sol:QuoteVault", {
        libraries: {
            PerpFacade: perpFacadeDeployment.address,
        },
    })
    const quoteVault = quoteVaultFactory.attach(quoteVaultDeployment.address) as unknown as QuoteVault
    console.log(await quoteVault.decimals())
}

export default func

ref:
https://github.com/wighawag/hardhat-deploy#handling-contract-using-libraries

Deploy Ethereum RPC Provider Load Balancer with HAProxy in Kubernetes (AWS EKS)

To achieve high availability and better performance, we can put an HAProxy load balancer in front of multiple Ethereum RPC providers, and automatically adjust traffic weights based on the latency and block timestamp of each RPC endpoint.

ref:
https://www.haproxy.org/

Configurations

In haproxy.cfg, we have a backend named rpc-backend, and two RPC endpoints: quicknode and alchemy as upstream servers.

global
    log stdout format raw local0 info
    stats socket ipv4@*:9999 level admin expose-fd listeners
    stats timeout 5s

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 10s
    timeout client 60s
    timeout server 60s
    timeout http-request 60s

frontend stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 10s

frontend http
    bind *:8000
    option forwardfor
    default_backend rpc-backend

backend rpc-backend
    balance leastconn
    server quicknode 127.0.0.1:8001 weight 100
    server alchemy 127.0.0.1:8002 weight 100

frontend quicknode-frontend
    bind *:8001
    option dontlog-normal
    default_backend quicknode-backend

backend quicknode-backend
    balance roundrobin
    http-request set-header Host xxx.quiknode.pro
    http-request set-path /xxx
    server quicknode xxx.quiknode.pro:443 sni str(xxx.quiknode.pro) check-ssl ssl verify none

frontend alchemy-frontend
    bind *:8002
    option dontlog-normal
    default_backend alchemy-backend

backend alchemy-backend
    balance roundrobin
    http-request set-header Host xxx.alchemy.com
    http-request set-path /xxx
    server alchemy xxx.alchemy.com:443 sni str(xxx.alchemy.com) check-ssl ssl verify none

ref:
https://docs.haproxy.org/2.7/configuration.html
https://www.haproxy.com/documentation/hapee/latest/configuration/

Test it locally:

docker run --rm -v $PWD:/usr/local/etc/haproxy \
-p 8000:8000 \
-p 8404:8404 \
-p 9999:9999 \
-i -t --name haproxy haproxy:2.7.0

docker exec -i -t -u 0 haproxy bash

echo "show stat" | socat stdio TCP:127.0.0.1:9999
echo "set weight rpc-backend/quicknode 0" | socat stdio TCP:127.0.0.1:9999

# if you're using a socket file descriptor
apt update
apt install socat -y
echo "set weight rpc-backend/alchemy 0" | socat stdio /var/lib/haproxy/haproxy.sock

ref:
https://www.redhat.com/sysadmin/getting-started-socat

Healthcheck

Now the important part: we run a simple but flexible healthcheck script, called the node weighter, as a sidecar container, so that it can access the HAProxy admin socket of the HAProxy container through 127.0.0.1:9999.

The node weighter can be written in any language. Here is a TypeScript example:

in HAProxyConnector.ts which sets weights through HAProxy admin socket:

import net from "net"

export interface ServerWeight {
    backendName: string
    serverName: string
    weight: number
}

export class HAProxyConnector {
    constructor(readonly adminHost = "127.0.0.1", readonly adminPort = 9999) {}

    setWeights(serverWeights: ServerWeight[]) {
        const scaledServerWeights = this.scaleWeights(serverWeights)

        const commands = scaledServerWeights.map(server => {
            return `set weight ${server.backendName}/${server.serverName} ${server.weight}\n`
        })

        const client = net.createConnection({ host: this.adminHost, port: this.adminPort }, () => {
            console.log("HAProxyAdminSocketConnected")
        })
        client.on("error", err => {
            console.log("HAProxyAdminSocketError", err)
        })
        client.on("data", data => {
            console.log("HAProxyAdminSocketData")
            console.log(data.toString().trim())
        })

        client.write(commands.join(""))
    }

    // not private: RPCProxyWeighter calls this when scaling is enabled
    scaleWeights(serverWeights: ServerWeight[]) {
        const totalWeight = serverWeights.reduce((acc, server) => acc + server.weight, 0)

        return serverWeights.map(server => {
            server.weight = Math.floor((server.weight / totalWeight) * 256)
            return server
        })
    }
}
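HAProxy weights must fall in the 0–256 range, which is why the relative weights are normalized before being applied. The scaling logic, extracted here as a standalone sketch for illustration:

```typescript
interface ServerWeight {
    backendName: string
    serverName: string
    weight: number
}

// Normalize relative weights so they sum to (at most) 256, HAProxy's maximum weight
function scaleWeights(serverWeights: ServerWeight[]): ServerWeight[] {
    const totalWeight = serverWeights.reduce((acc, server) => acc + server.weight, 0)
    return serverWeights.map(server => ({
        ...server,
        weight: Math.floor((server.weight / totalWeight) * 256),
    }))
}

const scaled = scaleWeights([
    { backendName: "rpc-backend", serverName: "quicknode", weight: 300 },
    { backendName: "rpc-backend", serverName: "alchemy", weight: 100 },
])
console.log(scaled.map(s => s.weight)) // [192, 64]
```

Note that Math.floor means the scaled weights may sum to slightly less than 256, which is harmless since HAProxy only cares about the ratio between servers.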

in RPCProxyWeighter.ts which calculates weights based on custom healthcheck logic:

import { ethers } from "ethers"
import { HAProxyConnector } from "./connectors/HAProxyConnector"
import config from "./config.json"

// minimal stand-ins for the project's own logger and sleep utilities
const Log = { getLogger: (_name: string) => console }
const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms))

export interface HealthInfo {
    blockNumber: number
    blockTimestamp: number
    blockTimestampDelayMsec: number
    isBlockTooOld: boolean
    latency: number
    normalizedLatency: number
    isLatencyTooHigh: boolean
}

export interface Server {
    backendName: string
    serverName: string
    serverUrl: string
}

export interface ServerWithWeight {
    backendName: string
    serverName: string
    weight: number
    [metadata: string]: any
}

export class RPCProxyWeighter {
    protected readonly log = Log.getLogger(RPCProxyWeighter.name)
    protected readonly connector: HAProxyConnector

    protected readonly ADJUST_INTERVAL_SEC = 60 // 60 seconds
    protected readonly MAX_BLOCK_TIMESTAMP_DELAY_MSEC = 150 * 1000 // 150 seconds
    protected readonly MAX_LATENCY_MSEC = 3 * 1000 // 3 seconds
    protected shouldScale = false
    protected totalWeight = 0

    constructor() {
        this.connector = new HAProxyConnector(config.admin.host, config.admin.port)
    }

    async start() {
        while (true) {
            let serverWithWeights = await this.calculateWeights(config.servers)
            if (this.shouldScale) {
                serverWithWeights = this.connector.scaleWeights(serverWithWeights)
            }
            this.connector.setWeights(serverWithWeights)

            await sleep(1000 * this.ADJUST_INTERVAL_SEC)
        }
    }

    async calculateWeights(servers: Server[]) {
        this.totalWeight = 0

        const serverWithWeights = await Promise.all(
            servers.map(async server => {
                try {
                    return await this.calculateWeight(server)
                } catch (err: any) {
                    return {
                        backendName: server.backendName,
                        serverName: server.serverName,
                        weight: 0,
                    }
                }
            }),
        )

        // if all endpoints are unhealthy, overwrite weights to 100
        if (this.totalWeight === 0) {
            for (const server of serverWithWeights) {
                server.weight = 100
            }
        }

        return serverWithWeights
    }

    async calculateWeight(server: Server) {
        const healthInfo = await this.getHealthInfo(server.serverUrl)

        const serverWithWeight: ServerWithWeight = {
            ...{
                backendName: server.backendName,
                serverName: server.serverName,
                weight: 0,
            },
            ...healthInfo,
        }

        if (healthInfo.isBlockTooOld || healthInfo.isLatencyTooHigh) {
            return serverWithWeight
        }

        // normalizedLatency: the lower the better
        // blockTimestampDelayMsec: the lower the better
        // both units are milliseconds at the same scale
        // serverWithWeight.weight = 1 / healthInfo.normalizedLatency + 1 / healthInfo.blockTimestampDelayMsec

        // NOTE: if we're using `balance source` in HAProxy, the weight can only be 100% or 0%,
        // therefore, as long as the RPC endpoint is healthy, we always set the same weight
        serverWithWeight.weight = 100

        this.totalWeight += serverWithWeight.weight

        return serverWithWeight
    }

    protected async getHealthInfo(serverUrl: string): Promise<HealthInfo> {
        const provider = new ethers.providers.StaticJsonRpcProvider(serverUrl)

        // TODO: add timeout
        const start = Date.now()
        const blockNumber = await provider.getBlockNumber()
        const end = Date.now()

        const block = await provider.getBlock(blockNumber)

        const blockTimestamp = block.timestamp
        const blockTimestampDelayMsec = Math.floor(Date.now() / 1000 - blockTimestamp) * 1000
        const isBlockTooOld = blockTimestampDelayMsec >= this.MAX_BLOCK_TIMESTAMP_DELAY_MSEC

        const latency = end - start
        const normalizedLatency = this.normalizeLatency(latency)
        const isLatencyTooHigh = latency >= this.MAX_LATENCY_MSEC

        return {
            blockNumber,
            blockTimestamp,
            blockTimestampDelayMsec,
            isBlockTooOld,
            latency,
            normalizedLatency,
            isLatencyTooHigh,
        }
    }

    protected normalizeLatency(latency: number) {
        if (latency <= 40) {
            return 1
        }

        const digits = Math.floor(latency).toString().length
        const base = Math.pow(10, digits - 1)
        return Math.floor(latency / base) * base
    }
}
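The latency normalization above buckets each latency by its most significant digit, so small jitters don't constantly shuffle the weights. Extracted as a standalone function with sample values:

```typescript
// Bucket a latency (in ms) by its most significant digit:
// anything <= 40 ms counts as 1, otherwise round down to the leading digit,
// e.g. 85 -> 80, 152 -> 100, 2345 -> 2000
function normalizeLatency(latency: number): number {
    if (latency <= 40) {
        return 1
    }
    const digits = Math.floor(latency).toString().length
    const base = Math.pow(10, digits - 1)
    return Math.floor(latency / base) * base
}

console.log(normalizeLatency(37)) // 1
console.log(normalizeLatency(85)) // 80
console.log(normalizeLatency(152)) // 100
console.log(normalizeLatency(2345)) // 2000
```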

in config.json:

Technically, we don't need this config file; we could read the actual URLs from the HAProxy admin socket directly. However, a JSON file containing the URLs is much simpler.

{
    "admin": {
        "host": "127.0.0.1",
        "port": 9999
    },
    "servers": [
        {
            "backendName": "rpc-backend",
            "serverName": "quicknode",
            "serverUrl": "https://xxx.quiknode.pro/xxx"
        },
        {
            "backendName": "rpc-backend",
            "serverName": "alchemy",
            "serverUrl": "https://xxx.alchemy.com/xxx"
        }
    ]
}

ref:
https://www.haproxy.com/documentation/hapee/latest/api/runtime-api/set-weight/
https://sleeplessbeastie.eu/2020/01/29/how-to-use-haproxy-stats-socket/

Deployments

apiVersion: v1
kind: ConfigMap
metadata:
  name: rpc-proxy-config-file
data:
  haproxy.cfg: |
    ...
  config.json: |
    ...
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rpc-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rpc-proxy
  template:
    metadata:
      labels:
        app: rpc-proxy
    spec:
      volumes:
        - name: rpc-proxy-config-file
          configMap:
            name: rpc-proxy-config-file
      containers:
        - name: haproxy
          image: haproxy:2.7.0
          ports:
            - containerPort: 8000
              protocol: TCP
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 256Mi
          volumeMounts:
            - name: rpc-proxy-config-file
              subPath: haproxy.cfg
              mountPath: /usr/local/etc/haproxy/haproxy.cfg
              readOnly: true
        - name: node-weighter
          image: your-node-weighter
          command: ["node", "./index.js"]
          resources:
            requests:
              cpu: 200m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 256Mi
          volumeMounts:
            - name: rpc-proxy-config-file
              subPath: config.json
              mountPath: /path/to/build/config.json
              readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: rpc-proxy
spec:
  clusterIP: None
  selector:
    app: rpc-proxy
  ports:
    - name: http
      port: 8000
      targetPort: 8000

The RPC load balancer can then be accessed through http://rpc-proxy.default.svc.cluster.local:8000 inside the Kubernetes cluster.

ref:
https://www.containiq.com/post/kubernetes-sidecar-container
https://hub.docker.com/_/haproxy

Deploy graph-node (The Graph) in Kubernetes (AWS EKS)

graph-node is open source software that indexes blockchain data, also known as an indexer. Though the cost of running a self-hosted graph node can be pretty high, we're going to deploy one on Amazon Elastic Kubernetes Service (EKS).

In this article, we cover two approaches to deploying a self-hosted graph node:

  1. A single graph node with a single PostgreSQL database
  2. A graph node cluster with a primary-secondary PostgreSQL cluster
    • A graph node cluster consists of one index node and multiple query nodes
    • For the database, we will simply use a Multi-AZ DB cluster on AWS RDS

ref:
https://github.com/graphprotocol/graph-node

Create a PostgreSQL Database on AWS RDS

Hardware requirements for running a graph-node:
https://thegraph.com/docs/en/network/indexing/#what-are-the-hardware-requirements
https://docs.thegraph.academy/official-docs/indexer/testnet/graph-protocol-testnet-baremetal/1_architectureconsiderations

A Single DB Instance

We use the following settings on staging.

  • Version: PostgreSQL 13.7-R1
  • Template: Dev/Test
  • Deployment option: Single DB instance
  • DB instance identifier: graph-node
  • Master username: graph_node
  • Auto generate a password: Yes
  • DB instance class: db.t3.medium (2 vCPU 4G RAM)
  • Storage type: gp2
  • Allocated storage: 500 GB
  • Enable storage autoscaling: No
  • Compute resource: Don’t connect to an EC2 compute resource
  • Network type: IPv4
  • VPC: eksctl-perp-staging-cluster/VPC
  • DB Subnet group: default-vpc
  • Public access: Yes
  • VPC security group: graph-node
  • Availability Zone: ap-northeast-1d
  • Initial database name: graph_node

A Multi-AZ DB Cluster

We use the following settings on production.

  • Version: PostgreSQL 13.7-R1
  • Template: Production
  • Deployment option: Multi-AZ DB Cluster
  • DB instance identifier: graph-node-cluster
  • Master username: graph_node
  • Auto generate a password: Yes
  • DB instance class: db.m6gd.2xlarge (8 vCPU 32G RAM)
  • Storage type: io1
  • Allocated storage: 500 GB
  • Provisioned IOPS: 1000
  • Enable storage autoscaling: No
  • Compute resource: Don’t connect to an EC2 compute resource
  • VPC: eksctl-perp-production-cluster/VPC
  • DB Subnet group: default-vpc
  • Public access: Yes
  • VPC security group: graph-node

Unfortunately, AWS currently does not offer a Reserved Instances (RIs) plan for Multi-AZ DB clusters. Use "Multi-AZ DB instance" or "Single DB instance" instead if cost is a big concern for you.

RDS Remote Access

You can test remote access to your database. Also, make sure the security group's inbound rules allow port 5432 for PostgreSQL.

brew install postgresql

psql --host=YOUR_RDB_ENDPOINT --port=5432 --username=graph_node --password --dbname=postgres
# or
createdb -h YOUR_RDB_ENDPOINT -p 5432 -U graph_node graph_node

Create a Dedicated EKS Node Group

This step is optional.

eksctl --profile=perp create nodegroup \
--cluster perp-production \
--region ap-southeast-1 \
--name "managed-graph-node-m5-xlarge" \
--node-type "m5.xlarge" \
--nodes 3 \
--nodes-min 3 \
--nodes-max 3 \
--managed \
--asg-access \
--alb-ingress-access

Deploy graph-node in Kubernetes

Deployments for a Single DB Instance

apiVersion: v1
kind: Service
metadata:
  name: graph-node
spec:
  clusterIP: None
  selector:
    app: graph-node
  ports:
    - name: jsonrpc
      port: 8000
      targetPort: 8000
    - name: websocket
      port: 8001
      targetPort: 8001
    - name: admin
      port: 8020
      targetPort: 8020
    - name: index-node
      port: 8030
      targetPort: 8030
    - name: metrics
      port: 8040
      targetPort: 8040
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: graph-node-config
data:
  # https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md
  GRAPH_LOG: info
  EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: synced
---
apiVersion: v1
kind: Secret
metadata:
  name: graph-node-secret
type: Opaque
data:
  postgres_host: xxx
  postgres_user: xxx
  postgres_db: xxx
  postgres_pass: xxx
  ipfs: https://ipfs.network.thegraph.com
  ethereum: optimism:https://YOUR_RPC_ENDPOINT_1 optimism:https://YOUR_RPC_ENDPOINT_2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: graph-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: graph-node
  serviceName: graph-node
  template:
    metadata:
      labels:
        app: graph-node
      annotations:
        "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
    spec:
      containers:
        # https://github.com/graphprotocol/graph-node/releases
        # https://hub.docker.com/r/graphprotocol/graph-node
        - name: graph-node
          image: graphprotocol/graph-node:v0.29.0
          envFrom:
            - secretRef:
                name: graph-node-secret
            - configMapRef:
                name: graph-node-config
          ports:
            - containerPort: 8000
              protocol: TCP
            - containerPort: 8001
              protocol: TCP
            - containerPort: 8020
              protocol: TCP
            - containerPort: 8030
              protocol: TCP
            - containerPort: 8040
              protocol: TCP
          resources:
            requests:
              cpu: 2000m
              memory: 4G
            limits:
              cpu: 4000m
              memory: 4G

Kubernetes Secrets are, by default, stored unencrypted in the API server's underlying data store (etcd), so anyone with API access can retrieve or modify a Secret; they're not secret at all. Instead of storing sensitive data in Secrets, you might want to use the Secrets Store CSI Driver.

ref:
https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md
https://hub.docker.com/r/graphprotocol/graph-node

Deployments for a Multi-AZ DB Cluster

There are two types of nodes in a graph node cluster:

  • Index Node: Only indexing data from the blockchain, not serving queries at all
  • Query Node: Only serving GraphQL queries, not indexing data at all

Indexing subgraphs doesn't require much CPU or memory, but serving queries does, especially when you enable GraphQL caching.

Index Node

Technically, we could further split an index node into an Ingestor and an Indexer: the former periodically fetches blockchain data from RPC providers, and the latter indexes entities based on mappings. That's another story, though.

apiVersion: v1
kind: ConfigMap
metadata:
  name: graph-node-cluster-index-config-file
data:
  config.toml: |
    [store]
    [store.primary]
    connection = "postgresql://USER:PASSWORD@RDS_WRITER_ENDPOINT/DB"
    weight = 1
    pool_size = 2

    [chains]
    ingestor = "graph-node-cluster-index-0"
    [chains.optimism]
    shard = "primary"
    provider = [
      { label = "optimism-rpc-proxy", url = "http://rpc-proxy-for-graph-node.default.svc.cluster.local:8000", features = ["archive"] }
      # { label = "optimism-quicknode", url = "https://YOUR_RPC_ENDPOINT_1", features = ["archive"] },
      # { label = "optimism-alchemy", url = "https://YOUR_RPC_ENDPOINT_2", features = ["archive"] },
      # { label = "optimism-infura", url = "https://YOUR_RPC_ENDPOINT_3", features = ["archive"] }
    ]

    [deployment]
    [[deployment.rule]]
    indexers = [ "graph-node-cluster-index-0" ]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: graph-node-cluster-index-config-env
data:
  # https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md
  GRAPH_LOG: "info"
  GRAPH_KILL_IF_UNRESPONSIVE: "true"
  ETHEREUM_POLLING_INTERVAL: "100"
  ETHEREUM_BLOCK_BATCH_SIZE: "10"
  GRAPH_STORE_WRITE_QUEUE: "50"
  EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: "synced"
---
apiVersion: v1
kind: Service
metadata:
  name: graph-node-cluster-index
spec:
  clusterIP: None
  selector:
    app: graph-node-cluster-index
  ports:
    - name: jsonrpc
      port: 8000
      targetPort: 8000
    - name: websocket
      port: 8001
      targetPort: 8001
    - name: admin
      port: 8020
      targetPort: 8020
    - name: index-node
      port: 8030
      targetPort: 8030
    - name: metrics
      port: 8040
      targetPort: 8040
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: graph-node-cluster-index
spec:
  replicas: 1
  selector:
    matchLabels:
      app: graph-node-cluster-index
  serviceName: graph-node-cluster-index
  template:
    metadata:
      labels:
        app: graph-node-cluster-index
      annotations:
        "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
    spec:
      terminationGracePeriodSeconds: 10
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: alpha.eksctl.io/nodegroup-name
                    operator: In
                    values:
                      - managed-graph-node-m5-xlarge
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      # should schedule index node to the zone that writer instance is located
                      - ap-southeast-1a
      volumes:
        - name: graph-node-cluster-index-config-file
          configMap:
            name: graph-node-cluster-index-config-file
      containers:
        # https://github.com/graphprotocol/graph-node/releases
        # https://hub.docker.com/r/graphprotocol/graph-node
        - name: graph-node-cluster-index
          image: graphprotocol/graph-node:v0.29.0
          command:
            [
              "/bin/sh",
              "-c",
              'graph-node --node-id $HOSTNAME --config "/config.toml" --ipfs "https://ipfs.network.thegraph.com"',
            ]
          envFrom:
            - configMapRef:
                name: graph-node-cluster-index-config-env
          ports:
            - containerPort: 8000
              protocol: TCP
            - containerPort: 8001
              protocol: TCP
            - containerPort: 8020
              protocol: TCP
            - containerPort: 8030
              protocol: TCP
            - containerPort: 8040
              protocol: TCP
          resources:
            requests:
              cpu: 1000m
              memory: 1G
            limits:
              cpu: 2000m
              memory: 1G
          volumeMounts:
            - name: graph-node-cluster-index-config-file
              subPath: config.toml
              mountPath: /config.toml
              readOnly: true

ref:
https://github.com/graphprotocol/graph-node/blob/master/docs/config.md
https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md

The key factors in the efficiency of syncing/indexing subgraphs are:

  1. The latency of the RPC provider
  2. The write throughput of the database

I didn't find any graph-node configs or environment variables that can speed up the syncing process observably. If you know of any, please tell me.

ref:
https://github.com/graphprotocol/graph-node/issues/3756

If you're interested in building an RPC proxy with health checks on block number and latency, see Deploy Ethereum RPC Provider Load Balancer with HAProxy in Kubernetes (AWS EKS). graph-node itself cannot detect whether an RPC provider's block number lags behind.
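The core of such a health check is just comparing each provider's latest block number. A minimal sketch in shell: parse the block number out of a JSON-RPC eth_blockNumber response and convert it to decimal so two providers can be compared. The response here is hard-coded for illustration; in practice you would fetch it per provider with curl.

```shell
# In practice, fetch the response with something like:
#   curl -s -X POST "$RPC_URL" -H 'Content-Type: application/json' \
#     -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
response='{"jsonrpc":"2.0","id":1,"result":"0x10d4f"}'

# extract the hex block number from the "result" field
hex=$(printf '%s' "$response" | sed -n 's/.*"result":"0x\([0-9a-fA-F]*\)".*/\1/p')

# convert hex to decimal (bash arithmetic with an explicit base)
block=$((16#$hex))
echo "latest block: $block"
```

Repeating this against each provider and routing traffic away from any endpoint whose block number trails the maximum by more than a few blocks is essentially what the HAProxy setup above automates.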

Query Nodes

The most important config is DISABLE_BLOCK_INGESTOR: "true" which basically configures the node as a query node.

apiVersion: v1
kind: ConfigMap
metadata:
  name: graph-node-cluster-query-config-env
data:
  # https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md
  DISABLE_BLOCK_INGESTOR: "true" # this node won't ingest blockchain data
  GRAPH_LOG_QUERY_TIMING: "gql"
  GRAPH_GRAPHQL_QUERY_TIMEOUT: "600"
  GRAPH_QUERY_CACHE_BLOCKS: "60"
  GRAPH_QUERY_CACHE_MAX_MEM: "2000" # actual memory usage can be up to 3x this value
  GRAPH_QUERY_CACHE_STALE_PERIOD: "100"
  EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: "synced"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: graph-node-cluster-query-config-file
data:
  config.toml: |
    [store]
    [store.primary]
    connection = "postgresql://USER:PASSWORD@RDS_WRITER_ENDPOINT/DB" # connect to RDS writer instance
    weight = 0
    pool_size = 2
    [store.primary.replicas.repl1]
    connection = "postgresql://USER:PASSWORD@RDS_READER_ENDPOINT/DB" # connect to RDS reader instances (multiple)
    weight = 1
    pool_size = 100

    [chains]
    ingestor = "graph-node-cluster-index-0"

    [deployment]
    [[deployment.rule]]
    indexers = [ "graph-node-cluster-index-0" ]

    [general]
    query = "graph-node-cluster-query*"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graph-node-cluster-query
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  minReadySeconds: 5
  selector:
    matchLabels:
      app: graph-node-cluster-query
  template:
    metadata:
      labels:
        app: graph-node-cluster-query
    spec:
      terminationGracePeriodSeconds: 40
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: alpha.eksctl.io/nodegroup-name
                    operator: In
                    values:
                      - managed-graph-node-m5-xlarge
      volumes:
        - name: graph-node-cluster-query-config-file
          configMap:
            name: graph-node-cluster-query-config-file
      containers:
        # https://github.com/graphprotocol/graph-node/releases
        # https://hub.docker.com/r/graphprotocol/graph-node
        - name: graph-node-cluster-query
          image: graphprotocol/graph-node:v0.29.0
          command:
            [
              "/bin/sh",
              "-c",
              'graph-node --node-id $HOSTNAME --config "/config.toml" --ipfs "https://ipfs.network.thegraph.com"',
            ]
          envFrom:
            - configMapRef:
                name: graph-node-cluster-query-config-env
          ports:
            - containerPort: 8000
              protocol: TCP
            - containerPort: 8001
              protocol: TCP
            - containerPort: 8030
              protocol: TCP
          resources:
            requests:
              cpu: 1000m
              memory: 8G
            limits:
              cpu: 2000m
              memory: 8G
          volumeMounts:
            - name: graph-node-cluster-query-config-file
              subPath: config.toml
              mountPath: /config.toml
              readOnly: true

ref:
https://github.com/graphprotocol/graph-node/blob/master/docs/config.md
https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md

It's also strongly recommended to mark subgraph entities as immutable with @entity(immutable: true). Immutable entities are much faster to write and to query, so they should be used whenever possible. In our case, query time dropped by 80%.

ref:
https://thegraph.com/docs/en/developing/creating-a-subgraph/#defining-entities
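For example, an event-like entity that is written once and never updated afterwards is a good candidate for immutability. A sketch in schema.graphql (the entity name and fields are illustrative):

```graphql
type SwapEvent @entity(immutable: true) {
  id: Bytes!
  sender: Bytes!
  amountIn: BigInt!
  amountOut: BigInt!
  blockNumber: BigInt!
  timestamp: BigInt!
}
```

Entities that must be updated in place, such as running aggregates, cannot be immutable; keep those as regular @entity types.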

Setup an Ingress for graph-node

WebSocket connections are inherently sticky. If the client requests a connection upgrade to WebSockets, the target that returns an HTTP 101 status code to accept the connection upgrade is the target used for the WebSockets connection. After the WebSockets upgrade is complete, cookie-based stickiness is not used, so you don't need to enable stickiness on the ALB.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graph-node-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:XXX
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=false
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=3600,deletion_protection.enabled=true
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.subgraph-api: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node-cluster-query", "servicePort": 8000, "weight": 100}
      ]}}
    alb.ingress.kubernetes.io/actions.subgraph-ws: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node-cluster-query", "servicePort": 8001, "weight": 100}
      ]}}
    alb.ingress.kubernetes.io/actions.subgraph-hc: >
      {"type": "forward", "forwardConfig": {"targetGroups": [
        {"serviceName": "graph-node-cluster-query", "servicePort": 8030, "weight": 100}
      ]}}
spec:
  rules:
    - host: "subgraph-api.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: subgraph-api
                port:
                  name: use-annotation
    - host: "subgraph-ws.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: subgraph-ws
                port:
                  name: use-annotation
    - host: "subgraph-hc.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: subgraph-hc
                port:
                  name: use-annotation
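The Ingress above forwards to a graph-node-cluster-query Service which isn't shown in this post. A minimal sketch of such a Service, matching the Deployment's labels and container ports (port names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: graph-node-cluster-query
spec:
  selector:
    app: graph-node-cluster-query
  ports:
    - name: http-queries # GraphQL over HTTP
      port: 8000
      targetPort: 8000
      protocol: TCP
    - name: ws-subscriptions # GraphQL over WebSocket
      port: 8001
      targetPort: 8001
      protocol: TCP
    - name: index-status # indexing status API
      port: 8030
      targetPort: 8030
      protocol: TCP
```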

Deploy a Subgraph

kubectl port-forward service/graph-node-cluster-index 8020:8020
npx graph create your_org/your_subgraph --node http://127.0.0.1:8020
npx graph deploy your_org/your_subgraph --node http://127.0.0.1:8020

ref:
https://github.com/graphprotocol/graph-cli
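After deploying, you can check sync progress through the indexing status API on port 8030 (the same endpoint the subgraph-hc host above exposes). A sample status query, assuming the subgraph name used in the deploy commands:

```graphql
{
  indexingStatusForCurrentVersion(subgraphName: "your_org/your_subgraph") {
    synced
    health
    chains {
      chainHeadBlock { number }
      latestBlock { number }
    }
  }
}
```

Comparing latestBlock against chainHeadBlock tells you how far behind the indexer is; health reports failed if indexing has errored.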

Here are some useful commands for maintenance tasks:

kubectl logs -l app=graph-node-cluster-query -f | grep "query_time_ms"

# show total pool connections according to the config file
graphman --config config.toml config pools graph-node-cluster-query-zone-a-78b467665d-tqw7g

# show info about a deployment
graphman --config config.toml info QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P

# reassign a deployment to an index node
graphman --config config.toml reassign QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P graph-node-cluster-index-pending-0
graphman --config config.toml reassign QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P graph-node-cluster-index-0

# stop indexing a deployment
graphman --config config.toml unassign QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P

# remove unused deployments
graphman --config config.toml unused record
graphman --config config.toml unused remove -d QmdoXB4zJVD5n38omVezMaAGEwsubhKdivgUmpBCUxXKDh

ref:
https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md