{"id":848,"date":"2023-01-15T21:34:54","date_gmt":"2023-01-15T13:34:54","guid":{"rendered":"https:\/\/vinta.ws\/code\/?p=848"},"modified":"2026-03-17T00:42:04","modified_gmt":"2026-03-16T16:42:04","slug":"deploy-ethereum-rpc-provider-load-balancer-with-haproxy-in-kubernetes-aws-eks","status":"publish","type":"post","link":"https:\/\/vinta.ws\/code\/deploy-ethereum-rpc-provider-load-balancer-with-haproxy-in-kubernetes-aws-eks.html","title":{"rendered":"Deploy Ethereum RPC Provider Load Balancer with HAProxy in Kubernetes (AWS EKS)"},"content":{"rendered":"<p>To achieve high availability and better performance, we could build a HAProxy load balancer in front of multiple Ethereum RPC providers, and also automatically adjust traffic weights based on the latency and block timestamp of each RPC endpoint.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/www.haproxy.org\/\">https:\/\/www.haproxy.org\/<\/a><\/p>\n<h2>Configurations<\/h2>\n<p>In <code>haproxy.cfg<\/code>, we have a backend named <code>rpc-backend<\/code>, and two RPC endpoints: <code>quicknode<\/code> and <code>alchemy<\/code> as upstream servers.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-cfg\">global\n    log stdout format raw local0 info\n    stats socket ipv4@*:9999 level admin expose-fd listeners\n    stats timeout 5s\n\ndefaults\n    log global\n    mode http\n    option httplog\n    option dontlognull\n    timeout connect 10s\n    timeout client 60s\n    timeout server 60s\n    timeout http-request 60s\n\nfrontend stats\n    bind *:8404\n    stats enable\n    stats uri \/\n    stats refresh 10s\n\nfrontend http\n    bind *:8000\n    option forwardfor\n    default_backend rpc-backend\n\nbackend rpc-backend\n    balance leastconn\n    server quicknode 127.0.0.1:8001 weight 100\n    server alchemy 127.0.0.1:8002 weight 100\n\nfrontend quicknode-frontend\n    bind *:8001\n    option dontlog-normal\n    default_backend quicknode-backend\n\nbackend quicknode-backend\n    balance roundrobin\n    
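# rewrite the Host header and request path to match this provider's endpoint,\n    # then re-encrypt upstream with SNI; the traffic weights live on rpc-backend's servers\n    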
http-request set-header Host xxx.quiknode.pro\n    http-request set-path \/xxx\n    server quicknode xxx.quiknode.pro:443 sni str(xxx.quiknode.pro) check-ssl ssl verify none\n\nfrontend alchemy-frontend\n    bind *:8002\n    option dontlog-normal\n    default_backend alchemy-backend\n\nbackend alchemy-backend\n    balance roundrobin\n    http-request set-header Host xxx.alchemy.com\n    http-request set-path \/xxx\n    server alchemy xxx.alchemy.com:443 sni str(xxx.alchemy.com) check-ssl ssl verify none<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/docs.haproxy.org\/2.7\/configuration.html\">https:\/\/docs.haproxy.org\/2.7\/configuration.html<\/a><br \/>\n<a href=\"https:\/\/www.haproxy.com\/documentation\/hapee\/latest\/configuration\/\">https:\/\/www.haproxy.com\/documentation\/hapee\/latest\/configuration\/<\/a><\/p>\n<p>Test it locally:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">docker run --rm -v $PWD:\/usr\/local\/etc\/haproxy \\\n-p 8000:8000 \\\n-p 8404:8404 \\\n-p 9999:9999 \\\n-i -t --name haproxy haproxy:2.7.0\n\ndocker exec -i -t -u 0 haproxy bash\n\necho \"show stat\" | socat stdio TCP:127.0.0.1:9999\necho \"set weight rpc-backend\/quicknode 0\" | socat stdio TCP:127.0.0.1:9999\n\n# if you're using a unix socket file instead, install socat inside the container\napt update\napt install socat -y\necho \"set weight rpc-backend\/alchemy 0\" | socat stdio \/var\/lib\/haproxy\/haproxy.sock<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/www.redhat.com\/sysadmin\/getting-started-socat\">https:\/\/www.redhat.com\/sysadmin\/getting-started-socat<\/a><\/p>\n<h2>Healthcheck<\/h2>\n<p>Now for the important part: we're going to run a simple but flexible healthcheck script, called the node weighter, as a sidecar container, so the healthcheck script can reach the HAProxy container's admin socket through <code>127.0.0.1:9999<\/code>.<\/p>\n<p>The node weighter can be written in any language. 
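<\/p>\n<p>For example, the node weighter could read the current weights back by sending <code>show stat<\/code> to the admin socket and parsing the returned CSV. A minimal sketch, using a hypothetical <code>parseShowStat<\/code> helper that is not part of the original script:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-typescript\">\/\/ parse the CSV emitted by HAProxy's \"show stat\" runtime command into\n\/\/ { backendName, serverName, weight } rows\nexport interface StatRow {\n    backendName: string\n    serverName: string\n    weight: number\n}\n\nexport function parseShowStat(csv: string): StatRow[] {\n    const lines = csv.trim().split(\"\\n\")\n    \/\/ the header line looks like \"# pxname,svname,...,weight,...\"\n    const header = lines[0].replace(\"# \", \"\").split(\",\")\n    const weightIndex = header.indexOf(\"weight\")\n    return lines\n        .slice(1)\n        .map(line =&gt; line.split(\",\"))\n        .filter(cols =&gt; cols[1] !== \"FRONTEND\" &amp;&amp; cols[1] !== \"BACKEND\")\n        .map(cols =&gt; ({\n            backendName: cols[0],\n            serverName: cols[1],\n            weight: Number(cols[weightIndex]),\n        }))\n}<\/code><\/pre>\n<p>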
Here is a TypeScript example:<\/p>\n<p>in <code>HAProxyConnector.ts<\/code> which sets weights through the HAProxy admin socket:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-typescript\">import net from \"net\"\nimport { sum } from \"lodash\"\n\nexport interface ServerWeight {\n    backendName: string\n    serverName: string\n    weight: number\n}\n\nexport class HAProxyConnector {\n    constructor(readonly adminHost = \"127.0.0.1\", readonly adminPort = 9999) {}\n\n    setWeights(serverWeights: ServerWeight[]) {\n        const scaledServerWeights = this.scaleWeights(serverWeights)\n\n        const commands = scaledServerWeights.map(server =&gt; {\n            return `set weight ${server.backendName}\/${server.serverName} ${server.weight}\\n`\n        })\n\n        const client = net.createConnection({ host: this.adminHost, port: this.adminPort }, () =&gt; {\n            console.log(\"HAProxyAdminSocketConnected\")\n        })\n        client.on(\"error\", err =&gt; {\n            console.log(\"HAProxyAdminSocketError\", err)\n        })\n        client.on(\"data\", data =&gt; {\n            console.log(\"HAProxyAdminSocketData\")\n            console.log(data.toString().trim())\n        })\n\n        client.write(commands.join(\"\"))\n    }\n\n    \/\/ public so that callers can pre-scale weights themselves if needed\n    scaleWeights(serverWeights: ServerWeight[]) {\n        \/\/ HAProxy server weights must be integers between 0 and 256\n        const totalWeight = sum(serverWeights.map(server =&gt; server.weight))\n\n        return serverWeights.map(server =&gt; {\n            server.weight = Math.floor((server.weight \/ totalWeight) * 256)\n            return server\n        })\n    }\n}<\/code><\/pre>\n<p>in <code>RPCProxyWeighter.ts<\/code> which calculates weights based on a custom healthcheck logic:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-typescript\">import { ethers } from \"ethers\"\nimport { HAProxyConnector } from \".\/connectors\/HAProxyConnector\"\nimport config from \".\/config.json\"\n\nconst sleep = (ms: number) =&gt; new Promise(resolve =&gt; setTimeout(resolve, ms))\n\nexport interface HealthInfo {\n    blockNumber: number\n    blockTimestamp: number\n    blockTimestampDelayMsec: number\n    isBlockTooOld: boolean\n    latency: number\n    normalizedLatency: number\n    isLatencyTooHigh: boolean\n}\n\nexport interface Server {\n    backendName: string\n    serverName: string\n    serverUrl: string\n}\n\nexport interface ServerWithWeight 
{\n    backendName: string\n    serverName: string\n    weight: number\n    [metadata: string]: any\n}\n\nexport class RPCProxyWeighter {\n    protected readonly connector: HAProxyConnector\n\n    protected readonly ADJUST_INTERVAL_SEC = 60 \/\/ 60 seconds\n    protected readonly MAX_BLOCK_TIMESTAMP_DELAY_MSEC = 150 * 1000 \/\/ 150 seconds\n    protected readonly MAX_LATENCY_MSEC = 3 * 1000 \/\/ 3 seconds\n    protected shouldScale = false\n    protected totalWeight = 0\n\n    constructor() {\n        this.connector = new HAProxyConnector(config.admin.host, config.admin.port)\n    }\n\n    async start() {\n        while (true) {\n            let serverWithWeights = await this.calculateWeights(config.servers)\n            if (this.shouldScale) {\n                serverWithWeights = this.connector.scaleWeights(serverWithWeights)\n            }\n            this.connector.setWeights(serverWithWeights)\n\n            await sleep(1000 * this.ADJUST_INTERVAL_SEC)\n        }\n    }\n\n    async calculateWeights(servers: Server[]) {\n        this.totalWeight = 0\n\n        const serverWithWeights = await Promise.all(\n            servers.map(async server =&gt; {\n                try {\n                    return await this.calculateWeight(server)\n                } catch (err: any) {\n                    \/\/ any RPC error means the endpoint is unhealthy: weight 0\n                    return {\n                        backendName: server.backendName,\n                        serverName: server.serverName,\n                        weight: 0,\n                    }\n                }\n            }),\n        )\n\n        \/\/ if all endpoints are unhealthy, overwrite weights to 100 so traffic can still flow\n        if (this.totalWeight === 0) {\n            for (const server of serverWithWeights) {\n                server.weight = 100\n            }\n        }\n\n        return serverWithWeights\n    }\n\n    async calculateWeight(server: Server) {\n        const healthInfo = await 
this.getHealthInfo(server.serverUrl)\n\n        const serverWithWeight: ServerWithWeight = {\n            ...{\n                backendName: server.backendName,\n                serverName: server.serverName,\n                weight: 0,\n            },\n            ...healthInfo,\n        }\n\n        if (healthInfo.isBlockTooOld || healthInfo.isLatencyTooHigh) {\n            return serverWithWeight\n        }\n\n        \/\/ normalizedLatency: the lower the better\n        \/\/ blockTimestampDelayMsec: the lower the better\n        \/\/ both units are milliseconds at the same scale\n        \/\/ serverWithWeight.weight = 1 \/ healthInfo.normalizedLatency + 1 \/ healthInfo.blockTimestampDelayMsec\n\n        \/\/ NOTE: if we're using `balance source` in HAProxy, the weight can only be 100% or 0%,\n        \/\/ therefore, as long as the RPC endpoint is healthy, we always set the same weight\n        serverWithWeight.weight = 100\n\n        this.totalWeight += serverWithWeight.weight\n\n        return serverWithWeight\n    }\n\n    protected async getHealthInfo(serverUrl: string): Promise&lt;HealthInfo&gt; {\n        const provider = new ethers.providers.StaticJsonRpcProvider(serverUrl)\n\n        \/\/ TODO: add timeout\n        const start = Date.now()\n        const blockNumber = await provider.getBlockNumber()\n        const end = Date.now()\n\n        const block = await provider.getBlock(blockNumber)\n\n        const blockTimestamp = block.timestamp\n        const blockTimestampDelayMsec = Math.floor(Date.now() \/ 1000 - blockTimestamp) * 1000\n        const isBlockTooOld = blockTimestampDelayMsec &gt;= this.MAX_BLOCK_TIMESTAMP_DELAY_MSEC\n\n        const latency = end - start\n        const normalizedLatency = this.normalizeLatency(latency)\n        const isLatencyTooHigh = latency &gt;= this.MAX_LATENCY_MSEC\n\n        return {\n            blockNumber,\n            blockTimestamp,\n            blockTimestampDelayMsec,\n            isBlockTooOld,\n  
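          \/\/ raw latency drives isLatencyTooHigh; normalizedLatency only feeds the\n            \/\/ (commented-out) latency-based weight formula in calculateWeight()\n  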
          latency,\n            normalizedLatency,\n            isLatencyTooHigh,\n        }\n    }\n\n    \/\/ round latency down to one significant digit (e.g. 87 -&gt; 80, 1234 -&gt; 1000)\n    \/\/ so that small jitters map to the same bucket\n    protected normalizeLatency(latency: number) {\n        if (latency &lt;= 40) {\n            return 1\n        }\n\n        const digits = Math.floor(latency).toString().length\n        const base = Math.pow(10, digits - 1)\n        return Math.floor(latency \/ base) * base\n    }\n}<\/code><\/pre>\n<p>in <code>config.json<\/code>:<\/p>\n<p>Technically, we don't need this config file: we could read the actual URLs from the HAProxy admin socket directly. However, creating a JSON file that contains the URLs is much simpler.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-json\">{\n    \"admin\": {\n        \"host\": \"127.0.0.1\",\n        \"port\": 9999\n    },\n    \"servers\": [\n        {\n            \"backendName\": \"rpc-backend\",\n            \"serverName\": \"quicknode\",\n            \"serverUrl\": \"https:\/\/xxx.quiknode.pro\/xxx\"\n        },\n        {\n            \"backendName\": \"rpc-backend\",\n            \"serverName\": \"alchemy\",\n            \"serverUrl\": \"https:\/\/xxx.alchemy.com\/xxx\"\n        }\n    ]\n}<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/www.haproxy.com\/documentation\/hapee\/latest\/api\/runtime-api\/set-weight\/\">https:\/\/www.haproxy.com\/documentation\/hapee\/latest\/api\/runtime-api\/set-weight\/<\/a><br \/>\n<a href=\"https:\/\/sleeplessbeastie.eu\/2020\/01\/29\/how-to-use-haproxy-stats-socket\/\">https:\/\/sleeplessbeastie.eu\/2020\/01\/29\/how-to-use-haproxy-stats-socket\/<\/a><\/p>\n<h2>Deployments<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: rpc-proxy-config-file\ndata:\n  haproxy.cfg: |\n    ...\n  config.json: |\n    ...\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: rpc-proxy\nspec:\n  replicas: 2\n  selector:\n    matchLabels:\n      app: rpc-proxy\n  template:\n    metadata:\n      labels:\n        
app: rpc-proxy\n    spec:\n      volumes:\n        - name: rpc-proxy-config-file\n          configMap:\n            name: rpc-proxy-config-file\n      containers:\n        - name: haproxy\n          image: haproxy:2.7.0\n          ports:\n            - containerPort: 8000\n              protocol: TCP\n          resources:\n            requests:\n              cpu: 200m\n              memory: 256Mi\n            limits:\n              cpu: 1000m\n              memory: 256Mi\n          volumeMounts:\n            - name: rpc-proxy-config-file\n              subPath: haproxy.cfg\n              mountPath: \/usr\/local\/etc\/haproxy\/haproxy.cfg\n              readOnly: true\n        - name: node-weighter\n          image: your-node-weighter\n          command: [\"node\", \".\/index.js\"]\n          resources:\n            requests:\n              cpu: 200m\n              memory: 256Mi\n            limits:\n              cpu: 1000m\n              memory: 256Mi\n          volumeMounts:\n            - name: rpc-proxy-config-file\n              subPath: config.json\n              mountPath: \/path\/to\/build\/config.json\n              readOnly: true\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: rpc-proxy\nspec:\n  clusterIP: None\n  selector:\n    app: rpc-proxy\n  ports:\n    - name: http\n      port: 8000\n      targetPort: 8000<\/code><\/pre>\n<p>The RPC load balancer can then be accessed through <code>http:\/\/rpc-proxy.default.svc.cluster.local:8000<\/code> inside the Kubernetes cluster.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/www.containiq.com\/post\/kubernetes-sidecar-container\">https:\/\/www.containiq.com\/post\/kubernetes-sidecar-container<\/a><br \/>\n<a href=\"https:\/\/hub.docker.com\/_\/haproxy\">https:\/\/hub.docker.com\/_\/haproxy<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>To achieve high availability and better performance, we could build a HAProxy load balancer in front of multiple Ethereum RPC providers, and also automatically 
adjust traffic weights based on the latency and block timestamp of each RPC endpoint. ref: https:\/\/www.haproxy.org\/ Configurations In haproxy.cfg, we have a backend named rpc-backend, and two RPC endpoints: quicknode and&hellip; <a href=\"https:\/\/vinta.ws\/code\/deploy-ethereum-rpc-provider-load-balancer-with-haproxy-in-kubernetes-aws-eks.html\" class=\"more-link\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":849,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[137,38],"tags":[16,136,138,142,123,140],"class_list":["post-848","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blockchain","category-about-devops","tag-amazon-web-services","tag-aws-eks","tag-ethereum","tag-haproxy","tag-kubernetes","tag-typescript"],"_links":{"self":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/848","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/comments?post=848"}],"version-history":[{"count":0,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/848\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media\/849"}],"wp:attachment":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media?parent=848"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/categories?post=848"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/tags?post=848"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}