{"id":842,"date":"2022-12-29T16:22:18","date_gmt":"2022-12-29T08:22:18","guid":{"rendered":"https:\/\/vinta.ws\/code\/?p=842"},"modified":"2026-03-17T00:41:49","modified_gmt":"2026-03-16T16:41:49","slug":"deploy-graph-node-the-graph-in-kubernetes-aws-eks","status":"publish","type":"post","link":"https:\/\/vinta.ws\/code\/deploy-graph-node-the-graph-in-kubernetes-aws-eks.html","title":{"rendered":"Deploy graph-node (The Graph) in Kubernetes (AWS EKS)"},"content":{"rendered":"<p><code>graph-node<\/code> is an open source software that indexes blockchain data, also known as an indexer. Though the cost of running a self-hosted graph node could be pretty high. We're going to deploy a self-hosted graph node on Amazon Elastic Kubernetes Service (EKS).<\/p>\n<p>In this article, we have two approaches to deploy self-hosted graph node:<\/p>\n<ol>\n<li>A single graph node with a single PostgreSQL database<\/li>\n<li>A graph node cluster with a primary-secondary PostgreSQL cluster\n<ul>\n<li>A graph node cluster consists of one index node and multiple query nodes<\/li>\n<li>For the database, we will simply use a Multi-AZ DB cluster on AWS RDS<\/li>\n<\/ul>\n<\/li>\n<\/ol>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/graphprotocol\/graph-node\">https:\/\/github.com\/graphprotocol\/graph-node<\/a><\/p>\n<h2>Create a PostgreSQL Database on AWS RDS<\/h2>\n<p>Hardware requirements for running a graph-node:<br \/>\n<a href=\"https:\/\/thegraph.com\/docs\/en\/network\/indexing\/#what-are-the-hardware-requirements\">https:\/\/thegraph.com\/docs\/en\/network\/indexing\/#what-are-the-hardware-requirements<\/a><br \/>\n<a href=\"https:\/\/docs.thegraph.academy\/official-docs\/indexer\/testnet\/graph-protocol-testnet-baremetal\/1_architectureconsiderations\">https:\/\/docs.thegraph.academy\/official-docs\/indexer\/testnet\/graph-protocol-testnet-baremetal\/1_architectureconsiderations<\/a><\/p>\n<h3>A Single DB Instance<\/h3>\n<p>We use the following settings on 
staging.<\/p>\n<ul>\n<li>Version: <code>PostgreSQL 13.7-R1<\/code><\/li>\n<li>Template: <code>Dev\/Test<\/code><\/li>\n<li>Deployment option: <code>Single DB instance<\/code><\/li>\n<li>DB instance identifier:\u00a0<code>graph-node<\/code><\/li>\n<li>Master username: <code>graph_node<\/code><\/li>\n<li>Auto generate a password: <code>Yes<\/code><\/li>\n<li>DB instance class: <code>db.t3.medium<\/code> (2 vCPU 4G RAM)<\/li>\n<li>Storage type: <code>gp2<\/code><\/li>\n<li>Allocated storage: <code>500 GB<\/code><\/li>\n<li>Enable storage autoscaling: <code>No<\/code><\/li>\n<li>Compute resource:\u00a0<code>Don\u2019t connect to an EC2 compute resource<\/code><\/li>\n<li>Network type: <code>IPv4<\/code><\/li>\n<li>VPC:\u00a0<code>eksctl-perp-staging-cluster\/VPC<\/code><\/li>\n<li>DB Subnet group: <code>default-vpc<\/code><\/li>\n<li>Public access: <code>Yes<\/code><\/li>\n<li>VPC security group: <code>graph-node<\/code><\/li>\n<li>Availability Zone: <code>ap-northeast-1d<\/code><\/li>\n<li>Initial database name:\u00a0<code>graph_node<\/code><\/li>\n<\/ul>\n<h3>A\u00a0Multi-AZ DB Cluster<\/h3>\n<p>We use the following settings on production.<\/p>\n<ul>\n<li>Version: <code>PostgreSQL 13.7-R1<\/code><\/li>\n<li>Template: <code>Production<\/code><\/li>\n<li>Deployment option:\u00a0<code>Multi-AZ DB Cluster<\/code><\/li>\n<li>DB instance identifier:\u00a0<code>graph-node-cluster<\/code><\/li>\n<li>Master username: <code>graph_node<\/code><\/li>\n<li>Auto generate a password: <code>Yes<\/code><\/li>\n<li>DB instance class: <code>db.m6gd.2xlarge<\/code> (8 vCPU 32G RAM)<\/li>\n<li>Storage type: <code>io1<\/code><\/li>\n<li>Allocated storage: <code>500 GB<\/code><\/li>\n<li>Provisioned IOPS: <code>1000<\/code><\/li>\n<li>Enable storage autoscaling: <code>No<\/code><\/li>\n<li>Compute resource:\u00a0<code>Don\u2019t connect to an EC2 compute resource<\/code><\/li>\n<li>VPC:\u00a0<code>eksctl-perp-production-cluster\/VPC<\/code><\/li>\n<li>DB Subnet group: 
<code>default-vpc<\/code><\/li>\n<li>Public access: <code>Yes<\/code><\/li>\n<li>VPC security group: <code>graph-node<\/code><\/li>\n<\/ul>\n<p>Unfortunately, AWS currently does not offer a Reserved Instances (RIs) plan for Multi-AZ DB clusters. Use &quot;Multi-AZ DB instance&quot; or &quot;Single DB instance&quot; instead if cost is a big concern for you.<\/p>\n<h3>RDS Remote Access<\/h3>\n<p>You can test remote access to your database.\u00a0Also, make sure the security group's inbound rules include port <code>5432<\/code> for PostgreSQL.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">brew install postgresql\n\npsql --host=YOUR_RDS_ENDPOINT --port=5432 --username=graph_node --password --dbname=postgres\n# or\ncreatedb -h YOUR_RDS_ENDPOINT -p 5432 -U graph_node graph_node<\/code><\/pre>\n<h2>Create a Dedicated EKS Node Group<\/h2>\n<p>This step is optional.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">eksctl --profile=perp create nodegroup \\\n  --cluster perp-production \\\n  --region ap-southeast-1 \\\n  --name \"managed-graph-node-m5-xlarge\" \\\n  --node-type \"m5.xlarge\" \\\n  --nodes 3 \\\n  --nodes-min 3 \\\n  --nodes-max 3 \\\n  --managed \\\n  --asg-access \\\n  --alb-ingress-access<\/code><\/pre>\n<h2>Deploy graph-node in Kubernetes<\/h2>\n<h3>Deployments for a Single DB Instance<\/h3>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">apiVersion: v1\nkind: Service\nmetadata:\n  name: graph-node\nspec:\n  clusterIP: None\n  selector:\n    app: graph-node\n  ports:\n    - name: jsonrpc\n      port: 8000\n      targetPort: 8000\n    - name: websocket\n      port: 8001\n      targetPort: 8001\n    - name: admin\n      port: 8020\n      targetPort: 8020\n    - name: index-node\n      port: 8030\n      targetPort: 8030\n    - name: metrics\n      port: 8040\n      targetPort: 8040\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: graph-node-config\ndata:\n  # 
https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md\n  GRAPH_LOG: info\n  EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: synced\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: graph-node-secret\ntype: Opaque\n# stringData accepts plain text; Kubernetes stores it base64-encoded under \"data\"\nstringData:\n  postgres_host: xxx\n  postgres_user: xxx\n  postgres_db: xxx\n  postgres_pass: xxx\n  ipfs: https:\/\/ipfs.network.thegraph.com\n  ethereum: optimism:https:\/\/YOUR_RPC_ENDPOINT_1 optimism:https:\/\/YOUR_RPC_ENDPOINT_2\n---\napiVersion: apps\/v1\nkind: StatefulSet\nmetadata:\n  name: graph-node\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: graph-node\n  serviceName: graph-node\n  template:\n    metadata:\n      labels:\n        app: graph-node\n      annotations:\n        \"cluster-autoscaler.kubernetes.io\/safe-to-evict\": \"false\"\n    spec:\n      containers:\n        # https:\/\/github.com\/graphprotocol\/graph-node\/releases\n        # https:\/\/hub.docker.com\/r\/graphprotocol\/graph-node\n        - name: graph-node\n          image: graphprotocol\/graph-node:v0.29.0\n          envFrom:\n            - secretRef:\n                name: graph-node-secret\n            - configMapRef:\n                name: graph-node-config\n          ports:\n            - containerPort: 8000\n              protocol: TCP\n            - containerPort: 8001\n              protocol: TCP\n            - containerPort: 8020\n              protocol: TCP\n            - containerPort: 8030\n              protocol: TCP\n            - containerPort: 8040\n              protocol: TCP\n          resources:\n            requests:\n              cpu: 2000m\n              memory: 4G\n            limits:\n              cpu: 4000m\n              memory: 4G<\/code><\/pre>\n<p>Kubernetes Secrets are, by default, stored <strong>unencrypted<\/strong> in the API server's underlying data store (etcd), so anyone with API access can retrieve or modify them. They're not secret at all. 
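<\/p>\n<p>To see this for yourself, note that base64 is an encoding, not encryption: anyone who can read the Secret can decode its values instantly. A minimal sketch (the commented <code>kubectl<\/code> line assumes the <code>graph-node-secret<\/code> manifest above):<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># read a Secret value straight from the API and decode it:\n# kubectl get secret graph-node-secret -o jsonpath='{.data.postgres_pass}' | base64 --decode\n\n# base64 is trivially reversible\necho -n 'graph_node' | base64\n# Z3JhcGhfbm9kZQ==\necho -n 'Z3JhcGhfbm9kZQ==' | base64 --decode\n# graph_node<\/code><\/pre>\n<p>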
So instead of storing sensitive data in Secrets, you might want to use <a href=\"https:\/\/secrets-store-csi-driver.sigs.k8s.io\/introduction.html\">Secrets Store CSI Driver<\/a>.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md\">https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md<\/a><br \/>\n<a href=\"https:\/\/hub.docker.com\/r\/graphprotocol\/graph-node\">https:\/\/hub.docker.com\/r\/graphprotocol\/graph-node<\/a><\/p>\n<h3>Deployments\u00a0for a Multi-AZ DB Cluster<\/h3>\n<p>There are two types of nodes in a graph node cluster:<\/p>\n<ul>\n<li>Index Node: Only indexing data from the blockchain, not serving queries at all<\/li>\n<li>Query Node: Only serving GraphQL queries, not indexing data at all<\/li>\n<\/ul>\n<p>Indexing subgraphs doesn't require much CPU or memory, but serving queries does, especially when you enable GraphQL caching.<\/p>\n<h4>Index Node<\/h4>\n<p>Technically, we could further split an index node into <strong>Ingestor<\/strong> and <strong>Indexer<\/strong>: the former fetches blockchain data from RPC providers periodically, and the latter indexes entities based on mappings. 
That's another story though.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: graph-node-cluster-index-config-file\ndata:\n  config.toml: |\n    [store]\n    [store.primary]\n    connection = \"postgresql:\/\/USER:PASSWORD@RDS_WRITER_ENDPOINT\/DB\"\n    weight = 1\n    pool_size = 2\n\n    [chains]\n    ingestor = \"graph-node-cluster-index-0\"\n    [chains.optimism]\n    shard = \"primary\"\n    provider = [\n      { label = \"optimism-rpc-proxy\", url = \"http:\/\/rpc-proxy-for-graph-node.default.svc.cluster.local:8000\", features = [\"archive\"] }\n      # { label = \"optimism-quicknode\", url = \"https:\/\/YOUR_RPC_ENDPOINT_1\", features = [\"archive\"] },\n      # { label = \"optimism-alchemy\", url = \"https:\/\/YOUR_RPC_ENDPOINT_2\", features = [\"archive\"] },\n      # { label = \"optimism-infura\", url = \"https:\/\/YOUR_RPC_ENDPOINT_3\", features = [\"archive\"] }\n    ]\n\n    [deployment]\n    [[deployment.rule]]\n    indexers = [ \"graph-node-cluster-index-0\" ]\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: graph-node-cluster-index-config-env\ndata:\n  # https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md\n  GRAPH_LOG: \"info\"\n  GRAPH_KILL_IF_UNRESPONSIVE: \"true\"\n  ETHEREUM_POLLING_INTERVAL: \"100\"\n  ETHEREUM_BLOCK_BATCH_SIZE: \"10\"\n  GRAPH_STORE_WRITE_QUEUE: \"50\"\n  EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: \"synced\"\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: graph-node-cluster-index\nspec:\n  clusterIP: None\n  selector:\n    app: graph-node-cluster-index\n  ports:\n    - name: jsonrpc\n      port: 8000\n      targetPort: 8000\n    - name: websocket\n      port: 8001\n      targetPort: 8001\n    - name: admin\n      port: 8020\n      targetPort: 8020\n    - name: index-node\n      port: 8030\n      targetPort: 8030\n    - name: metrics\n      port: 8040\n      targetPort: 8040\n---\napiVersion: 
apps\/v1\nkind: StatefulSet\nmetadata:\n  name: graph-node-cluster-index\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: graph-node-cluster-index\n  serviceName: graph-node-cluster-index\n  template:\n    metadata:\n      labels:\n        app: graph-node-cluster-index\n      annotations:\n        \"cluster-autoscaler.kubernetes.io\/safe-to-evict\": \"false\"\n    spec:\n      terminationGracePeriodSeconds: 10\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n              - matchExpressions:\n                  - key: alpha.eksctl.io\/nodegroup-name\n                    operator: In\n                    values:\n                      - managed-graph-node-m5-xlarge\n          preferredDuringSchedulingIgnoredDuringExecution:\n            - weight: 100\n              preference:\n                matchExpressions:\n                  - key: topology.kubernetes.io\/zone\n                    operator: In\n                    values:\n                      # should schedule the index node to the zone where the RDS writer instance is located\n                      - ap-southeast-1a\n      volumes:\n        - name: graph-node-cluster-index-config-file\n          configMap:\n            name: graph-node-cluster-index-config-file\n      containers:\n        # https:\/\/github.com\/graphprotocol\/graph-node\/releases\n        # https:\/\/hub.docker.com\/r\/graphprotocol\/graph-node\n        - name: graph-node-cluster-index\n          image: graphprotocol\/graph-node:v0.29.0\n          command:\n            [\n              \"\/bin\/sh\",\n              \"-c\",\n              'graph-node --node-id $HOSTNAME --config \"\/config.toml\" --ipfs \"https:\/\/ipfs.network.thegraph.com\"',\n            ]\n          envFrom:\n            - configMapRef:\n                name: graph-node-cluster-index-config-env\n          ports:\n            - containerPort: 8000\n              protocol: TCP\n            - containerPort: 8001\n              protocol: TCP\n            - containerPort: 8020\n              protocol: TCP\n            - containerPort: 8030\n              protocol: TCP\n            - containerPort: 8040\n              protocol: TCP\n          resources:\n            requests:\n              cpu: 1000m\n              memory: 1G\n            limits:\n              cpu: 2000m\n              memory: 1G\n          volumeMounts:\n            - name: graph-node-cluster-index-config-file\n              subPath: config.toml\n              mountPath: \/config.toml\n              readOnly: true<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/config.md\">https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/config.md<\/a><br \/>\n<a href=\"https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md\">https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md<\/a><\/p>\n<p>The key factors in the efficiency of syncing\/indexing subgraphs are:<\/p>\n<ol>\n<li>The latency of the RPC provider<\/li>\n<li>The write throughput of the database<\/li>\n<\/ol>\n<p>I didn't find any <code>graph-node<\/code> configs or environment variables that can speed up the syncing process noticeably. If you know, please tell me.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/graphprotocol\/graph-node\/issues\/3756\">https:\/\/github.com\/graphprotocol\/graph-node\/issues\/3756<\/a><\/p>\n<p>If you're interested in building an RPC proxy with health checks on block number and latency, see <a href=\"https:\/\/vinta.ws\/code\/deploy-ethereum-rpc-provider-load-balancer-with-haproxy-in-kubernetes-aws-eks.html\">Deploy Ethereum RPC Provider Load Balancer with HAProxy in Kubernetes (AWS EKS)<\/a>. 
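<\/p>\n<p>A quick way to eyeball provider lag by hand is to query <code>eth_blockNumber<\/code> on each endpoint and compare the heights. A minimal sketch (the endpoint URLs are the placeholders from above, and <code>jq<\/code> is assumed to be installed):<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\"># eth_blockNumber returns the height as a hex quantity; convert it to decimal\nto_decimal() { echo $((16#${1#0x})); }\nto_decimal \"0x10d4f\"  # prints 68943\n\n# compare each provider's latest block height (fill in real endpoints):\n# for rpc in https:\/\/YOUR_RPC_ENDPOINT_1 https:\/\/YOUR_RPC_ENDPOINT_2; do\n#   hex=$(curl -s -X POST -H 'Content-Type: application\/json' \\\n#     --data '{\"jsonrpc\":\"2.0\",\"method\":\"eth_blockNumber\",\"params\":[],\"id\":1}' \\\n#     \"$rpc\" | jq -r '.result')\n#   echo \"$rpc: $(to_decimal \"$hex\")\"\n# done<\/code><\/pre>\n<p>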
<code>graph-node<\/code> itself cannot detect if an RPC provider's block is delayed.<\/p>\n<h4>Query Nodes<\/h4>\n<p>The most important config is <code>DISABLE_BLOCK_INGESTOR: &quot;true&quot;<\/code> which basically configures the node as a query node.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: graph-node-cluster-query-config-env\ndata:\n  # https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md\n  DISABLE_BLOCK_INGESTOR: \"true\" # this node won't ingest blockchain data\n  GRAPH_LOG_QUERY_TIMING: \"gql\"\n  GRAPH_GRAPHQL_QUERY_TIMEOUT: \"600\"\n  GRAPH_QUERY_CACHE_BLOCKS: \"60\"\n  GRAPH_QUERY_CACHE_MAX_MEM: \"2000\" # the actual used memory could be 3x\n  GRAPH_QUERY_CACHE_STALE_PERIOD: \"100\"\n  EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: \"synced\"\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: graph-node-cluster-query-config-file\ndata:\n  config.toml: |\n    [store]\n    [store.primary]\n    connection = \"postgresql:\/\/USER:PASSWORD@RDS_WRITER_ENDPOINT\/DB\" # connect to RDS writer instance\n    weight = 0\n    pool_size = 2\n    [store.primary.replicas.repl1]\n    connection = \"postgresql:\/\/USER:PASSWORD@RDS_READER_ENDPOINT\/DB\" # connect to RDS reader instances (multiple)\n    weight = 1\n    pool_size = 100\n\n    [chains]\n    ingestor = \"graph-node-cluster-index-0\"\n\n    [deployment]\n    [[deployment.rule]]\n    indexers = [ \"graph-node-cluster-index-0\" ]\n\n    [general]\n    query = \"graph-node-cluster-query*\"\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: graph-node-cluster-query\nspec:\n  replicas: 3\n  strategy:\n    type: RollingUpdate\n    rollingUpdate:\n      maxSurge: 1\n      maxUnavailable: 0\n  minReadySeconds: 5\n  selector:\n    matchLabels:\n      app: graph-node-cluster-query\n  template:\n    metadata:\n      labels:\n        app: graph-node-cluster-query\n    spec:\n      
terminationGracePeriodSeconds: 40\n      affinity:\n        nodeAffinity:\n          requiredDuringSchedulingIgnoredDuringExecution:\n            nodeSelectorTerms:\n              - matchExpressions:\n                  - key: alpha.eksctl.io\/nodegroup-name\n                    operator: In\n                    values:\n                      - managed-graph-node-m5-xlarge\n      volumes:\n        - name: graph-node-cluster-query-config-file\n          configMap:\n            name: graph-node-cluster-query-config-file\n      containers:\n        # https:\/\/github.com\/graphprotocol\/graph-node\/releases\n        # https:\/\/hub.docker.com\/r\/graphprotocol\/graph-node\n        - name: graph-node-cluster-query\n          image: graphprotocol\/graph-node:v0.29.0\n          command:\n            [\n              \"\/bin\/sh\",\n              \"-c\",\n              'graph-node --node-id $HOSTNAME --config \"\/config.toml\" --ipfs \"https:\/\/ipfs.network.thegraph.com\"',\n            ]\n          envFrom:\n            - configMapRef:\n                name: graph-node-cluster-query-config-env\n          ports:\n            - containerPort: 8000\n              protocol: TCP\n            - containerPort: 8001\n              protocol: TCP\n            - containerPort: 8030\n              protocol: TCP\n          resources:\n            requests:\n              cpu: 1000m\n              memory: 8G\n            limits:\n              cpu: 2000m\n              memory: 8G\n          volumeMounts:\n            - name: graph-node-cluster-query-config-file\n              subPath: config.toml\n              mountPath: \/config.toml\n              readOnly: true<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/config.md\">https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/config.md<\/a><br \/>\n<a 
href=\"https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md\">https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/environment-variables.md<\/a><\/p>\n<p>It's also strongly recommended to mark subgraph schemas as immutable with <code>@entity(immutable: true)<\/code>. Immutable entities are much faster to write and to query, so should be used whenever possible. The query time reduces by 80% in our case.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/thegraph.com\/docs\/en\/developing\/creating-a-subgraph\/#defining-entities\">https:\/\/thegraph.com\/docs\/en\/developing\/creating-a-subgraph\/#defining-entities<\/a><\/p>\n<h2>Setup an Ingress for graph-node<\/h2>\n<p>WebSocket connections are inherently sticky. If the client requests a connection upgrade to WebSockets, the target that returns an HTTP 101 status code to accept the connection upgrade is the target used in the WebSockets connection. After the WebSockets upgrade is complete, cookie-based stickiness is not used. 
You don't need to enable stickiness for ALB.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-yaml\">apiVersion: networking.k8s.io\/v1\nkind: Ingress\nmetadata:\n  name: graph-node-ingress\n  annotations:\n    kubernetes.io\/ingress.class: alb\n    alb.ingress.kubernetes.io\/scheme: internet-facing\n    alb.ingress.kubernetes.io\/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10\n    alb.ingress.kubernetes.io\/certificate-arn: arn:aws:acm:XXX\n    alb.ingress.kubernetes.io\/target-type: ip\n    alb.ingress.kubernetes.io\/target-group-attributes: stickiness.enabled=false\n    alb.ingress.kubernetes.io\/load-balancer-attributes: idle_timeout.timeout_seconds=3600,deletion_protection.enabled=true\n    alb.ingress.kubernetes.io\/listen-ports: '[{\"HTTPS\": 443}]'\n    alb.ingress.kubernetes.io\/actions.subgraph-api: &gt;\n      {\"type\": \"forward\", \"forwardConfig\": {\"targetGroups\": [\n        {\"serviceName\": \"graph-node-cluster-query\", \"servicePort\": 8000, \"weight\": 100}\n      ]}}\n    alb.ingress.kubernetes.io\/actions.subgraph-ws: &gt;\n      {\"type\": \"forward\", \"forwardConfig\": {\"targetGroups\": [\n        {\"serviceName\": \"graph-node-cluster-query\", \"servicePort\": 8001, \"weight\": 100}\n      ]}}\n    alb.ingress.kubernetes.io\/actions.subgraph-hc: &gt;\n      {\"type\": \"forward\", \"forwardConfig\": {\"targetGroups\": [\n        {\"serviceName\": \"graph-node-cluster-query\", \"servicePort\": 8030, \"weight\": 100}\n      ]}}\nspec:\n  rules:\n    - host: \"subgraph-api.example.com\"\n      http:\n        paths:\n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: subgraph-api\n                port:\n                  name: use-annotation\n    - host: \"subgraph-ws.example.com\"\n      http:\n        paths:\n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: subgraph-ws\n                
port:\n                  name: use-annotation\n    - host: \"subgraph-hc.example.com\"\n      http:\n        paths:\n          - path: \/\n            pathType: Prefix\n            backend:\n              service:\n                name: subgraph-hc\n                port:\n                  name: use-annotation<\/code><\/pre>\n<h2>Deploy a Subgraph<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-bash\">kubectl port-forward service\/graph-node-cluster-index 8020:8020\nnpx graph create your_org\/your_subgraph --node http:\/\/127.0.0.1:8020\nnpx graph deploy --node http:\/\/127.0.0.1:8020 your_org\/your_subgraph<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/github.com\/graphprotocol\/graph-cli\">https:\/\/github.com\/graphprotocol\/graph-cli<\/a><\/p>\n<p>Here are some useful commands for maintenance tasks:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">kubectl logs -l app=graph-node-cluster-query -f | grep \"query_time_ms\"\n\n# show total pool connections according to the config file\ngraphman --config config.toml config pools graph-node-cluster-query-zone-a-78b467665d-tqw7g\n\n# show info\ngraphman --config config.toml info QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P\n\n# reassign a deployment to an index node\ngraphman --config config.toml reassign QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P graph-node-cluster-index-pending-0\ngraphman --config config.toml reassign QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P graph-node-cluster-index-0\n\n# stop indexing a deployment\ngraphman --config config.toml unassign QmbCm38nL6C7v3FS5snzjnQyDMvV8FkM72C3o5RCfGpM9P\n\n# remove unused deployments\ngraphman --config config.toml unused record\ngraphman --config config.toml unused remove -d QmdoXB4zJVD5n38omVezMaAGEwsubhKdivgUmpBCUxXKDh<\/code><\/pre>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/graphman.md\">https:\/\/github.com\/graphprotocol\/graph-node\/blob\/master\/docs\/graphman.md<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>graph-node is an open source software that indexes blockchain data, also known as an indexer. Though the cost of running a self-hosted graph node could be pretty high. We're going to deploy a self-hosted graph node on Amazon Elastic Kubernetes Service (EKS). In this article, we have two approaches to deploy self-hosted graph node: A&hellip; <a href=\"https:\/\/vinta.ws\/code\/deploy-graph-node-the-graph-in-kubernetes-aws-eks.html\" class=\"more-link\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":843,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[137,38],"tags":[16,136,141],"class_list":["post-842","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blockchain","category-about-devops","tag-amazon-web-services","tag-aws-eks","tag-subgraph"],"_links":{"self":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/842","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/comments?post=842"}],"version-history":[{"count":0,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/842\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media\/843"}],"wp:attachment":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media?parent=842"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/cat
egories?post=842"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/tags?post=842"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}