The Graph: Subgraph and GRT Token

The Graph is a protocol for indexing and querying blockchain data. Currently, The Graph has a legacy version and a decentralized version. The legacy version is a centralized, managed service hosted by The Graph, and it will eventually be shut down. The decentralized version (aka The Graph Network) consists of 4 major roles: Developers, Indexers, Curators, and Delegators.

ref:
https://thegraph.com/

Video from Finematics
https://finematics.com/the-graph-explained/

Terminology

Define a Subgraph

A subgraph defines one or more entities of indexed data: what kind of data on the blockchain you want to index for faster querying. Once deployed, a subgraph can be queried by dApps to fetch blockchain data to power their frontend interfaces. Basically, a subgraph is like a database, and an entity (a GraphQL type) is like a table in an RDBMS.

A subgraph definition consists of 3 files:

  • subgraph.yaml: a YAML file that defines the subgraph manifest and metadata.
  • schema.graphql: a GraphQL schema that defines what data (entities) are stored, and how to query it via GraphQL.
  • mappings.ts: AssemblyScript code that translates blockchain data (events, blocks, or contract calls) to GraphQL entities.

GraphQL Schema

First, we need to design our GraphQL entity schemas (the data model), which depends mainly on how you want to query the data rather than how the data is emitted from the blockchain. GraphQL schemas are defined using the GraphQL schema language.

Here are some notes about a subgraph's GraphQL schema:

  • Every entity type is annotated with the @entity directive, and every entity must have a unique id field.
  • Besides the standard GraphQL scalars, fields can use types provided by The Graph such as Bytes, BigInt, and BigDecimal.
  • @derivedFrom defines a reverse lookup: the field is resolved at query time from the entity that stores the relationship, so it cannot be set from mappings.

An example of schema.graphql:

type Market @entity {
  id: ID!
  baseToken: Bytes!
  pool: Bytes!
  feeRatio: BigInt!
  tradingFee: BigDecimal!
  tradingVolume: BigDecimal!
  blockNumberAdded: BigInt!
  timestampAdded: BigInt!
}

type Trader @entity {
  id: ID!
  realizedPnl: BigDecimal!
  fundingPayment: BigDecimal!
  tradingFee: BigDecimal!
  badDebt: BigDecimal!
  totalPnl: BigDecimal!
  positions: [Position!]! @derivedFrom(field: "traderRef")
}

type Position @entity {
  id: ID!
  trader: Bytes!
  baseToken: Bytes!
  positionSize: BigDecimal!
  openNotional: BigDecimal!
  openPrice: BigDecimal!
  realizedPnl: BigDecimal!
  tradingFee: BigDecimal!
  badDebt: BigDecimal!
  totalPnl: BigDecimal!
  traderRef: Trader!
}

ref:
https://thegraph.com/docs/developer/create-subgraph-hosted#the-graphql-schema
https://thegraph.com/docs/developer/graphql-api
https://graphql.org/learn/schema/

It's also worth noting that The Graph supports time-travel queries: we can query the state of our entities at an arbitrary block in the past:

{
  positions(
    block: {
      number: 1234567
    },
    where: {
      trader: "0x5abfec25f74cd88437631a7731906932776356f9"
    }
  ) {
    id
    trader
    baseToken
    positionSize
    openNotional
    openPrice
    realizedPnl
    tradingFee
    badDebt
    totalPnl
  }
}

ref:
https://thegraph.com/docs/developer/graphql-api#time-travel-queries

Subgraph Manifest

Second, we must provide a manifest that tells The Graph which contracts we want to listen to and which contract events we want to index, plus a mapping file that instructs The Graph how to transform blockchain data into GraphQL entities.

A template file of subgraph.yaml:

specVersion: 0.0.2
description: Test Subgraph
repository: https://github.com/vinta/my-subgraph
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: ClearingHouse
    network: {{ network }}
    source:
      abi: ClearingHouse
      address: {{ clearingHouse.address }}
      startBlock: {{ clearingHouse.startBlock }}
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.4
      language: wasm/assemblyscript
      file: ./src/mappings/clearingHouse.ts
      entities:
        - Protocol
        - Market
        - Trader
        - Position
      abis:
        - name: ClearingHouse
          file: ./abis/ClearingHouse.json
      eventHandlers:
        - event: PoolAdded(indexed address,indexed uint24,indexed address)
          handler: handlePoolAdded
        - event: PositionChanged(indexed address,indexed address,int256,int256,uint256,int256,uint256)
          handler: handlePositionChanged

Since we usually deploy our contracts to multiple chains (at least one mainnet and one testnet), we can use a template engine (like mustache.js) to facilitate deployment.

$ cat configs/arbitrum-rinkeby.json
{
    "network": "arbitrum-rinkeby",
    "clearingHouse": {
        "address": "0xYourContractAddress",
        "startBlock": 1234567
    }
}

# generate the subgraph manifest for different networks
$ mustache configs/arbitrum-rinkeby.json subgraph.template.yaml > subgraph.yaml
$ mustache configs/arbitrum-one.json subgraph.template.yaml > subgraph.yaml

It's worth noting that The Graph Legacy (the Hosted Service) supports most common networks, for instance: mainnet, rinkeby, bsc, matic, arbitrum-one, and optimism. However, The Graph Network (the decentralized version) only supports Ethereum mainnet and rinkeby.

You could find the full list of supported networks on the document:
https://thegraph.com/docs/developer/create-subgraph-hosted#from-an-existing-contract

Mappings

Mappings are written in AssemblyScript and will be compiled to WebAssembly (WASM) when deploying. AssemblyScript's syntax is similar to TypeScript, but it's actually a completely different language.

For each event handler defined in subgraph.yaml under mapping.eventHandlers, we must create an exported function of the same name. What we do in an event handler is basically:

  1. Creating new entities or loading existing ones by id.
  2. Updating entity fields from a blockchain event.
  3. Saving entities to The Graph.
    • It's not necessary to load an entity before updating it. It's fine to simply create the entity, set properties, then save. If the entity already exists, the changes will be merged automatically.

For example:

export function handlePoolAdded(event: PoolAdded): void {
    // upsert Protocol
    const protocol = getOrCreateProtocol()
    protocol.publicMarketCount = protocol.publicMarketCount.plus(BI_ONE)

    // upsert Market
    const market = getOrCreateMarket(event.params.baseToken)
    market.baseToken = event.params.baseToken
    market.pool = event.params.pool
    market.feeRatio = BigInt.fromI32(event.params.feeRatio)
    market.blockNumberAdded = event.block.number
    market.timestampAdded = event.block.timestamp

    // commit changes
    protocol.save()
    market.save()
}

export function handlePositionChanged(event: PositionChanged): void {
    // NOTE: conversions from the raw integer event values to BigDecimal are elided

    // upsert Market
    const market = getOrCreateMarket(event.params.baseToken)
    market.tradingFee = market.tradingFee.plus(event.params.fee)
    market.tradingVolume = market.tradingVolume.plus(abs(event.params.exchangedPositionNotional))
    ...

    // upsert Trader
    const trader = getOrCreateTrader(event.params.trader)
    trader.tradingFee = trader.tradingFee.plus(event.params.fee)
    trader.realizedPnl = trader.realizedPnl.plus(event.params.realizedPnl)
    ...

    // upsert Position
    const position = getOrCreatePosition(event.params.trader, event.params.baseToken)
    const side = event.params.exchangedPositionSize.ge(BD_ZERO) ? Side.BUY : Side.SELL
    ...

    // commit changes
    market.save()
    trader.save()
    position.save()
}
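
The getOrCreate* helpers above are not part of @graphprotocol/graph-ts; they follow the common load-or-create pattern. Here's a minimal sketch of one of them, assuming the entity classes are generated into ../generated/schema by graph codegen:

import { Address, BigDecimal } from "@graphprotocol/graph-ts"
import { Trader } from "../generated/schema"

export function getOrCreateTrader(traderAddr: Address): Trader {
    // entity ids are strings; the hex-encoded address is a common choice
    let trader = Trader.load(traderAddr.toHexString())
    if (trader === null) {
        trader = new Trader(traderAddr.toHexString())
        trader.realizedPnl = BigDecimal.fromString("0")
        // ...initialize the remaining BigDecimal fields the same way
        trader.save()
    }
    return trader as Trader
}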

We can also access contract states and call contract functions at the block being processed. The functionality of calling contract functions is limited by @graphprotocol/graph-ts, though; it's not as powerful as libraries like ethers.js. And no, we cannot import ethers.js in mappings, as mappings are written in AssemblyScript. Also note that contract calls are quite "expensive" in terms of indexing performance. In extreme cases, some indexers might avoid syncing a very slow subgraph, or charge a premium for serving queries.

export function handlePoolAdded(event: PoolAdded): void {
    ...
    const pool = UniswapV3Pool.bind(event.params.pool)
    market.poolTickSpacing = pool.tickSpacing()
    ...
}
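
A contract call can also revert. For each contract function, graph codegen additionally generates a try_-prefixed variant that returns an ethereum.CallResult instead of aborting the mapping when the call fails; a minimal sketch, reusing the helpers from above:

export function handlePoolAdded(event: PoolAdded): void {
    const market = getOrCreateMarket(event.params.baseToken)
    const pool = UniswapV3Pool.bind(event.params.pool)

    // try_tickSpacing() lets us handle a reverted call gracefully
    const result = pool.try_tickSpacing()
    if (!result.reverted) {
        market.poolTickSpacing = result.value
    }
    market.save()
}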

ref:
https://thegraph.com/docs/developer/create-subgraph-hosted#writing-mappings
https://thegraph.com/docs/developer/assemblyscript-api

In addition to event handlers, we're also able to define call handlers and block handlers. A call handler listens to a specific contract function call, and receives the inputs and outputs of the call as the handler argument. A block handler, in contrast, is called after every block, or only after blocks that match a predefined filter; currently the only supported filter is call, which runs the handler for every block that contains a call to the contract listed in dataSources.
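
For example, in the mapping section of subgraph.yaml (the function, handler, and filter entries below are illustrative, not from the ClearingHouse contract):

      callHandlers:
        - function: addLiquidity(address,uint256)
          handler: handleAddLiquidity
      blockHandlers:
        # runs after every block
        - handler: handleNewBlock
        # runs only for blocks containing a call to the data source contract
        - handler: handleBlockWithCall
          filter:
            kind: call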

ref:
https://thegraph.com/docs/developer/create-subgraph-hosted#defining-a-call-handler
https://thegraph.com/docs/developer/create-subgraph-hosted#block-handlers

Here are some references showing how other projects organize their subgraphs:
https://github.com/Uniswap/uniswap-v3-subgraph
https://github.com/Synthetixio/synthetix-subgraph
https://github.com/mcdexio/mai3-perpetual-graph

Deploy a Subgraph

Deploy to Legacy Explorer

Before deploying our subgraph to the Legacy Explorer (the centralized, hosted version of The Graph), we need to create it on the Legacy Explorer dashboard.

Then run the following commands to deploy:

$ mustache configs/arbitrum-rinkeby.json subgraph.template.yaml > subgraph.yaml

$ graph auth --product hosted-service <YOUR_THE_GRAPH_ACCESS_TOKEN>
$ graph deploy --product hosted-service <YOUR_GITHUB_USERNAME>/<YOUR_SUBGRAPH_REPO>
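
Before deploying, we typically also run graph codegen (which generates AssemblyScript classes from schema.graphql and the contract ABIs) and graph build (which compiles the mappings to WebAssembly):

$ graph codegen   # generate AssemblyScript types from the schema and ABIs
$ graph build     # compile mappings to WASM and validate the subgraph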

ref:
https://thegraph.com/docs/developer/deploy-subgraph-hosted

Deploy to Subgraph Studio

When we deploy a subgraph to Subgraph Studio (the decentralized version of The Graph), we just push it to the Studio where we're able to test it. In contrast, when we "publish" a subgraph in Subgraph Studio, we are publishing it on-chain. Unfortunately, Subgraph Studio currently only supports Ethereum mainnet and the Rinkeby testnet.

$ graph auth --studio <YOUR_SUBGRAPH_DEPLOY_KEY>
$ graph deploy --studio <YOUR_SUBGRAPH_SLUG>

ref:
https://thegraph.com/docs/developer/deploy-subgraph-studio

Token Economics

Before we talk about the token economics of the GRT token, it's important to know that the following description only applies to The Graph Network, the decentralized version of The Graph. Also, the name "The Graph Network" is a bit ambiguous: it is not a new network or a new blockchain; instead, it is a web service that charges for HTTP/WebSocket API calls in GRT.

To make the GRT token somehow valuable, when you query data (through GraphQL APIs) from The Graph Network, you need to pay for each query in GRT. First, you connect your wallet and create an account on Subgraph Studio to obtain an API key; then you deposit some GRT into the account's billing balance on Polygon, since the billing contract is built on Polygon. At the end of each week, if you used your API keys to query data, you will receive an invoice based on the query fees you generated during this period. This invoice will be paid using the GRT available in your balance.
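
Queries are then sent through the network's gateway with your API key embedded in the URL. A sketch (the subgraph ID is a placeholder, and the markets query assumes the schema defined earlier):

$ curl -X POST \
    -H "Content-Type: application/json" \
    -d '{"query": "{ markets(first: 5) { id tradingVolume } }"}' \
    "https://gateway.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>"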

ref:
https://thegraph.com/docs/studio/billing

When it comes to token economics:

  • Indexers earn query fees and indexing rewards. Their staked GRT can be slashed if they are malicious or serve incorrect data, though there's no documentation about how exactly slashing works.
  • Delegators earn a portion of query fees and indexing rewards by delegating GRT to existing indexers.
  • Curators earn a portion of query fees for the subgraphs they signal on by depositing GRT into a bonding curve of a specific subgraph.

ref:
https://thegraph.com/blog/the-graph-grt-token-economics
https://thegraph.com/blog/the-graph-network-in-depth-part-1
https://thegraph.com/blog/the-graph-network-in-depth-part-2

Query Fee

The price of queries is set by indexers and varies based on the cost to index the subgraph, the demand for queries, the amount of curation signal, and the market rate for blockchain queries. Querying data from the hosted version of The Graph is free for now, though.

The Graph has developed a Cost Model (Agora) for pricing queries, and there is also a microtransaction system (Scalar) that uses state channels to aggregate and compress transactions before being finalized on-chain.

ref:
https://github.com/graphprotocol/agora
https://thegraph.com/blog/scalar

IPFS: The (Very Slow) Distributed Permanent Web

IPFS stands for InterPlanetary File System, but you could simply consider it a distributed, permanent, but ridiculously slow and not properly functioning version of the web. You can upload any static file or static website to IPFS. And the whole swarm would probably distribute your files to the moon, which might be why IPFS is so fucking slow.

ref:
https://ipfs.io/

Installation

Install on macOS.

$ brew install ipfs

Start your IPFS node.

$ ipfs init
initializing IPFS node at /Users/vinta/.ipfs
generating 2048-bit RSA keypair... done
peer identity: QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6

$ ipfs daemon

ref:
https://ipfs.io/docs/commands/#ipfs-init
https://ipfs.io/docs/commands/#ipfs-daemon

Furthermore, you might want to run your IPFS node in a Docker container.

# docker-compose.yml
version: "3"
services:
    ipfs:
        image: ipfs/go-ipfs:v0.4.15
        working_dir: /export
        ports:
            - "4001:4001" # Swarm
            - "5001:5001" # web UI
            - "8080:8080" # HTTP proxy
        volumes:
            - "~/.ipfs:/data/ipfs"
            - "~/.ipfs/export:/export"

ref:
https://hub.docker.com/r/ipfs/go-ipfs/

Usage

Show Node Info

$ ipfs id
{
    "ID": "QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6",
    "PublicKey": "A_LONG_LONG_LONG_KEY,
    "Addresses": [
        "/ip4/127.0.0.1/tcp/4001/ipfs/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6",
        "/ip4/172.19.0.2/tcp/4001/ipfs/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6"
    ],
    "AgentVersion": "go-ipfs/0.4.14/5db3846",
    "ProtocolVersion": "ipfs/0.1.0"
}

ref:
https://ipfs.io/docs/getting-started/

Add Other Nodes to Your Bootstrap List

This one is from Muzeum, https://muzeum.pro/.

$ ipfs bootstrap add /ip4/52.221.121.238/tcp/4001/ipfs/QmTKYdZDkqHiY24kPynSmKbmRdk7cJxWsvvfvvvZArQ1N9

# you could also connect to a node directly
$ ipfs swarm connect /ip4/52.221.121.238/tcp/4001/ipfs/QmTKYdZDkqHiY24kPynSmKbmRdk7cJxWsvvfvvvZArQ1N9

ref:
https://ipfs.io/docs/commands/#ipfs-bootstrap
https://ipfs.io/docs/commands/#ipfs-swarm

Add Files to IPFS

Every IPFS node's default storage is 10GB, and a single node only stores the data it needs, which means each node holds only a small part of all the data on IPFS. If there are not enough nodes, your data might not be distributed to anyone except your own node.

Your content is automatically pinned when you ipfs add it.

$ ipfs add -r mysite
added QmRticJ3P5fnb9GGnUj3U9XMkYvGEnv9AQfk6YmgRhivYA mysite/index.html
added QmY9cxiHqTFoWamkQVkpmmqzBrY3hCBEL2XNu3NtX74Fuu mysite/readme.md
added QmTLhFgeWLacpbiGNYmhchHGQAhfNyDZcLt5akJFFLV89V mysite

If files/folders under the folder change, the hash of the folder changes too.

$ vim mysite/index.html
$ ipfs add -r mysite
added QmQTTe3deLfeULKjPHnQTcyFuCmY5JZiwSTiPT4nSt1KVK mysite/index.html # changed
added QmS85tb3aKQNurFm51FaxtK6NyNei4ej3gDR21baDZXRoU mysite            # changed

ref:
https://ipfs.io/docs/commands/#ipfs-add

Pin Files from IPFS

Pinning means storing IPFS files on your local node and preventing them from being garbage-collected. Also, you can access pinned files much more quickly. You only need ipfs pin add to pin content someone else uploaded.

$ ipfs pin add -r --progress /ipns/ipfs.soundscape.net/

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_group/index.json
pinned QmZwTEhdjT4MyvEnWndVEJzBjp8zGGZH1cEBpshBQs75rY recursively

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_album/index.json
pinned QmSAuGU5xt5SdR2ca2EDgeHFATSrAQhTfTYpYs9K9qmqED recursively

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_recording/index.json
pinned QmcTiadA9jRMXx77tydPa6492QJAtjXkKkA4gERaFksy94 recursively

$ ipfs pin add --progress /ipns/ipfs.soundscape.net/music_composition/index.json
pinned QmTfqVaGVRnaPRQgYypGYXUvTK1UcDfK5VWYvU4rwK3m26 recursively

P.S. Sometimes when I ipfs pin add a file which is not on my node, the command just hangs there. I'm not sure why, but once I access the file first (through curl or any browser), ipfs pin add works fine. It doesn't make much sense though: if I have already downloaded the file, I could just ipfs add it, and it would be pinned automatically.

ref:
https://ipfs.io/docs/commands/#ipfs-pin

Get Files

You have several ways to get files or folders from IPFS:

  • ipfs get dir-hash -o readable-dir-name
    • ipfs get QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk -o mysite
  • ipfs get file-hash -o readable-file-name.ext
    • ipfs get Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u -o cat.jpg
  • ipfs get /ipfs/dir-hash/path/to/file.txt
    • ipfs get /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme
  • ipfs get /ipns/example.com/path/to/file.txt
    • ipfs get /ipns/ipfs.soundscape.net/music_group/index.json

You could also access IPFS files through any public gateway:

  • curl https://ipfs.io/ipns/peer-id/path/to/file.txt
    • curl https://ipfs.io/ipns/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6/index.html
  • curl https://ipfs.io/ipns/example.com/path/to/file.txt
    • curl https://ipfs.io/ipns/ipfs.soundscape.net/music_group/index.json
  • curl http://127.0.0.1:8080/ipns/example.com/path/to/file.txt
    • curl http://127.0.0.1:8080/ipns/ipfs.soundscape.net/music_group/index.json

Download IPFS objects with ipfs get.

$ ipfs ls QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ
Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u 443362 cat.jpg

# you could get a folder
$ ipfs get QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ
$ ls QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ
cat.jpg

# as well as a file
$ ipfs get Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u -o cat.jpg

# get files and rename them
$ mkdir -p soundscape/music_group/ soundscape/music_album/ soundscape/music_recording/ soundscape/music_composition/ && \
  ipfs get /ipns/ipfs.soundscape.net/music_group/index.json -o soundscape/music_group/index.json; \
  ipfs get /ipns/ipfs.soundscape.net/music_album/index.json -o soundscape/music_album/index.json; \
  ipfs get /ipns/ipfs.soundscape.net/music_recording/index.json -o soundscape/music_recording/index.json; \
  ipfs get /ipns/ipfs.soundscape.net/music_composition/index.json -o soundscape/music_composition/index.json

# get whole folders
$ ipfs get /ipns/ipfs.soundscape.net/music_group; \
  ipfs get /ipns/ipfs.soundscape.net/music_album; \
  ipfs get /ipns/ipfs.soundscape.net/music_recording; \
  ipfs get /ipns/ipfs.soundscape.net/music_composition

ref:
https://ipfs.io/docs/commands/#ipfs-get
https://discuss.ipfs.io/t/trying-to-better-understand-the-pinning-concept/754

Display IPFS object data with ipfs cat.

$ ipfs cat Qmd286K6pohQcTKYqnS1YhWrCiS4gz7Xi34sdwMe9USZ7u > cat.jpg
$ ipfs cat QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme

Publish a Website to IPNS

IPNS stands for InterPlanetary Name System.

Every time you change files under a folder, the hash of the folder changes too. So you need a static reference that always points to the latest hash of your folder. You can publish your static website (a folder) to IPNS under that static reference, which is your peer ID (the hash of your public key).

By default, every IPFS node has only one keypair, so you can only publish one folder under your peer ID. But you can generate new keypairs with ipfs key gen and publish multiple folders, as shown below.
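
For example (mysite-blog is a hypothetical key name; the node's default key is self):

# generate an additional keypair
$ ipfs key gen --type=rsa --size=2048 mysite-blog

# publish a folder under the new key instead of the default "self" key
$ ipfs name publish --key=mysite-blog QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk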

$ ipfs add -r mysite
added QmeqHWZgvgx5C7T6DakX75CJDRgAUoSDZayLYrcnAP8Fma mysite/index.html
added QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5 mysite

$ ipfs name publish QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5
published to QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6: /ipfs/QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5

$ ipfs name resolve QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6
/ipfs/QmUtuRphD9rJgRkfxwj7DcyFEAcSeH3Q1fK8nHxxoDiKK5

After you change something, publish it again with the new hash.

$ vim mysite/index.html
$ ipfs add -r mysite
added QmNjbhdks8RUgDt6QiNFe5QGe2HrbCsq5FKda9D9hLVkkU mysite/index.html # changed
added QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk mysite            # changed

$ ipfs name publish QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk
published to QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6: /ipfs/QmbMQNcg8TTo5dXZPtuxbns1XVq6cZJaa7vNqZzeJpKwfk

ref:
https://ipfs.io/docs/commands/#ipfs-name

Create a Domain Name Alias for Your Peer ID

The hash is not very human-friendly. Fortunately, you can, and probably should, associate a domain name with your peer ID.

First, you need to add a TXT record whose value is dnslink=/ipns/YOUR_PEER_ID to your domain name. In the following examples, we assume the domain name you choose is ipfs.kittenphile.com.

$ dig +short TXT ipfs.kittenphile.com
"dnslink=/ipns/QmfNy1th16zscbpxe8Q2EQdQkNFn7Y3Rp9kGZWL1EQDyw6"

$ ipfs name resolve -r ipfs.kittenphile.com
/ipfs/QmaE2DcNxGjPGPfzfTQuTBTW9D57abVSv319WqC89Av1y1

ref:
https://ipfs.io/docs/examples/example-viewer/example#../websites/README.md
https://hackernoon.com/ten-terrible-attempts-to-make-the-inter-planetary-file-system-human-friendly-e4e95df0c6fa

Public Gateway

If you have a public gateway and people retrieve files through it, your gateway fetches and stores the data, but it doesn't pin them. The files will be removed with the next garbage collection run.
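
To keep frequently requested content on your gateway node, pin it explicitly; unpinned blocks are reclaimed whenever garbage collection runs:

# pin the content so garbage collection won't remove it
$ ipfs pin add QmW2WQi7j6c7UgJTarActp7tDNikE4B2qXtFCfLPdsgaTQ

# manually trigger garbage collection; unpinned blocks are deleted
$ ipfs repo gc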

ref:
https://discuss.ipfs.io/t/public-facing-gateway-and-pinning/449