Merge pull request #349 from subsquid/misc-fixes-by-abernatskiy-dec26
Misc fixes by abernatskiy dec26
abernatskiy authored Dec 26, 2023
2 parents 75a4b34 + c227126 commit 3ef066f
Showing 4 changed files with 14 additions and 14 deletions.
2 changes: 2 additions & 0 deletions docs/cloud/resources/best-practices.md
@@ -8,6 +8,8 @@ description: Checklist for going to production

Here is a list of items to check out before you deploy your squid for use in production:

* Make sure that you use [batch processing](/sdk/resources/basics/batch-processing) throughout your code.

* If your squid [saves its data to a database](/sdk/resources/persisting-data/typeorm), make sure your [schema](/sdk/reference/schema-file) has [`@index` decorators](/sdk/reference/schema-file/indexes-and-constraints) for all entities that will be looked up frequently (see the schema sketch below).

* If your squid serves a [GraphQL API](/sdk/resources/graphql-server), consider:
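To make the `@index` item above concrete, here is a minimal schema sketch. The `Transfer` entity and its fields are made up for this example and are not part of the change above; the decorator placement follows the schema file conventions linked in that item.

```graphql
# Hypothetical entity: `from` and `to` are decorated with @index because
# lookups by these fields are expected to be frequent.
type Transfer @entity {
  id: ID!
  from: String! @index
  to: String! @index
  amount: BigInt!
  timestamp: DateTime!
}
```

Multi-column indexes and unique constraints can also be declared at the entity level; see the [indexes and constraints reference](/sdk/reference/schema-file/indexes-and-constraints) for the full syntax.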
8 changes: 7 additions & 1 deletion docs/sdk/how-to-start/squid-development.mdx
@@ -584,7 +584,13 @@ The alternative is to do the same steps in a different order:
## Scaling up
If you're developing a large squid, you may want to take a look at the [Giant Squid Explorer](https://github.com/subsquid-labs/giant-squid-explorer) and [Thena Squid](https://github.com/subsquid-labs/thena-squid) repos for inspiration.
If you're developing a large squid, make sure to use [batch processing](/sdk/resources/basics/batch-processing) throughout your code.
A common mistake is to write handlers for individual event logs or transactions; for updates that require data retrieval, this results in lots of small database lookups and ultimately in poor syncing performance. Instead, collect all the relevant data for a batch and process it at once; a minimal sketch is shown below. A simple architecture of that type is discussed in the [BAYC tutorial](/sdk/tutorials/bayc).
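To illustrate the batch-first approach, here is a minimal sketch of a handler that decodes everything in memory and writes it with a single `upsert` at the end of the batch, instead of awaiting a database call per log. The `Transfer` model, the `erc20` ABI bindings and the processor module are assumptions made up for this sketch; only the overall structure is the point.

```ts
import {TypeormDatabase} from '@subsquid/typeorm-store'
import {processor} from './processor' // assumed: a configured EvmBatchProcessor instance
import {Transfer} from './model'      // assumed: a TypeORM entity generated from the schema
import * as erc20 from './abi/erc20'  // assumed: ABI bindings generated by squid-evm-typegen

processor.run(new TypeormDatabase(), async (ctx) => {
  const transfers: Transfer[] = []

  // Decode all relevant logs in memory first - no awaits inside the loops
  for (const block of ctx.blocks) {
    for (const log of block.logs) {
      if (log.topics[0] === erc20.events.Transfer.topic) {
        const {from, to, value} = erc20.events.Transfer.decode(log)
        transfers.push(new Transfer({
          id: log.id,
          from,
          to,
          value,
          block: block.header.height
        }))
      }
    }
  }

  // One batched write per batch of blocks
  await ctx.store.upsert(transfers)
})
```

Contrast this with a per-log handler that performs lookups or `save()` calls inside the inner loop: with thousands of logs per batch that turns into thousands of database round trips.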
You should also check the [Cloud best practices page](/cloud/resources/best-practices) even if you're not planning to deploy to [Subsquid Cloud](/cloud) - it contains valuable performance-related tips.
For complete examples of complex squids take a look at the [Giant Squid Explorer](https://github.com/subsquid-labs/giant-squid-explorer) and [Thena Squid](https://github.com/subsquid-labs/thena-squid) repos.
## Next steps
Binary file modified docs/subgraphs-support-configuration.gif
18 changes: 5 additions & 13 deletions docs/subgraphs-support.md
@@ -35,9 +35,11 @@ The easiest way to run a subgraph with Subsquid Firehose is to use our [graph-node-

![Configuring the environment](subgraphs-support-configuration.gif)

You will be asked to select a network and provide a node RPC endpoint. You can pick any network from our [supported EVM networks](/subsquid-network/reference/evm-networks); networks that are not currently [supported by TheGraph](https://thegraph.com/docs/en/developing/supported-networks/) will be available there under their Subsquid names.
You will be asked to select a network. You can pick any network from our [supported EVM networks](/subsquid-network/reference/evm-networks); networks that are not currently [supported by TheGraph](https://thegraph.com/docs/en/developing/supported-networks/) will be available there under their Subsquid names.

The RPC endpoint will only be used to sync a few thousand blocks at the tip of the chain, so it does not have to be a paid one. However, `firehose-grpc` does not limit its request rate yet, so using a public RPC might result in a cooldown.
Optionally, you can also provide an RPC endpoint. If you do, it will only be used to sync a few thousand blocks at the tip of the chain, so it does not have to be a paid one. However, `firehose-grpc` does not limit its request rate yet, so using a public RPC might result in a cooldown.

If you do not provide an RPC endpoint, your subgraph deployments will be a few thousand blocks behind the chain head.

3. Download and deploy your subgraph of choice! For example, if you configured the environment to use Ethereum mainnet (`eth-mainnet`), you can deploy the well-known Gravatar subgraph:
```bash
@@ -59,16 +61,6 @@ The easiest way to run a subgraph with Subsquid Firehose is to use our [graph-node-
```
GraphiQL playground will be available at [http://127.0.0.1:8000/subgraphs/name/example/graphql](http://127.0.0.1:8000/subgraphs/name/example/graphql).
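The deployment commands for this step are collapsed in the diff above and are not reproduced here. Purely as an illustration of the general flow (not the contents of the hidden block), deploying a subgraph against a locally running graph-node with `graph-cli` typically looks roughly like this; the repository URL, the subgraph name `example` and the ports are assumptions based on common graph-node defaults.

```bash
# Rough illustration only; consult the actual instructions for this step
git clone https://github.com/graphprotocol/example-subgraph   # the Gravatar example subgraph
cd example-subgraph
npm install                                                    # pulls in @graphprotocol/graph-cli among other dependencies
npx graph codegen                                              # generate AssemblyScript types from the manifest and ABIs
npx graph create example --node http://127.0.0.1:8020          # 8020: graph-node admin port (assumed default)
npx graph deploy example --node http://127.0.0.1:8020 --ipfs http://127.0.0.1:5001
```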

## Disabling RPC ingestion

If you would like to use `firehose-grpc` without the optional real-time RPC data-source, run
```bash
npm run configure -- --disable-rpc-ingestion
```
You would still have to provide a placeholder RPC URL in the config, but it won't be used for data ingestion, so a public RPC will suffice.

Disabling RPC ingestion introduces a delay of several thousand blocks between the highest block available to the subgraph and the actual chain head, but completely eliminates the need to worry about RPC endpoints.

## Troubleshooting

Do not hesitate to let us know about any issues (whether listed here or not) at the [SquidDevs Telegram chat](https://t.me/HydraDevs).
@@ -77,4 +69,4 @@ Do not hesitate to let us know about any issues (whether listed here or not) at
```
thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', src/ds_rpc.rs:556:80
```
errors in the `graph-node-setup-firehose` container logs, that likely means that the chain RPC is not fully Ethereum-compatible and a workaround is not yet implemented in `firehose-grpc`. You can still sync your subgraph with [RPC ingestion disabled](#disabling-rpc-ingestion).
errors in the `graph-node-setup-firehose` container logs, that likely means that the chain RPC is not fully Ethereum-compatible and a workaround is not yet implemented in `firehose-grpc`. You can still sync your subgraph with RPC ingestion disabled.
