What is the Chain Indexer Framework?

The Chain Indexer Framework is a blockchain data indexing framework for building event-driven data pipelines on EVM blockchains. It is built on Kafka and provides the core logic and helper functions needed to collect raw blockchain data, transform it, and make it queryable by a dApp backend. The framework handles three stages:
  1. Fetch raw blockchain data via RPC and store it in a Kafka stream.
  2. Transform the raw data using the framework’s helper functions, based on your dApp’s requirements.
  3. Store the transformed data in a database (such as Postgres or MongoDB) that your dApp’s API layer can query.
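To make stage 2 concrete, here is a rough sketch of what a dApp-specific transform might look like: a pure function that maps a raw ERC-20 Transfer log (as a producer might emit it into Kafka) into a structured record for the consumer to persist. The types and field names here are illustrative assumptions, not the framework's own API.

```typescript
// Hypothetical shape of a raw log as fetched over RPC and streamed to Kafka.
interface RawLog {
  blockNumber: number;
  transactionHash: string;
  topics: string[]; // topics[0] = event signature, [1] = from, [2] = to
  data: string;     // hex-encoded 32-byte transfer amount
}

// Hypothetical structured record the consumer would write to the database.
interface TransferRecord {
  block: number;
  txHash: string;
  from: string;
  to: string;
  value: bigint;
}

// An indexed address topic is a 32-byte word; the address is the last 20 bytes.
function topicToAddress(topic: string): string {
  return "0x" + topic.slice(-40);
}

// The dApp-specific transform: raw log in, queryable record out.
function transformTransfer(log: RawLog): TransferRecord {
  return {
    block: log.blockNumber,
    txHash: log.transactionHash,
    from: topicToAddress(log.topics[1]),
    to: topicToAddress(log.topics[2]),
    value: BigInt(log.data),
  };
}
```

In the real pipeline this function would run inside a transformer module consuming from the raw-data topic; the point is that the transform itself is ordinary application logic.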

Why use it?

Without an indexer, querying blockchain data requires iterating through every block and transaction. This is slow, expensive in RPC calls, and becomes impractical as chain history grows. The Chain Indexer Framework solves this by indexing historical blocks once and maintaining a live stream of new events. Once indexed, the data is immediately accessible to any number of application layers without repeated RPC queries.

Key properties

  • Open source: Fork, modify, and host on your own infrastructure. No third-party rate limits.
  • TypeScript: Easy to install via NPM and integrate into existing Node.js projects.
  • Modular architecture: Producers, transformers, and consumers are separate modules. Debug and extend each independently.
  • One-time historical indexing: Index historical blocks once; the data remains available in the Kafka data warehouse.
  • Event-triggered actions: Set triggers on specific blockchain events to drive notifications or UI updates.
  • Scalable: Handles increased data volumes as your dApp grows.

Use cases

  • Wallet services: transaction history, balance history, and real-time updates.
  • dApp backends: real-time access to contract events and token transfers.
  • Analytics: monitoring smart contract interactions and token transfer patterns.
  • Cross-chain services: indexing data from multiple EVM chains for cross-chain features.
  • Oracle support: efficient access to specific onchain data points.
  • NFT marketplaces: tracking ownership changes, price history, and token attributes.

How it works

The framework uses a producer-transformer-consumer pipeline:
  • Producers connect to the blockchain RPC and push raw block and event data into a Kafka topic.
  • Transformers consume from Kafka, apply your dApp-specific logic, and output structured data.
  • Consumers read the structured data and write it to a queryable database.
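The three roles above can be sketched, very roughly, in code. The real framework moves data through Kafka topics; in this toy version a plain array stands in for the topic and a Map stands in for the database, so only the shape of the flow is shown. All names here are illustrative, not the framework's API.

```typescript
// Hypothetical raw and structured record shapes.
type RawBlock = { number: number; logs: string[] };
type IndexedBlock = { number: number; logCount: number };

const topic: RawBlock[] = [];               // stand-in for a Kafka topic
const db = new Map<number, IndexedBlock>(); // stand-in for Postgres/MongoDB

// Producer: pushes raw chain data into the stream.
function produce(block: RawBlock): void {
  topic.push(block);
}

// Transformer + consumer: drain the stream, reshape each record, persist it.
function drain(): void {
  while (topic.length > 0) {
    const raw = topic.shift()!;
    const structured: IndexedBlock = {
      number: raw.number,
      logCount: raw.logs.length,
    };
    db.set(structured.number, structured); // consumer write
  }
}

produce({ number: 1, logs: ["Transfer", "Approval"] });
produce({ number: 2, logs: [] });
drain();
// db now holds one queryable row per block
```

Because each role only touches the stream on one side, producers, transformers, and consumers can be scaled and debugged independently, which is the point of the modular design.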
Your dApp’s API layer then queries the database rather than the blockchain directly. For installation and usage, see Usage.