System architecture
The Bridge Hub is a microservices-based system that monitors, indexes, exposes, and automatically claims cross-chain bridge transactions. It consists of four packages that work together around a shared MongoDB database.
┌──────────────────────────────────────────────────────────────────┐
│ External Systems │
├──────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Aggkit │ │ Blockchain │ │ MongoDB │ │
│ │ Bridge │ │ │ │ Database │ │
│ │ service │ │ │ │ │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
└─────────┼─────────────────┼─────────────────┼────────────────────┘
│ │ │
▼ ▼ ▼
┌──────────────────────────────────────────────────────────────────┐
│ Bridge Hub Packages │
│ │
│ ┌───────────────┐ │
│ │ COMMONS │ │
│ │ (Shared Types)│ │
│ └───────┬───────┘ │
│ │ │
│ ┌─────────────┼─────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────┐ ┌─────────┐ ┌───────────┐ │
│ │ CONSUMER │ │ API │ │AUTO-CLAIM │ │
│ │ (Indexer)│ │(Service)│ │(Claimer) │ │
│ └─────┬────┘ └────┬────┘ └─────┬─────┘ │
│ │ │ │ │
│ │ Writes │ Reads │ HTTP │
│ ▼ ▼ ▼ │
│ ┌──────────────────────────┐ ┌─────────────┐ │
│ │ MongoDB Database │ │ Blockchain │ │
│ └──────────────────────────┘ └─────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────┘
The four packages serve distinct roles:
| Package | Layer | Responsibility |
|---|---|---|
| Commons | Foundation | Shared TypeScript types, interfaces, and schemas used by all other packages |
| Consumer | Data ingestion | Polls AggKit Bridge Service APIs to index bridge transactions into MongoDB |
| API | Service | Exposes indexed transaction data and proxies claim proofs over a REST interface |
| Auto-Claim | Automation | Polls the API for claimable transactions and submits claim transactions on-chain |
Commons acts as the foundation layer. It contains pure TypeScript types with no runtime code, providing type-safe contracts that the other three packages depend on. Consumer writes data into MongoDB. API reads from the same database and serves it to clients. Auto-Claim consumes the API over HTTP and interacts with the blockchain to finalize claims.
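As a hypothetical sketch of the kind of contract Commons provides (the package's actual exported names may differ), the shared types might look like:

```typescript
// Hypothetical sketch of the contracts Commons could export;
// the package's actual exported names may differ.
export type TransactionStatus =
  | "BRIDGED"
  | "LEAF_INCLUDED"
  | "READY_TO_CLAIM"
  | "CLAIMED";

export interface BridgeTransaction {
  hubUID: string;            // business key, unique per transaction
  sourceNetwork: number;
  destinationNetwork: number;
  amount: string;            // BigInt serialized as string
  status: TransactionStatus;
}

// A type-safe helper the other three packages could share.
export function isClaimable(tx: BridgeTransaction): boolean {
  return tx.status === "READY_TO_CLAIM";
}
```

Because Commons ships types rather than runtime logic, Consumer, API, and Auto-Claim all agree on field names and status values at compile time.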
Consumer internals
The Consumer package runs as a single Node.js process per network. It contains two components that together run four cron jobs.
BridgeAPIConsumer
This component runs three cron jobs that fetch data from the AggKit Bridge Service:
- bridgesCron: Polls AggKit for new bridge deposit transactions. Each new deposit is inserted into the transactions collection with a status of BRIDGED. The cron tracks its progress by updating lastIndexedBridgeDepositCount in the metadata collection.
- claimsCron: Polls AggKit for claim events that have occurred on-chain. When a claim is detected, the corresponding transaction is updated to CLAIMED and the claimTransactionHash and timestamp are recorded. Progress is tracked via lastIndexedClaimBlockNumber.
- mappingsCron: Polls AggKit for token mapping events. New or updated mappings are upserted into the mappings collection. Progress is tracked via lastIndexedMappingBlockNumber.
ClaimReadinessConsumer
This component runs a single cron job:
- readyToClaimCron: Checks the L1 info tree data from AggKit and compares it against transactions that are currently in BRIDGED status. When a transaction becomes claimable, the cron updates it to READY_TO_CLAIM and sets the leafIndexForProof field needed for Merkle proof generation.
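The readiness check can be sketched as follows; the response shapes and the leafIndexForProof derivation here are illustrative assumptions, not the package's actual logic:

```typescript
// Hedged sketch of the readiness check; field shapes and the
// leafIndexForProof derivation are assumptions for illustration.
interface IndexedTx {
  hubUID: string;
  depositCount: number;
  status: "BRIDGED" | "READY_TO_CLAIM";
  leafIndexForProof?: number;
}

// A BRIDGED transaction becomes claimable once the L1 info tree has
// grown past its deposit index.
function markReadyToClaim(txs: IndexedTx[], l1InfoTreeLeafCount: number): IndexedTx[] {
  return txs
    .filter((tx) => tx.status === "BRIDGED" && tx.depositCount < l1InfoTreeLeafCount)
    .map((tx) => ({
      ...tx,
      status: "READY_TO_CLAIM" as const,
      leafIndexForProof: tx.depositCount, // assumed mapping, for illustration
    }));
}
```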
Consumer data flow
Aggkit Bridge Service (per network)
│
├── /bridges API ────▶ bridgesCron ────▶ transactions collection (BRIDGED)
│
├── /claims API ─────▶ claimsCron ─────▶ transactions collection (CLAIMED)
│
├── /mappings API ───▶ mappingsCron ───▶ mappings collection
│
└── /l1-info-tree ───▶ readyToClaimCron ▶ transactions collection (READY_TO_CLAIM)
All four cron jobs run at configured intervals within the same process. The metadata collection tracks each cron’s indexing checkpoint so the consumer can resume from the correct position after a restart.
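The checkpoint pattern can be sketched as a simplified polling loop; the storage and fetch interfaces here are illustrative, not the Consumer package's actual API:

```typescript
// Simplified checkpointed polling loop; the storage and fetch interfaces
// are illustrative, not the Consumer package's actual API.
interface MetadataStore {
  getCheckpoint(key: string): Promise<number>;
  setCheckpoint(key: string, value: number): Promise<void>;
}

async function runBridgesCronOnce(
  store: MetadataStore,
  fetchDepositsAfter: (depositCount: number) => Promise<{ depositCount: number }[]>,
  upsertTransaction: (deposit: { depositCount: number }) => Promise<void>,
): Promise<void> {
  // Resume from the position recorded in the metadata collection.
  const last = await store.getCheckpoint("lastIndexedBridgeDepositCount");
  for (const deposit of await fetchDepositsAfter(last)) {
    await upsertTransaction(deposit); // idempotent write
    // Advance the checkpoint only after the write succeeds, so a crash
    // re-processes at most the in-flight item instead of everything.
    await store.setCheckpoint("lastIndexedBridgeDepositCount", deposit.depositCount);
  }
}
```

Because writes are idempotent upserts, re-running a batch after a crash is safe even if the checkpoint lags slightly behind the last successful write.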
Multi-network deployment
In production, the Bridge Hub runs one consumer instance per source network being indexed. Each network connected to the AggLayer has a unique network ID (for example, 0 for Ethereum, 1 for Polygon zkEVM). A single shared API service reads from the database and serves all networks, while one auto-claim instance runs per destination network.
The consumer does not directly monitor the blockchain. It polls the AggKit Bridge Service APIs to fetch already-indexed data. Each chain’s AggKit Bridge Service is maintained by the chain operators and is external to the Bridge Hub deployment.
┌──────────────────────────────────────────────────────────────────────────────┐
│ AGGLAYER HUB API CLUSTER │
├──────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────────────────────────┐ │
│ │ Aggkit │ │ netId_0 Consumer │ │
│ │ Bridge │──────▶ │ ┌───────────────────────────┐ │ │
│ │ Service │ │ │ bridgesCron │ │ │
│ │ (net 0) │ │ │ claimsCron │ │──┐ │
│ └─────────────┘ │ │ readyToClaimCron │ │ │ │
│ │ │ mappingsCron │ │ │ │
│ │ └───────────────────────────┘ │ │ │
│ └─────────────────────────────────┘ │ │
│ │ │
│ ┌─────────────┐ ┌─────────────────────────────────┐ │ │
│ │ Aggkit │ │ netId_1 Consumer │ │ │
│ │ Bridge │──────▶ │ ┌───────────────────────────┐ │ │ │
│ │ Service │ │ │ bridgesCron │ │ │ │
│ │ (net 1) │ │ │ claimsCron │ │──┤ │
│ └─────────────┘ │ │ readyToClaimCron │ │ │ │
│ │ │ mappingsCron │ │ │ │
│ │ └───────────────────────────┘ │ │ │
│ └─────────────────────────────────┘ │ │
│ │ │
│ ┌─────────────┐ ┌─────────────────────────────────┐ │ │
│ │ Aggkit │ │ netId_n Consumer │ │ │
│ │ Bridge │──────▶ │ ┌───────────────────────────┐ │ │ │
│ │ Service │ │ │ bridgesCron │ │ │ │
│ │ (net n) │ │ │ claimsCron │ │──┤ │
│ └─────────────┘ │ │ readyToClaimCron │ │ │ │
│ │ │ mappingsCron │ │ │ │
│ │ └───────────────────────────┘ │ │ │
│ └─────────────────────────────────┘ │ │
│ │ │
│ ▼ │
│ ┌────────────────────────────────────────────┐ │
│ │ MongoDB Database │ │
│ │ ┌──────────────────────────────────────┐ │ │
│ │ │ Collections (per environment): │ │ │
│ │ │ • bridge_hub_api_transactions │ │ │
│ │ │ • bridge_hub_api_mappings │ │ │
│ │ │ • bridge_hub_api_metadata │ │ │
│ │ └──────────────────────────────────────┘ │ │
│ └────────────┬───────────────────────────────┘ │
│ │ │
│ ▼ Reads from │
│ ┌────────────────────────────────────────────┐ │
│ │ API Service │ │
│ │ ┌──────────────────────────────────────┐ │ │
│ │ │ /transactions │ │ │
│ │ │ /token-mappings │ │ │
│ │ │ /token-metadata │ │ │
│ │ │ /claim-proof (proxies to Aggkit) │ │───┐ │
│ │ └──────────────────────────────────────┘ │ │ │
│ └────────────────────────────────────────────┘ │ │
│ │ │
│ HTTP Calls │ │
│ ┌─────────────────────────────────────────────┐ │ │
│ │ Auto-Claim Service (per dest network) │◀─┘ │
│ │ ┌──────────────────────────────────────┐ │ │
│ │ │ Polls /transactions │ │ │
│ │ │ Fetches /claim-proof │ │ │
│ │ │ Submits claims to blockchain │ │ │
│ │ └──────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────────────────┘
Key deployment points:
- Consumer instances: One per source network being indexed (netId_0, netId_1, and so on).
- Shared database: All consumers write to the same MongoDB instance.
- Single API: One API service reads from the database and serves all networks.
- Auto-Claim deployment: One instance per destination network you want to auto-claim for.
Database design
Bridge Hub uses a single shared MongoDB instance. Collections are organized by environment using the naming convention bridge_hub_api_[type]_[environment]: the environment suffix is omitted for mainnet, and is _testnet for testnet and _devnet for development.
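The naming convention can be expressed as a small helper (a sketch; the codebase may implement this differently):

```typescript
// Sketch of the collection naming convention described above.
type CollectionType = "transactions" | "mappings" | "metadata";
type Environment = "mainnet" | "testnet" | "devnet";

function collectionName(type: CollectionType, env: Environment): string {
  const suffix = env === "mainnet" ? "" : `_${env}`; // mainnet has no suffix
  return `bridge_hub_api_${type}${suffix}`;
}
```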
Collections
The database contains three collection types:
- transactions: Stores all bridge transactions across all networks. Modified by bridgesCron (upserts), claimsCron (status updates), and readyToClaimCron (status updates). Transactions move through the statuses BRIDGED, LEAF_INCLUDED, READY_TO_CLAIM, and CLAIMED.
- mappings: Stores token address mappings between AggLayer networks. Modified by mappingsCron. Each record maps an origin token address on one network to its wrapped token address on another.
- metadata: Tracks indexing progress per network. Each cron job updates its own checkpoint field in this collection. One document exists per network ID being indexed.
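An illustrative shape for a metadata document follows. The three checkpoint field names come from the cron descriptions above; the networkId field and the zero initial values are assumptions:

```typescript
// Illustrative shape of a metadata checkpoint document; the networkId
// field and initial values are assumptions.
interface NetworkMetadata {
  networkId: number;                     // one document per indexed network
  lastIndexedBridgeDepositCount: number; // bridgesCron resume point
  lastIndexedClaimBlockNumber: number;   // claimsCron resume point
  lastIndexedMappingBlockNumber: number; // mappingsCron resume point
}

function initialMetadata(networkId: number): NetworkMetadata {
  // Before anything is indexed, every checkpoint starts at zero.
  return {
    networkId,
    lastIndexedBridgeDepositCount: 0,
    lastIndexedClaimBlockNumber: 0,
    lastIndexedMappingBlockNumber: 0,
  };
}
```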
Transaction document schema
{
  _id: ObjectId,              // MongoDB primary key
  hubUID: String (unique),    // Business key

  // Network Information
  sourceNetwork: Number,      // Source chain ID
  destinationNetwork: Number, // Destination chain ID

  // Transaction Details
  transactionHash: String,    // Source transaction hash
  blockNumber: Number,        // Block number on source
  timestamp: Number,          // Unix timestamp
  bridgeHash: String,

  // Bridge Details
  leafType: String,           // "ASSET" or "MESSAGE"
  originTokenNetwork: Number,
  originTokenAddress: String,
  receiverAddress: String,
  fromAddress: String,
  amount: String,             // BigInt as string
  depositCount: Number,       // Deposit counter

  // Claiming Details
  leafIndexForProof: Number,  // Index for Merkle proof
  globalIndex: String,        // Global index as string

  // Status Tracking
  status: String,             // BRIDGED, LEAF_INCLUDED, READY_TO_CLAIM, CLAIMED
  lastUpdatedAt: Number,      // Last update timestamp

  // Claim Information (populated after claim)
  claimTransactionHash: String,
  claimBlockNumber: Number,
  claimTimestamp: Number
}
Indexes
{ hubUID: 1 }                               // Unique index
{ status: 1 }                               // Query by status
{ sourceNetwork: 1, destinationNetwork: 1 } // Filter by networks
{ depositCount: 1 }                         // Order by deposit count
{ status: 1, destinationNetwork: 1 }        // Combined index for common queries
The metadata collection is critical for operational resilience. When a consumer instance restarts after a crash, planned maintenance, or redeployment, it reads its metadata document to find the last indexed position for each cron job. Without this checkpoint data, the consumer would need to re-index from the beginning, duplicating hours or days of work.
Each metadata document tracks three resume points:
- lastIndexedBridgeDepositCount: where bridgesCron should resume
- lastIndexedClaimBlockNumber: where claimsCron should resume
- lastIndexedMappingBlockNumber: where mappingsCron should resume
On startup, each cron reads its respective checkpoint and picks up exactly where it left off.
Data synchronization
The system maintains eventual consistency through three distinct data paths:
- Write path (Consumer to MongoDB): Consumers poll AggKit APIs and write new or updated records into MongoDB. All writes use upsert operations, making them idempotent. Duplicate events from AggKit are handled gracefully.
- Read path (API from MongoDB): The API service reads directly from MongoDB and serves the data over REST endpoints. Because the API is stateless and read-only, it can be scaled horizontally behind a load balancer.
- Claim path (Auto-Claim to Blockchain): The Auto-Claim service polls the API for transactions in READY_TO_CLAIM status, fetches Merkle proofs through the API’s /claim-proof endpoint, and submits claim transactions on-chain. Claims are processed sequentially to avoid nonce conflicts.
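An idempotent upsert on the write path can be sketched as follows; the document shapes are illustrative, but the structure shows why re-processing the same AggKit event is safe:

```typescript
// Sketch of an idempotent upsert; document shapes are illustrative.
// Re-processing the same event yields the same stored document.
function buildDepositUpsert(event: { hubUID: string; status: string; lastUpdatedAt: number }) {
  return {
    filter: { hubUID: event.hubUID },           // match on the business key
    update: {
      $setOnInsert: { hubUID: event.hubUID },   // written only on first insert
      $set: { status: event.status, lastUpdatedAt: event.lastUpdatedAt },
    },
    options: { upsert: true },                  // insert if absent, update if present
  };
}
```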
Consistency guarantees
- Transactions are immutable once created; only status fields are updated.
- Status updates are atomic at the document level.
- Duplicate events are handled via upsert, so re-processing the same data is safe.
- No distributed transactions are needed because all state lives in a single MongoDB instance.
- There is a small window of delay between an on-chain event occurring and the consumer indexing it. During this window, the API may serve slightly stale data. This is acceptable for the bridge use case, where transactions take time to become claimable regardless.
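The atomic status-update guarantee can be sketched as a guarded transition; the shapes are illustrative, but including the expected current status in the filter shows how a document-level atomic update avoids regressing a transaction that has already moved on:

```typescript
// Sketch of a guarded, document-level-atomic status transition; shapes
// are illustrative. The filter includes the expected current status, so
// a stale update simply matches nothing instead of overwriting.
function buildStatusTransition(hubUID: string, from: string, to: string, now: number) {
  return {
    filter: { hubUID, status: from },               // match only the expected state
    update: { $set: { status: to, lastUpdatedAt: now } },
  };
}
```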