L1/L2 Node
Run L1 and L2 in a single Erigon binary via a pluggable RollupDriver interface
Status: Design phase. Pending approval. Technical design: L1/L2 Node — Technical Design.
The L1/L2 Node extends Erigon to operate as a combined L1 and L2 node within a single binary. Today, running an L2 rollup alongside L1 requires deploying separate processes — an L1 execution client, L2 execution client, derivation service, and bridge — each with its own configuration, monitoring, and failure modes.
This design generalizes the pattern already proven by two embedded systems in Erigon: Caplin (an entire beacon chain running in-process alongside execution) and Polygon/Bor (a complete L2-like chain with bridge service and custom consensus as embedded services). The RollupDriver interface unifies both patterns and extends them to all rollup types.
Key Advantages
Zero-copy L1 data access: In combined mode, L2 derivation reads L1 state directly from the in-process L1 database using the DirectL1DataSource adapter — no RPC round-trips, no serialization overhead.
Single binary: One process to deploy, monitor, and upgrade. Shared EVM implementation, one torrent downloader, one RPC server with chain-ID routing.
Pluggable rollup types: The RollupDriver interface supports five rollup architectures — based, optimistic, consensus, native, and ZK — each as an independently deployable Driver implementation.
No breaking changes: --rollup.mode=l1 (the default) is identical to current Erigon behavior. Existing deployments, chain configurations, RPC APIs, and P2P protocols are untouched.
Supported Rollup Types
| Type | Sequencing | Security model | Notes |
|---|---|---|---|
| Based | L1 validators sequence L2 transactions | Inherits L1 security | Simplest model; no external sequencer; natural fit for combined mode |
| Optimistic | External sequencer | Fraud proofs (challenge period) | Largest existing ecosystem (OP Stack, Arbitrum) |
| Consensus | Own PoS validator set via embedded Caplin | Casper FFG finality | Sovereign finality; configurable slot times (down to ~2s) |
| Native | Varies | L1 EVM re-executes via EXECUTE precompile | Shares L1 execution entirely; no separate pipeline |
| ZK | External sequencer | Validity proofs | Strongest security guarantees |
Deployment Modes
Select the mode with --rollup.mode:
- l1 (default): Standard Erigon, unchanged. No rollup extensions active. Existing deployments continue working without any configuration changes.
- Standalone L2: An L2 node connecting to an external L1 via --rollup.l1.rpc. The derivation pipeline fetches L1 block data over RPC. Higher latency than combined mode, but separates L1 and L2 infrastructure.
- Combined: L1 and L2 run in one process. L2 derivation reads L1 state directly from the in-process L1 database — no RPC round-trips. A single binary to deploy, monitor, and upgrade. Key advantage: DirectL1DataSource reads L1 state directly from kv.TemporalRwDB in-process, following the same direct-adapter pattern as node/direct/execution_client.go, with zero serialization overhead.
- Embedded validator: An L2 node with an embedded validator client, for consensus rollups running their own beacon chain. The validator client signs blocks and attestations via a direct adapter to the L2 Caplin instance.
Why Erigon Is Uniquely Positioned
Erigon already contains two proven examples of the exact pattern being generalized:
- Caplin — an entire consensus-layer protocol running in-process, communicating with the execution layer via the ExecutionClient direct adapter. The beacon node drives NewPayload and ForkchoiceUpdated with zero serialization overhead.
- Polygon/Bor — a complete L2-like chain with polygon/sync.Service, bridge.Service, and heimdall.Service running as embedded services alongside the execution layer. This is the exact pattern the RollupDriver interface generalizes.
The three-way dispatch at backend.go:1553 already handles fundamentally different sync architectures (PoS, Bor, PoW). Generalizing this dispatch to use the Driver interface requires adding conditional component registration — no architectural change.
Additional platform advantages: pluggable consensus via rules.Engine, staged sync pipeline with composable stages, and the gRPC direct/remote adapter pattern for all embedded services.
Data Model
L1 and L2 use separate databases, not namespaced tables within one database. The temporal database uses block numbers as its time axis; L1 and L2 have independent block numbers, state histories, and snapshot schedules.
What is shared across L1 and L2 in combined mode: the downloader service (one torrent client), P2P networking infrastructure, the RPC server (routes by chain ID), and the EVM implementation.
Consensus Rollups: Embedded Beacon Chain
Consensus rollups run their own PoS consensus layer using a second Caplin instance as the Driver. This gives the L2 sovereign finality without designing a new consensus protocol.
The second Caplin instance uses the same YAML configuration format proven by Gnosis (5s slots) and other devnets, with parameters tuned for L2:
| Parameter | Ethereum mainnet | L2 setting | Purpose |
|---|---|---|---|
| SECONDS_PER_SLOT | 12s | 2–6s | Faster L2 block times |
| SLOTS_PER_EPOCH | 32 | 8–16 | Faster epoch finality |
| MIN_GENESIS_ACTIVE_VALIDATOR_COUNT | 16,384 | 16–256 | Smaller validator set |
With 2s slots and 8 slots/epoch, Casper FFG reaches finality in ~32 seconds. The L2 posts finalized state roots and BLS aggregate signatures to an L1 settlement contract, which verifies them as a light client of the L2 beacon chain.
See Technical Design for the full RollupDriver interface, the L2 staged sync pipeline stages, chain configuration extension, and the implementation phase plan.