Architecture
Component map, deployment modes, and integration model
Cocoon is built on a component framework that composes services through typed providers, dependency ordering, and an event/service bus. Every major capability — storage, execution, consensus, rollup drivers, indexers — is a component implementing a common lifecycle.
Component Map
The dependency graph below shows how components relate. The L1/L2 Node is the platform foundation; all other components depend on one or more of its sub-components.
Deployment Modes
The node mode is selected with --rollup.mode. All modes use the same binary; the component framework assembles the right set of services at startup.
Standard Erigon node. No rollup extensions. Current behavior is preserved — this mode is the default and requires no new configuration.
Components active: Storage, P2P, Consensus, Downloader, Sync, Execution, TxPool, RPC, optionally Caplin (embedded CL) and Miner.
L2 node connecting to an external L1 via JSON-RPC. The derivation pipeline fetches L1 data over the network.
Components active: L2 Storage, L2 Sync (Derivation + Bridge stages), L2 Execution, TxPool, RPC with chain-ID routing. L1DataSource is RPC-backed.
L1 and L2 run in a single process. The L2 derivation pipeline reads L1 state directly from the in-process L1 database — zero serialization overhead.
Components active: All L1 components + L2 Storage, L2 Sync, L2 Execution. L1DataSource is a direct in-process adapter. Single RPC server with chain-ID routing.
Key advantage: The DirectL1DataSource eliminates RPC round-trips for derivation. L1 finality notifications trigger L2 derivation via the existing shards.Notifications mechanism.
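The L1DataSource abstraction can be sketched as a small Go interface with one implementation per deployment mode. The method set and field names below are illustrative assumptions; only the names L1DataSource and DirectL1DataSource come from the design above.

```go
package main

import "fmt"

// L1DataSource abstracts where the L2 derivation pipeline reads L1 data from.
// The method shown is a simplified stand-in for the real interface.
type L1DataSource interface {
	HeaderByNumber(n uint64) (string, error)
}

// RPCL1DataSource fetches L1 data over JSON-RPC (external-L1 mode).
type RPCL1DataSource struct{ endpoint string }

func (r *RPCL1DataSource) HeaderByNumber(n uint64) (string, error) {
	// In a real implementation this would issue an eth_getBlockByNumber call.
	return fmt.Sprintf("rpc:%s/header/%d", r.endpoint, n), nil
}

// DirectL1DataSource reads the in-process L1 database (combined mode),
// avoiding serialization and network round-trips entirely.
type DirectL1DataSource struct{ db map[uint64]string }

func (d *DirectL1DataSource) HeaderByNumber(n uint64) (string, error) {
	h, ok := d.db[n]
	if !ok {
		return "", fmt.Errorf("header %d not found", n)
	}
	return h, nil
}

func main() {
	// The derivation pipeline only sees the interface; the mode decides
	// which implementation is injected at startup.
	var src L1DataSource = &DirectL1DataSource{db: map[uint64]string{100: "0xabc"}}
	h, _ := src.HeaderByNumber(100)
	fmt.Println(h) // prints "0xabc"
}
```

Because the derivation pipeline depends only on the interface, switching between RPC-backed and direct access is a wiring decision, not a code change.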
L2 node with an embedded validator client for consensus rollups. The second Caplin instance drives L2 block production via the same ExecutionClient interface that L1 Caplin uses.
Components active: L2 Caplin (beacon node), L2 Execution, embedded validator client (block proposer, attester, keystore manager).
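The mode-based assembly described above can be sketched as a switch over the --rollup.mode value. The mode constants and the assemble function below are hypothetical names for illustration; the real flag values and builder API may differ.

```go
package main

import "fmt"

// Mode mirrors the --rollup.mode flag. The constant names here are
// illustrative, not the actual flag values.
type Mode int

const (
	ModeL1 Mode = iota // standard Erigon node, the default
	ModeExternalL1     // L2 node deriving from an external L1 over JSON-RPC
	ModeCombined       // L1 and L2 in a single process
	ModeConsensus      // L2 with embedded Caplin validator client
)

// assemble returns the component set for a mode. In the real framework this
// would register Component values with the node builder rather than names.
func assemble(m Mode) []string {
	base := []string{"Storage", "P2P", "Consensus", "Downloader", "Sync", "Execution", "TxPool", "RPC"}
	switch m {
	case ModeL1:
		return base
	case ModeExternalL1:
		return []string{"L2 Storage", "L2 Sync", "L2 Execution", "TxPool", "RPC"}
	case ModeCombined:
		return append(base, "L2 Storage", "L2 Sync", "L2 Execution")
	case ModeConsensus:
		return []string{"L2 Caplin", "L2 Execution", "Validator"}
	}
	return nil
}

func main() {
	fmt.Println(assemble(ModeCombined))
}
```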
Component Lifecycle
Every component implements a five-state machine. The component framework activates components in dependency order and deactivates them in reverse order on shutdown.
Configure → Initialize → Recover → Activate → Deactivate

Configure: apply typed configuration options from ethconfig.Config.
Initialize: resolve dependencies, open resources (databases, connections).
Recover: restore state after an unclean shutdown (replay WAL, verify consistency).
Activate: start background goroutines, subscribe to events, begin serving.
Deactivate: flush state, close connections, stop goroutines.
The Component[P] generic type carries a typed provider P that holds the component's live state. Other components declare dependencies on specific provider types — the framework resolves and injects them automatically.
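A minimal sketch of the lifecycle and the typed-provider pattern, assuming a method-per-state interface; the actual Component[P] signatures in the framework may differ:

```go
package main

import "fmt"

// Component[P] carries a typed provider P holding the component's live
// state. The five methods mirror the lifecycle states described above.
type Component[P any] interface {
	Configure(cfg map[string]string) error
	Initialize() error
	Recover() error
	Activate() error
	Deactivate() error
	Provider() P
}

// StorageProvider is a hypothetical provider exposing storage state that
// other components could declare a dependency on.
type StorageProvider struct{ DBPath string }

// Storage implements Component[StorageProvider].
type Storage struct{ p StorageProvider }

func (s *Storage) Configure(cfg map[string]string) error { s.p.DBPath = cfg["datadir"]; return nil }
func (s *Storage) Initialize() error                     { return nil } // open the database here
func (s *Storage) Recover() error                        { return nil } // replay WAL, verify consistency
func (s *Storage) Activate() error                       { return nil } // start background goroutines
func (s *Storage) Deactivate() error                     { return nil } // flush state and close
func (s *Storage) Provider() StorageProvider             { return s.p }

func main() {
	var c Component[StorageProvider] = &Storage{}
	c.Configure(map[string]string{"datadir": "/tmp/erigon"})
	c.Initialize()
	c.Recover()
	c.Activate()
	fmt.Println(c.Provider().DBPath) // prints "/tmp/erigon"
}
```

Because the provider type appears in the interface, a dependent component's requirement (say, on StorageProvider) is checked at compile time rather than discovered at runtime.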
Integration Layers
Components communicate through three mechanisms depending on the coupling needed:
Direct adapter: in-process, zero serialization. Examples: L2 reads L1 state via DirectL1DataSource; Caplin drives Execution via ExecutionClient.
gRPC: cross-process or remote services. Examples: Sentry, TxPool, Archive server in detached mode.
Event bus: async, decoupled notifications. Examples: L1 finality → L2 derivation trigger; new block → RPC filter update.
The direct adapter pattern is Erigon's existing approach for embedded services (see node/direct/execution_client.go). The same pattern is used for DirectL1DataSource in combined mode.
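The event-bus mechanism can be sketched as a small fan-out over channels. The Bus and FinalityEvent types below are illustrative stand-ins, not the actual shards.Notifications API:

```go
package main

import (
	"fmt"
	"sync"
)

// FinalityEvent is a stand-in for the payload delivered on L1 finality.
type FinalityEvent struct{ L1Block uint64 }

// Bus is a minimal publish/subscribe sketch: each subscriber gets its own
// buffered channel and the publisher fans events out to all of them.
type Bus struct {
	mu   sync.Mutex
	subs []chan FinalityEvent
}

func (b *Bus) Subscribe() <-chan FinalityEvent {
	b.mu.Lock()
	defer b.mu.Unlock()
	ch := make(chan FinalityEvent, 1)
	b.subs = append(b.subs, ch)
	return ch
}

func (b *Bus) Publish(ev FinalityEvent) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, ch := range b.subs {
		ch <- ev
	}
}

func main() {
	bus := &Bus{}
	derivation := bus.Subscribe() // the L2 derivation stage listens here
	bus.Publish(FinalityEvent{L1Block: 19_000_000})
	ev := <-derivation
	fmt.Printf("derive L2 up to L1 block %d\n", ev.L1Block)
}
```

The publisher never learns who is listening, which is the decoupling property that lets L1 finality trigger L2 derivation without the L1 components depending on any rollup code.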
Build-Time Composition
Components are registered at compile time via init() and a components.cfg file. The node builder queries the registry at startup; if a required component was not compiled in, it fails fast with a clear error.
This means custom builds can include only the components they need — a minimal L1-only binary omits all rollup drivers, Polygon support, and archive extensions without code changes.
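A minimal sketch of the registration and fail-fast lookup, assuming a string-keyed constructor registry; the names register and build are hypothetical:

```go
package main

import "fmt"

// registry maps component names to constructors. Each component's package
// adds itself from its own init(), so linking the package in is what makes
// the component available.
var registry = map[string]func() any{}

func register(name string, ctor func() any) { registry[name] = ctor }

// A component package would do this in its own init().
func init() { register("txpool", func() any { return "txpool component" }) }

// build resolves the required components, failing fast with a clear error
// when one was not compiled into the binary.
func build(required ...string) ([]any, error) {
	var out []any
	for _, name := range required {
		ctor, ok := registry[name]
		if !ok {
			return nil, fmt.Errorf("component %q required but not compiled into this binary", name)
		}
		out = append(out, ctor())
	}
	return out, nil
}

func main() {
	if _, err := build("txpool", "rollup-driver"); err != nil {
		fmt.Println(err) // rollup-driver was never registered
	}
}
```

Omitting a package from the build (or from components.cfg) removes its init() call, so the lookup fails at startup instead of producing a half-wired node.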
See Component Framework for the full integration plan including the components.cfg code generation approach and the incremental extraction wave plan.