Architecture

Component map, deployment modes, and integration model

Cocoon is built on a component framework that composes services through typed providers, dependency ordering, and an event/service bus. Every major capability — storage, execution, consensus, rollup drivers, indexers — is a component implementing a common lifecycle.

Component Map

The dependency graph below shows how components relate. The L1/L2 Node is the platform foundation; all other components depend on one or more of its sub-components.


Deployment Modes

The node mode is selected with --rollup.mode. All modes use the same binary; the component framework assembles the right set of services at startup.

Standard Erigon node with no rollup extensions. Current behavior is preserved: this mode is the default and requires no new configuration.

Components active: Storage, P2P, Consensus, Downloader, Sync, Execution, TxPool, RPC, optionally Caplin (embedded CL) and Miner.

Component Lifecycle

Every component implements a five-phase state machine. The component framework activates components in dependency order and deactivates them in reverse order on shutdown.

Configure → Initialize → Recover → Activate → Deactivate
| Phase | What happens |
| --- | --- |
| Configure | Apply typed configuration options from ethconfig.Config |
| Initialize | Resolve dependencies; open resources (databases, connections) |
| Recover | Restore state after an unclean shutdown (replay the WAL, verify consistency) |
| Activate | Start background goroutines, subscribe to events, begin serving |
| Deactivate | Flush state, close connections, stop goroutines |

The Component[P] generic type carries a typed provider P that holds the component's live state. Other components declare dependencies on specific provider types — the framework resolves and injects them automatically.
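A minimal sketch of what a typed-provider component could look like. The struct shape, field names, and lifecycle hooks here are illustrative assumptions for this page, not Erigon's actual framework API:

```go
package main

import "fmt"

// StorageProvider holds a component's live state; other components
// depend on it by type and have it injected by the framework.
type StorageProvider struct {
	DBPath string
}

// Component pairs a lifecycle with a typed provider P.
// Hypothetical shape — the real framework's interface may differ.
type Component[P any] struct {
	Name     string
	provider P
	// Lifecycle hooks, called in order by the framework.
	Configure  func(*P) error
	Initialize func(*P) error
	Recover    func(*P) error
	Activate   func(*P) error
	Deactivate func(*P) error
}

// Provider exposes the live state to dependent components.
func (c *Component[P]) Provider() *P { return &c.provider }

func main() {
	storage := Component[StorageProvider]{
		Name:      "storage",
		Configure: func(p *StorageProvider) error { p.DBPath = "/data/chaindata"; return nil },
		Activate:  func(p *StorageProvider) error { fmt.Println("storage up at", p.DBPath); return nil },
	}
	// The framework would walk the dependency graph and run each phase in order.
	for _, phase := range []func(*StorageProvider) error{storage.Configure, storage.Activate} {
		if phase != nil {
			if err := phase(storage.Provider()); err != nil {
				panic(err)
			}
		}
	}
}
```

The key property is that dependents declare a dependency on the provider type (here StorageProvider), not on the component itself, which is what lets the framework resolve and inject state automatically.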

Integration Layers

Components communicate through three mechanisms depending on the coupling needed:

| Mechanism | When to use | Example |
| --- | --- | --- |
| Direct adapter | In-process calls, zero serialization | L2 reads L1 state via DirectL1DataSource; Caplin drives Execution via ExecutionClient |
| gRPC | Cross-process or remote services | Sentry, TxPool, and the Archive server in detached mode |
| Event bus | Async, decoupled notifications | L1 finality → L2 derivation trigger; new block → RPC filter update |
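Of the three mechanisms, the event bus is the loosest coupling. A minimal channel-based sketch of that pub/sub shape (the Bus type and its methods are assumptions for illustration, not Erigon's actual bus API):

```go
package main

import "fmt"

// Bus is a tiny topic-based event bus: components subscribe to topics
// and receive async notifications over channels.
// Illustrative sketch only — Erigon's real bus may differ.
type Bus struct {
	subs map[string][]chan string
}

func NewBus() *Bus { return &Bus{subs: map[string][]chan string{}} }

// Subscribe returns a buffered channel that will receive events for topic.
func (b *Bus) Subscribe(topic string) <-chan string {
	ch := make(chan string, 1)
	b.subs[topic] = append(b.subs[topic], ch)
	return ch
}

// Publish fans an event out to every subscriber of topic.
func (b *Bus) Publish(topic, msg string) {
	for _, ch := range b.subs[topic] {
		ch <- msg
	}
}

func main() {
	bus := NewBus()
	// The L2 derivation pipeline listens for L1 finality events.
	finality := bus.Subscribe("l1.finalized")
	bus.Publish("l1.finalized", "block finalized")
	fmt.Println("derivation triggered by:", <-finality)
}
```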

The direct adapter pattern is Erigon's existing approach for embedded services (see node/direct/execution_client.go). The same pattern is used for DirectL1DataSource in combined mode.
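The essence of the direct adapter pattern is a consumer-defined interface satisfied in-process by another component, so calls are plain Go method calls with no serialization. A sketch under that assumption (the interface and struct names here are illustrative, not the actual DirectL1DataSource implementation):

```go
package main

import "fmt"

// L1DataSource is what the L2 driver needs from L1 — defined by the consumer.
type L1DataSource interface {
	HeadBlockNumber() uint64
}

// directL1DataSource satisfies the interface by reading the in-process
// L1 component directly: no network hop, no serialization.
// (Illustrative stand-in for the real adapter in the Erigon tree.)
type directL1DataSource struct {
	head uint64 // stands in for live L1 chain state
}

func (d *directL1DataSource) HeadBlockNumber() uint64 { return d.head }

func main() {
	var src L1DataSource = &directL1DataSource{head: 19_000_000}
	fmt.Println("L2 derivation starts from L1 block", src.HeadBlockNumber())
}
```

Because the L2 side depends only on the interface, the same consumer code can be wired to a gRPC-backed implementation in detached deployments.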

Build-Time Composition

Components are registered at compile time via init() and a components.cfg file. The node builder queries the registry at startup; if a required component was not compiled in, it fails fast with a clear error.

This means custom builds can include only the components they need — a minimal L1-only binary omits all rollup drivers, Polygon support, and archive extensions without code changes.
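The registration-at-init pattern described above can be sketched as follows. The registry shape, register/require helpers, and component names are assumptions for illustration — the actual components.cfg-generated code will differ:

```go
package main

import (
	"fmt"
	"sort"
)

// registry maps component names to constructors. It is populated at
// compile time via init() in each component's package: if a package is
// not compiled in, its component never appears here.
var registry = map[string]func() string{}

func register(name string, ctor func() string) { registry[name] = ctor }

func init() {
	// In a real build these calls live in each compiled-in package.
	register("storage", func() string { return "storage ready" })
	register("txpool", func() string { return "txpool ready" })
}

// require fails fast with a clear error if a needed component
// was not compiled into this binary.
func require(name string) (func() string, error) {
	ctor, ok := registry[name]
	if !ok {
		return nil, fmt.Errorf("component %q not compiled into this binary", name)
	}
	return ctor, nil
}

func main() {
	names := make([]string, 0, len(registry))
	for n := range registry {
		names = append(names, n)
	}
	sort.Strings(names)
	fmt.Println("registered:", names)

	// A minimal L1-only binary omits rollup drivers, so this fails fast.
	if _, err := require("rollup-driver"); err != nil {
		fmt.Println("error:", err)
	}
}
```

This mirrors the familiar Go pattern of side-effect registration (as with database/sql drivers), with the node builder querying the registry at startup instead of importing components directly.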


See Component Framework for the full integration plan including the components.cfg code generation approach and the incremental extraction wave plan.
