# Deployment Architecture

Cocoon's MVP stack runs as a set of Docker containers orchestrated via Docker Compose. This page describes each service, its role, and the network topology — both for a single-chain deployment and the two-chain interoperability demo.

## Services Overview

| Service         | Port | Image               | Description                                                                                |
| --------------- | ---- | ------------------- | ------------------------------------------------------------------------------------------ |
| `erigon`        | 8545 | custom Erigon build | Chain A — private EVM chain (Chain ID 33, 1-second Clique PoA blocks)                      |
| `erigon2`       | 8555 | custom Erigon build | Chain B — second private chain for interop demo (Chain ID 34)                              |
| `deploy`        | —    | Foundry one-shot    | Deploys all contracts to Chain A on first boot                                             |
| `deploy-chain2` | —    | Foundry one-shot    | Deploys contracts to Chain B for interop testing                                           |
| `backend`       | 8546 | Go service          | JSON-RPC proxy for Chain A: session auth, IBAN registry, audit log, permission enforcement |
| `backend2`      | 8556 | Go service          | JSON-RPC proxy for Chain B                                                                 |
| `user_db`       | 8548 | Python FastAPI      | Shared user identity, authentication, KYC, and session management service                  |
| `dashboard`     | 3000 | Next.js 14          | Admin dashboard: user management, KYC review, permissions, audit log                       |
| `frontend`      | 3001 | Next.js 14          | Investor portal for Chain A: portfolio, payments, swaps, MMF subscribe/redeem              |
| `frontend2`     | 3011 | Next.js 14          | Investor portal for Chain B (interop demo)                                                 |
| `explorer`      | 3002 | Next.js 14          | Block explorer: real-time blocks, transactions, address resolution                         |

{% hint style="info" %}
In the single-chain configuration (`docker compose up erigon deploy backend user_db dashboard frontend explorer`), `erigon2`, `backend2`, `deploy-chain2`, and `frontend2` are not started, reducing resource requirements significantly.
{% endhint %}

## Network Topology

```mermaid
flowchart TB
    subgraph Public["Public / Browser-facing"]
        Dashboard["Admin Dashboard\n:3000"]
        Frontend["Investor Portal\n:3001"]
        Explorer["Block Explorer\n:3002"]
    end

    subgraph Internal["Internal Services"]
        Backend["Backend Proxy\n:8546"]
        UserDB["User DB\n:8548"]
    end

    subgraph Chain["Private Chain"]
        Erigon["Erigon Node\n:8545 (RPC)\n:30303 (P2P)"]
    end

    Dashboard --> Backend
    Frontend --> Backend
    Frontend --> UserDB
    Dashboard --> UserDB
    Explorer --> Erigon
    Backend --> Erigon
    Backend --> UserDB
```

{% hint style="warning" %}
Port 8545 (Erigon RPC) must **never** be exposed to external networks. All external RPC traffic must pass through the backend proxy on port 8546, which enforces session validation, CORS, and audit logging. In production, bind Erigon to `127.0.0.1` and restrict Docker port publishing accordingly.
{% endhint %}
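A minimal sketch of the corresponding port publishing in `docker-compose.yml` (service names match the table above; the exact file layout is assumed):

```yaml
services:
  erigon:
    # Publish the RPC port on the loopback interface only; other containers
    # still reach it over the internal Docker network by service name.
    ports:
      - "127.0.0.1:8545:8545"
  backend:
    # The auth-enforced proxy is the only RPC endpoint published externally.
    ports:
      - "8546:8546"
```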

## Port Allocation

| Port     | Service               | Visibility    | Notes                                     |
| -------- | --------------------- | ------------- | ----------------------------------------- |
| **8545** | Erigon Chain A (RPC)  | Internal only | Direct access bypasses all auth and audit |
| **8546** | Backend proxy Chain A | Public        | Auth-enforced gateway for all RPC calls   |
| **8547** | Prover                | Internal      | ZK proof generation service               |
| **8548** | User DB               | Internal      | Shared by all services                    |
| **8551** | Erigon Engine API     | Internal      | Consensus/execution engine communication  |
| **8555** | Erigon Chain B (RPC)  | Internal only | Second chain for interop demo             |
| **8556** | Backend proxy Chain B | Internal      |                                           |
| **3000** | Admin dashboard       | Restricted    | Admin accounts only                       |
| **3001** | Investor portal (A)   | Authenticated | Session required                          |
| **3011** | Investor portal (B)   | Authenticated |                                           |
| **3002** | Block explorer        | Public (demo) | Can be restricted in production           |

## Startup Sequence

Docker Compose health checks enforce the correct initialization order:

```mermaid
flowchart LR
    E[erigon\nhealthy] --> D[deploy\nexits 0]
    D --> B[backend\nhealthy]
    B --> DA[dashboard\nrunning]
    B --> FE[frontend\nrunning]
    B --> EX[explorer\nrunning]
    E --> EX
    U[user_db\nhealthy] --> DA
    U --> FE
```

1. Erigon starts and waits until the node is healthy (accepting RPC calls).
2. The `deploy` one-shot container deploys all Solidity contracts and writes addresses to `deployments.json`.
3. The backend proxy starts, reading `deployments.json` to configure the `PermissionRegistry` address.
4. The user DB starts independently (it has no chain dependency).
5. Dashboard, frontend, and explorer start once backend and user DB are healthy.

If `deploy` exits with a non-zero code, the contracts were not deployed. Check logs with `docker compose logs deploy` before starting the other services.
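The handoff in steps 2 and 3 can be sketched as follows. The schema of `deployments.json` is not documented here, so the flat name-to-address map and the addresses below are hypothetical; a real consumer would use a proper JSON parser such as `jq`:

```shell
#!/usr/bin/env bash
# Hypothetical deployments.json as the deploy one-shot might write it.
cat > /tmp/deployments.json <<'EOF'
{
  "PermissionRegistry": "0x5FbDB2315678afecb367f032d93F642f64180aa3",
  "MMFToken": "0xe7f1725E7734CE288F8367e1Bb143E90bb3F0512"
}
EOF

# Extract the PermissionRegistry address, as the backend does on startup
# (grep is used here only to keep the sketch dependency-free).
REGISTRY_ADDR=$(grep -o '"PermissionRegistry": *"0x[0-9a-fA-F]*"' /tmp/deployments.json \
  | grep -o '0x[0-9a-fA-F]*')
echo "$REGISTRY_ADDR"
```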

## Contract Deployment

On first boot, the `deploy` container runs `contracts/deploy.sh`, which:

1. Deploys the core contracts: `PermissionRegistry`, `MMFToken`, `Stablecoin`, `StateAnchorRegistry`, `IdentityRegistry`, `ClaimTopicsRegistry`, `TrustedIssuersRegistry`
2. Deploys the Uniswap V3 infrastructure: `UniswapV3Factory`, `SwapRouter`, `NonfungiblePositionManager`, `QuoterV2`
3. Deploys six RWA tokens: `SPYToken`, `TLTToken`, `BTCToken`, `ETHToken`, `EURToken`, `CHFToken`
4. Creates 15 trading pools, one for each pair of the six RWA tokens
5. Seeds initial liquidity
6. Writes all addresses to `deployments.json`

Contract addresses are stable across ordinary restarts because chain state, including the deployed contracts, persists in the `erigon-data` volume. A full reset with `docker compose down -v` wipes that state; contracts are redeployed on the next boot and the frontend must be rebuilt with the new addresses.

## Two-Chain Interoperability Setup

For the cross-chain demo, a second chain (Chain B, Chain ID 34) runs alongside Chain A. The interop feature bridges assets atomically between the two chains via the `StateAnchorRegistry` contract deployed on each.

Additional environment variables required for interop:

```dotenv
INTEROP_ENABLED=true
INTEROP_ADMIN=0xYourAdminAddress
INTEROP_ADMIN_KEY=your_admin_private_key
PEER_CHAINS=34=http://erigon2:8545=/app/deployments.chain2.json
```
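The `PEER_CHAINS` value packs three fields per peer (chain ID, RPC URL, deployments file) separated by `=`. A sketch of how a consumer might split one entry, assuming the backend parses it this way (splitting on `=` is safe here because neither the URL nor the path contains one):

```shell
#!/usr/bin/env bash
# Split one PEER_CHAINS entry: <chainId>=<rpcUrl>=<deploymentsFile>
PEER_CHAINS="34=http://erigon2:8545=/app/deployments.chain2.json"
IFS='=' read -r peer_chain_id peer_rpc_url peer_deployments <<< "$PEER_CHAINS"
echo "chain=$peer_chain_id rpc=$peer_rpc_url deployments=$peer_deployments"
```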

The interop listener on each backend proxy watches the peer chain for state anchor events and processes cross-chain messages.

## Data Volumes

| Volume         | Service   | Contains                                        |
| -------------- | --------- | ----------------------------------------------- |
| `erigon-data`  | `erigon`  | Chain A block data, state, chain DB             |
| `erigon2-data` | `erigon2` | Chain B block data                              |
| `userdb-data`  | `user_db` | User accounts, sessions, KYC documents (SQLite) |
| `backend-data` | `backend` | Audit log SQLite database                       |
| `prover-data`  | `prover`  | Generated ZK proofs, proof queue                |

{% hint style="warning" %}
`docker compose down -v` deletes all volumes. Chain A and B data, all user accounts, KYC submissions, and audit logs will be lost. Use only for a clean development reset.
{% endhint %}

## Resource Requirements

| Configuration         | RAM               | Disk  | Notes                   |
| --------------------- | ----------------- | ----- | ----------------------- |
| Single chain          | 8 GB minimum      | 10 GB | Chain A only            |
| Single chain + prover | 16 GB recommended | 20 GB | Proof generation active |
| Dual chain (interop)  | 16 GB minimum     | 30 GB | Chains A + B            |

The prover is the most CPU-intensive service. For local development without proof generation, set `PROVER_BACKEND=mock` to skip actual proof computation.
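A minimal `.env` override for a lightweight single-chain development run; both variable names appear on this page, and the comments are only suggestions:

```dotenv
PROVER_BACKEND=mock   # skip real ZK proof computation
INTEROP_ENABLED=false # single chain: no peer-chain listener
```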
