Node Sync System
Nimiq implements a two-phase synchronization architecture that separates epoch-level state synchronization from real-time block processing. This design enables efficient sync strategies tailored to different node capabilities.
All nodes follow a two-phase synchronization pattern: macro sync followed by live sync. During macro sync, nodes download and verify macro blocks (checkpoints/epochs) to reach the current network state efficiently. Once macro sync completes, nodes transition to live sync to receive and validate the latest micro blocks in real-time. This modular approach allows each node type to optimize for its specific use case.
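The two-phase pattern can be sketched as a small state machine. This is a minimal illustration only, with made-up names (`SyncPhase`, `NodeSync`, `step`) and a counter standing in for actual block download and verification; it is not Nimiq's API.

```rust
// Minimal sketch of the two-phase sync pattern (illustrative names only).
#[derive(Debug, PartialEq)]
enum SyncPhase {
    MacroSync, // downloading and verifying macro blocks (checkpoints/epochs)
    LiveSync,  // processing the latest micro blocks in real time
}

struct NodeSync {
    phase: SyncPhase,
    macro_height: u32,   // how far macro sync has progressed
    network_height: u32, // the network's current macro head
}

impl NodeSync {
    fn new(network_height: u32) -> Self {
        Self { phase: SyncPhase::MacroSync, macro_height: 0, network_height }
    }

    /// Advance one step; transition to live sync once caught up.
    fn step(&mut self) {
        match self.phase {
            SyncPhase::MacroSync => {
                self.macro_height += 1; // stand-in for fetching one macro block
                if self.macro_height >= self.network_height {
                    self.phase = SyncPhase::LiveSync;
                }
            }
            SyncPhase::LiveSync => { /* validate incoming micro blocks */ }
        }
    }
}

fn main() {
    let mut sync = NodeSync::new(3);
    while sync.phase == SyncPhase::MacroSync {
        sync.step();
    }
    println!("reached {:?} at height {}", sync.phase, sync.macro_height);
}
```

The key property the sketch captures is that live sync only begins once macro sync has reached the network's macro head.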
Sync Comparison Table
| | History Node | Full Node | Light Node | Pico Node |
|---|---|---|---|---|
| Verification | Entire history | Full blocks with ZKPs | ZKPs | No ZKPs |
| Macro Sync Method | History Macro Sync | Light Macro Sync | Light Macro Sync | Pico Macro Sync* |
| Live Sync Method | Block Live Sync | State Live Sync | Block Live Sync | Block Live Sync |
| Consensus Level | Fully verified | Verified | Verified | Trust-based |
| Fallback | N/A | N/A | N/A | *Falls back to Light Macro Sync |
| Sync Speed | Slower, full chain from genesis | Efficient, grows with chain length | Fast, includes proof verification | Faster, based on peer responses |
| Web Client | Not supported | Not supported | Supported | Supported |
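The table's strategy assignments can be encoded as a simple lookup. This is an illustrative sketch only; `NodeType`, `macro_sync`, and `live_sync` are made-up names, not Nimiq's configuration API.

```rust
// Hedged sketch: the comparison table above as a lookup (illustrative names).
#[derive(Clone, Copy, Debug)]
enum NodeType { History, Full, Light, Pico }

fn macro_sync(node: NodeType) -> &'static str {
    match node {
        NodeType::History => "History Macro Sync",
        // Full and light nodes share the ZKP-verified macro sync path.
        NodeType::Full | NodeType::Light => "Light Macro Sync",
        NodeType::Pico => "Pico Macro Sync (falls back to Light Macro Sync)",
    }
}

fn live_sync(node: NodeType) -> &'static str {
    match node {
        // Only full nodes maintain the complete state during live sync.
        NodeType::Full => "State Live Sync",
        _ => "Block Live Sync",
    }
}

fn main() {
    for n in [NodeType::History, NodeType::Full, NodeType::Light, NodeType::Pico] {
        println!("{:?}: {} + {}", n, macro_sync(n), live_sync(n));
    }
}
```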
History Nodes
- Store the complete blockchain history from genesis
- Support deep chain queries and historical analysis
- Serve historical data to other nodes
- Can act as validators (producing blocks and validating transactions) and as prover nodes (ZKP generation)
- Rely on other history nodes for initial sync
Full Nodes
- Maintain the complete current state with pruned history, retaining full validation capability
- Serve data to other nodes and verify ZKPs
- Can act as validators (producing blocks and validating transactions) and as prover nodes (ZKP generation)
- Rely on full or history nodes for initial sync
Light Nodes
- Latest election block with ZKP and subsequent micro block headers only
- Transaction verification and sending with cryptographic security guarantees
- Browser/mobile deployment, web client integration (WASM support)
- Rely on full or history nodes for data availability and ZKP proofs
Pico Nodes
- Sync with the latest election block only (no historical data or ZKP verification)
- Ultra-fast startup with trust-based consensus and automatic fallback to secure sync
- Development environments, testing, and trusted network scenarios
- Rely on full or history nodes for data availability and fallback verification
Service Nodes
Prover Nodes: Generate zero-knowledge proofs for light and full node sync. Require significant computational resources.
Validator Nodes: Produce blocks and participate in consensus. Any node running a full or history client with a minimum deposit of 100,000 NIM can become a validator.
Architecture Components
Coordination Layer: `Consensus` and `Syncer` components manage the sync lifecycle and peer relationships, enabling seamless transitions between sync phases.
Pluggable Strategies: `MacroSync` and `LiveSync` trait implementations allow optimization for different resource constraints.
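A simplified sketch of what pluggable strategy traits can look like. The real Nimiq traits are async and stream-based; the signatures, struct names, and verification logic below are illustrative assumptions, not the actual crate API.

```rust
// Illustrative sketch of pluggable sync-strategy traits (simplified; the real
// implementations are async and stream-based).
trait MacroSync {
    /// Bring the chain to the latest verified macro state; return its height.
    fn sync_to_head(&mut self) -> Result<u32, String>;
}

trait LiveSync {
    /// Process one incoming block; return the resulting chain height.
    fn on_block(&mut self, height: u32) -> u32;
}

/// Stand-in light-client strategy: accept the head only if its ZKP verifies.
struct LightMacroSync { zkp_valid: bool, head: u32 }

impl MacroSync for LightMacroSync {
    fn sync_to_head(&mut self) -> Result<u32, String> {
        if self.zkp_valid {
            Ok(self.head)
        } else {
            Err("ZKP verification failed".into())
        }
    }
}

/// Stand-in live strategy: track the highest block seen.
struct BlockLiveSync { height: u32 }

impl LiveSync for BlockLiveSync {
    fn on_block(&mut self, height: u32) -> u32 {
        if height > self.height { self.height = height; }
        self.height
    }
}

fn main() {
    // Trait objects let the coordinator swap strategies per node type.
    let mut macro_sync: Box<dyn MacroSync> =
        Box::new(LightMacroSync { zkp_valid: true, head: 100 });
    let head = macro_sync.sync_to_head().expect("macro sync failed");
    let mut live: Box<dyn LiveSync> = Box::new(BlockLiveSync { height: head });
    println!("live height: {}", live.on_block(head + 1));
}
```

Trait objects (`Box<dyn MacroSync>`) are one way a coordinator can select a strategy at runtime based on node configuration.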
Network Layer: Request/response and gossip protocols with async streams for efficient concurrent peer processing.
Queue Architecture: Automatic peer rotation, retry logic, and backpressure control ensure reliable data retrieval.
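Peer rotation with retry can be sketched as follows. This is a toy synchronous model with invented names (`RequestQueue`, `request`); the real queue also applies backpressure through bounded buffers and async streams, which are elided here.

```rust
// Sketch of round-robin peer rotation with bounded retries (illustrative).
struct RequestQueue {
    peers: Vec<&'static str>,
    next: usize,       // index of the next peer to try
    max_retries: usize,
}

impl RequestQueue {
    /// Rotate through peers until one responds or retries are exhausted.
    fn request<F>(&mut self, mut try_peer: F) -> Option<&'static str>
    where
        F: FnMut(&str) -> bool, // stand-in for sending a network request
    {
        for _ in 0..self.max_retries {
            let peer = self.peers[self.next % self.peers.len()];
            self.next += 1; // rotate to the next peer on every attempt
            if try_peer(peer) {
                return Some(peer);
            }
        }
        None // all retries exhausted; caller can back off or resync
    }
}

fn main() {
    let mut q = RequestQueue { peers: vec!["a", "b", "c"], next: 0, max_retries: 5 };
    // In this toy run, only peer "c" answers.
    let winner = q.request(|p| p == "c");
    println!("served by {:?}", winner);
}
```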
Further Reading
Understanding the System
- Architecture - Core components, data flow, and design patterns
- Node Sync System - Node lifecycles and sync mode selection
Implementation Details
- Traits and Abstractions - System design and component coordination
- Network Protocol - Message specifications and communication patterns
Sync Strategy Deep Dives
Macro Sync Strategies:
- History Macro Sync - Full chain download for history nodes
- Light Macro Sync - ZKP-verified state sync for full nodes
- Pico Macro Sync - Optimistic sync with automatic fallback
Live Sync Strategies:
- Block Live Sync - Real-time block synchronization
- State Live Sync - Complete state maintenance for full nodes
Sync Lifecycle
When a node starts, it follows this coordination pattern:
- Peer Discovery through the network layer
- Macro Sync using the strategy selected by node configuration
- Live Sync Transition to maintain real-time synchronization
- Consensus Detection through peer agreement analysis
The `Consensus` component orchestrates this process, while the `Syncer` manages strategy-specific implementations through pluggable trait interfaces.
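The final step, consensus detection, can be illustrated with a toy agreement check. The majority threshold and the `Lifecycle` type below are illustrative assumptions, not Nimiq's actual peer agreement rule.

```rust
// Toy sketch of consensus detection via peer agreement (illustrative rule).
struct Lifecycle {
    peers: Vec<u32>, // each discovered peer's reported head height
}

impl Lifecycle {
    /// Consensus is "established" once a strict majority of known peers
    /// agree with our head height (a stand-in agreement analysis).
    fn consensus_established(&self, our_head: u32) -> bool {
        let agreeing = self.peers.iter().filter(|&&h| h == our_head).count();
        !self.peers.is_empty() && agreeing * 2 > self.peers.len()
    }
}

fn main() {
    // 1. Peer discovery produced three peers reporting their head heights.
    let lc = Lifecycle { peers: vec![100, 100, 99] };
    // 2-3. Macro sync and the live sync transition brought us to height 100.
    // 4. Consensus detection through peer agreement:
    println!("consensus: {}", lc.consensus_established(100));
}
```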
Developer Focus
This documentation targets developers working on sync logic and node implementations.
For node operation, see the Node Setup Guide.