Purpose and Scope

This document describes the multi-layer caching mechanisms and snapshot system used in StableNet to optimize state access.
The caching architecture ranges from in-memory LRU caches for frequently accessed data to a state snapshot system that provides fast access to account and storage data without traversing the trie.
For the underlying Merkle Patricia Trie data structure and database interfaces, see Database Layer and Merkle Patricia Trie.
For historical data management and archive strategies, see Ancient Store and Data Lifecycle.

StateDB Caching Architecture

StateDB acts as the top-level caching layer for all blockchain state and maintains multiple in-memory data structures to minimize repeated and expensive trie and database accesses during transaction execution.

Account and Storage Cache Maps

StateDB maintains the following cache maps to track state changes during block execution:
  • stateObjects (map[common.Address]*stateObject): live account objects modified during execution
  • accounts (map[common.Hash][]byte): slim RLP-encoded account data staged for commit
  • storages (map[common.Hash]map[common.Hash][]byte): storage slot changes to be committed
  • accountsOrigin (map[common.Address][]byte): pre-change account state, used for diff computation
  • storagesOrigin (map[common.Address]map[common.Hash][]byte): pre-change storage state, used for diff computation
  • stateObjectsPending (map[common.Address]struct{}): objects finalised but not yet applied to the trie
  • stateObjectsDirty (map[common.Address]struct{}): objects modified in the current execution
  • stateObjectsDestruct (map[common.Address]*types.StateAccount): accounts self-destructed within the block
stateObjects is the live state object cache maintained during execution of the current block.
Each account is loaded only once on first access and reused thereafter.
Account loading flow summary:
  1. Check the stateObjects cache
  2. If missing, attempt to load from snapshot
  3. If snapshot is unavailable or misses, load from trie
  4. Create a stateObject wrapper
  5. Cache in stateObjects and return
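The loading flow above can be sketched as follows. This is a simplified illustration, not the actual StableNet implementation: the type names (Address, Account) and the map-backed snapshot and trie stand in for the real interfaces.

```go
package main

import "fmt"

// Simplified stand-ins for the real types; names are illustrative only.
type Address string
type Account struct {
	Nonce   uint64
	Balance uint64
}

type StateDB struct {
	stateObjects map[Address]*Account // live object cache
	snapshot     map[Address]*Account // flat snapshot layer (may be nil)
	trie         map[Address]*Account // authoritative trie backend
}

// getStateObject follows the documented order: cache, snapshot, trie.
func (s *StateDB) getStateObject(addr Address) *Account {
	// 1. Check the live cache first.
	if obj, ok := s.stateObjects[addr]; ok {
		return obj
	}
	var acct *Account
	// 2. Attempt to load from the snapshot, if one is available.
	if s.snapshot != nil {
		acct = s.snapshot[addr]
	}
	// 3. Fall back to the trie when the snapshot is unavailable or misses.
	if acct == nil {
		acct = s.trie[addr]
	}
	if acct == nil {
		return nil // account does not exist
	}
	// 4-5. Wrap in a mutable object, cache it, and return.
	obj := *acct
	s.stateObjects[addr] = &obj
	return &obj
}

func main() {
	s := &StateDB{
		stateObjects: map[Address]*Account{},
		snapshot:     map[Address]*Account{"alice": {Nonce: 1, Balance: 100}},
		trie:         map[Address]*Account{"bob": {Nonce: 0, Balance: 7}},
	}
	fmt.Println(s.getStateObject("alice").Balance) // served from the snapshot
	fmt.Println(s.getStateObject("bob").Balance)   // snapshot miss, served from the trie
}
```

Because step 5 caches the wrapper, a second access to the same account never touches the snapshot or trie again within the block.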

State Object Storage Caching

Each stateObject maintains a three-tier cache for storage slot access. Storage tiers and their roles:
  • dirtyStorage
    Slots modified in the current transaction. Cleared at transaction end.
  • pendingStorage
    Changes that are finalised but not yet committed to the trie.
  • originStorage
    Baseline state loaded from snapshot or trie.
This structure guarantees:
  • Transaction-level rollback capability
  • Avoidance of duplicate trie writes for the same slot
  • Accurate diff computation of state changes
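A minimal sketch of the three tiers and how reads and writes flow through them (simplified types; the duplicate-write check and finalise step are illustrative, not the exact StableNet code):

```go
package main

import "fmt"

type Hash string

// stateObject with the three storage tiers described above.
type stateObject struct {
	originStorage  map[Hash]Hash // baseline loaded from snapshot or trie
	pendingStorage map[Hash]Hash // finalised but not yet committed
	dirtyStorage   map[Hash]Hash // modified in the current transaction
}

// GetState checks the tiers from newest to oldest.
func (o *stateObject) GetState(key Hash) Hash {
	if v, ok := o.dirtyStorage[key]; ok {
		return v
	}
	if v, ok := o.pendingStorage[key]; ok {
		return v
	}
	return o.originStorage[key]
}

// SetState records a write only if it actually changes the slot,
// which avoids duplicate trie writes for the same value.
func (o *stateObject) SetState(key, value Hash) {
	if o.GetState(key) == value {
		return
	}
	o.dirtyStorage[key] = value
}

// finalise moves dirty slots into pending at transaction end.
func (o *stateObject) finalise() {
	for k, v := range o.dirtyStorage {
		o.pendingStorage[k] = v
	}
	o.dirtyStorage = map[Hash]Hash{}
}

func main() {
	obj := &stateObject{
		originStorage:  map[Hash]Hash{"slot0": "old"},
		pendingStorage: map[Hash]Hash{},
		dirtyStorage:   map[Hash]Hash{},
	}
	obj.SetState("slot0", "new")
	fmt.Println(obj.GetState("slot0")) // read served from dirtyStorage
	obj.finalise()
	fmt.Println(len(obj.dirtyStorage)) // 0: cleared at transaction end
}
```

Discarding dirtyStorage instead of finalising it is what gives transaction-level rollback; diffing pendingStorage against originStorage yields the change set.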

State Snapshot System

The snapshot system provides a flat state representation that allows account and storage data to be queried without directly traversing the Merkle Patricia Trie.
It plays a critical role in both snap synchronization and normal block execution performance.

Snapshot Architecture

The snapshot system consists of three components:
  1. Disk Layer
    A persistent snapshot at a specific block height, holding flat mappings of
    account hash → account data, and storage hash → storage value.
  2. Diff Layers
    In-memory chained layers that store per-block state changes.
    Only modified accounts and storage slots are stored.
  3. Snapshot Tree
    Manages snapshots per state root and provides the Snapshot(root) interface.
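The layering can be sketched as a simple parent chain: a lookup walks from the newest diff layer down to the disk layer and returns the first hit. This is an illustrative model only (deletion markers and the per-root tree index are omitted):

```go
package main

import "fmt"

// A snapshot layer: either the persistent disk layer (parent == nil)
// or an in-memory diff layer holding one block's changes.
type layer struct {
	parent   *layer
	accounts map[string][]byte // diff layers hold only modified accounts
}

// Account resolves a lookup through the diff chain down to the disk layer.
func (l *layer) Account(hash string) []byte {
	for cur := l; cur != nil; cur = cur.parent {
		if data, ok := cur.accounts[hash]; ok {
			return data
		}
	}
	return nil // not present in any layer
}

func main() {
	disk := &layer{accounts: map[string][]byte{"acct1": []byte("v0")}}
	diff := &layer{parent: disk, accounts: map[string][]byte{"acct1": []byte("v1")}}
	fmt.Printf("%s\n", diff.Account("acct1")) // the newest layer wins
}
```

Because each diff layer stores only its block's changes, the chain stays small, and flattening old diff layers into the disk layer keeps lookup depth bounded.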

Snapshot-Based State Access

When snapshots are enabled, StateDB queries state in the following order:
  • Account lookup: snap.Account(hash)
  • Storage lookup: snap.Storage(addrHash, slotHash)
  • Fallback to trie only on miss
Performance benefits of this approach:
  • No trie node decoding
  • No hash verification
  • Direct access to flat structures
  • Typically 10–100× faster account lookups

Trie Prefetcher

The trie prefetcher proactively loads trie nodes that are likely to be accessed during transaction execution, reducing commit latency. Execution flow:
  1. StartPrefetcher() is called at block execution start
  2. Modified storage slot information is collected during finalise()
  3. Worker goroutines asynchronously load trie nodes along those paths
  4. Cached tries are reused during IntermediateRoot() or commit
  5. StopPrefetcher() is called at block end for cleanup
The prefetcher operates in parallel with execution and does not increase transaction latency.
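The worker-pool pattern behind this flow can be sketched as below. The names (prefetcher, schedule, stop) and the channel-based design are illustrative assumptions, not the actual StableNet API; loadNode stands in for an expensive trie read:

```go
package main

import (
	"fmt"
	"sync"
)

// A minimal prefetcher sketch: worker goroutines warm a shared cache
// with trie nodes for the paths scheduled during execution.
type prefetcher struct {
	mu    sync.Mutex
	cache map[string][]byte
	wg    sync.WaitGroup
	tasks chan string
}

func newPrefetcher(workers int, loadNode func(string) []byte) *prefetcher {
	p := &prefetcher{cache: map[string][]byte{}, tasks: make(chan string, 64)}
	for i := 0; i < workers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for path := range p.tasks {
				node := loadNode(path) // slow read happens off the hot path
				p.mu.Lock()
				p.cache[path] = node
				p.mu.Unlock()
			}
		}()
	}
	return p
}

// schedule is called as finalise() discovers touched slots.
func (p *prefetcher) schedule(path string) { p.tasks <- path }

// stop drains the workers; the warmed cache is then reused at commit.
func (p *prefetcher) stop() map[string][]byte {
	close(p.tasks)
	p.wg.Wait()
	return p.cache
}

func main() {
	load := func(path string) []byte { return []byte("node:" + path) }
	p := newPrefetcher(4, load)
	for _, path := range []string{"0x01", "0x02", "0x03"} {
		p.schedule(path)
	}
	cache := p.stop()
	fmt.Println(len(cache)) // 3 nodes warmed before commit
}
```

The executing thread only pushes paths onto a channel, so scheduling is cheap; the expensive database reads happen concurrently on the workers.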

Journal and Snapshot Rollback

StateDB supports rolling back to previous states at any point during transaction execution via journal-based change tracking. Each state change is recorded as a journalEntry, including:
  • Balance changes
  • Nonce changes
  • Code changes
  • Storage updates
  • Selfdestruct operations
  • Account creation
When Snapshot() is called, the current journal length is recorded.
Calling RevertToSnapshot() replays the journal in reverse to restore state.
This mechanism is essential for:
  • EVM REVERT
  • Out-of-gas handling
  • Exception recovery
  • Speculative execution
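The journal mechanism can be sketched as a stack of undo operations; the closure-based journalEntry here is a simplification of the real typed entries, and only balance changes are modeled:

```go
package main

import "fmt"

// journalEntry undoes one state change; closures keep the sketch compact.
type journalEntry func()

type StateDB struct {
	balances map[string]uint64
	journal  []journalEntry
}

func (s *StateDB) SetBalance(addr string, amount uint64) {
	prev, existed := s.balances[addr]
	// Record the inverse operation before mutating.
	s.journal = append(s.journal, func() {
		if existed {
			s.balances[addr] = prev
		} else {
			delete(s.balances, addr)
		}
	})
	s.balances[addr] = amount
}

// Snapshot records the current journal length as a revision id.
func (s *StateDB) Snapshot() int { return len(s.journal) }

// RevertToSnapshot replays the journal in reverse down to the revision.
func (s *StateDB) RevertToSnapshot(rev int) {
	for i := len(s.journal) - 1; i >= rev; i-- {
		s.journal[i]()
	}
	s.journal = s.journal[:rev]
}

func main() {
	s := &StateDB{balances: map[string]uint64{}}
	s.SetBalance("alice", 100)
	rev := s.Snapshot()
	s.SetBalance("alice", 0) // e.g. an inner call that later REVERTs
	s.RevertToSnapshot(rev)
	fmt.Println(s.balances["alice"]) // 100: the change was rolled back
}
```

Reverse replay matters: entries recorded later may depend on state written by earlier entries, so undoing newest-first restores the exact pre-snapshot state.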

State Commit and Cache Management

State commit proceeds in three stages.

Stage 1: Finalise

  • Clean up dirty objects
  • Process selfdestructs
  • Move dirtyStorage → pendingStorage
  • Clear the journal

Stage 2: IntermediateRoot

  • Hash storage tries of all pending objects
  • Update the account trie
  • Compute the intermediate state root

Stage 3: Commit

  • Commit all storage tries
  • Commit the account trie
  • Generate a NodeSet for modified nodes
  • Persist changes in batch via triedb.Update()

Fast Storage Deletion

Deleting large storage tries via full traversal is inefficient.
StateDB provides a fast deletion path using snapshots.
How it works:
  1. Iterate flat storage slots from the snapshot
  2. Build an in-memory StackTrie
  3. Generate delete markers for each node
  4. Fallback to the slow path if size limits are exceeded
  5. Return a NodeSet containing delete markers
This approach operates entirely in memory without disk access.
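A simplified sketch of the fast path's control flow, assuming a flat slot map from the snapshot; the StackTrie hashing step is omitted and the function name and signature are illustrative:

```go
package main

import "fmt"

// deleteStorage iterates the flat snapshot slots, emits a delete marker
// per slot, and bails out to the slow path when a size limit is exceeded.
func deleteStorage(flatSlots map[string][]byte, limit int) (markers map[string]bool, aborted bool) {
	markers = map[string]bool{}
	for slot := range flatSlots {
		if len(markers) >= limit {
			return nil, true // too large: caller falls back to trie traversal
		}
		markers[slot] = true // true = delete this node at commit
	}
	return markers, false
}

func main() {
	slots := map[string][]byte{"s1": {1}, "s2": {2}}
	markers, aborted := deleteStorage(slots, 100)
	fmt.Println(len(markers), aborted) // 2 false: whole trie marked for deletion
}
```

The key point is that every slot key comes from the flat snapshot, so no trie node is read from disk to discover what must be deleted.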

StateDB Copy Semantics

StateDB.Copy() is used for parallel execution and speculative state creation. Shared (read-only):
  • Database interface
  • Snapshot tree
  • Base tries
Deep-copied (writable):
  • State object maps
  • Change-tracking maps
  • Journal
  • Logs and preimages
  • Access lists and transient storage
This design enables:
  • Parallel transaction execution
  • Minimal memory duplication
  • Complete isolation between state instances
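The shared-versus-deep-copied split can be sketched as below; the field set is heavily reduced and the types are illustrative, but the pattern (share read-only pointers, clone writable maps and slices) is the one described above:

```go
package main

import "fmt"

type Database struct{ name string } // heavy read-only backend, shared

type StateDB struct {
	db      *Database         // shared: read-only interface
	objects map[string]uint64 // deep-copied: writable per instance
	journal []string          // deep-copied: writable per instance
}

// Copy shares read-only backends and deep-copies everything writable.
func (s *StateDB) Copy() *StateDB {
	cp := &StateDB{
		db:      s.db, // pointer shared, no duplication
		objects: make(map[string]uint64, len(s.objects)),
		journal: append([]string(nil), s.journal...),
	}
	for k, v := range s.objects {
		cp.objects[k] = v
	}
	return cp
}

func main() {
	orig := &StateDB{db: &Database{"disk"}, objects: map[string]uint64{"alice": 1}}
	cp := orig.Copy()
	cp.objects["alice"] = 99 // mutate the copy only
	fmt.Println(orig.objects["alice"], cp.objects["alice"]) // 1 99: full isolation
	fmt.Println(orig.db == cp.db)                           // true: backend shared
}
```

Sharing the backend pointers is safe precisely because they are never written through, which is what keeps memory duplication minimal.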

Performance Metrics and Instrumentation

StateDB collects detailed metrics across state processing. Example metrics include:
  • Account and storage read latency
  • Trie hashing time
  • Commit time
  • Snapshot access time
  • Number of updates and deletions
These metrics are used for:
  • Bottleneck identification
  • Cache hit-rate analysis
  • Snapshot effectiveness validation
  • Performance regression detection

Cleanup and Cache Eviction

Snapshot-Based Cleanup

  • Merge old diff layers
  • Refresh the disk layer
  • Optionally remove obsolete trie nodes

Cache Eviction Strategies

  • Trie node LRU caches
  • Separation of clean and dirty caches
  • Dirty caches evicted after commit
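A minimal sketch of the clean/dirty split, with LRU eviction omitted; the nodeCache type and its methods are illustrative, not the actual StableNet cache:

```go
package main

import "fmt"

// nodeCache separates committed (evictable) nodes from uncommitted ones.
type nodeCache struct {
	clean map[string][]byte // committed nodes, evictable (LRU omitted)
	dirty map[string][]byte // uncommitted nodes, must not be evicted
}

func (c *nodeCache) write(path string, node []byte) { c.dirty[path] = node }

// read prefers the dirty set, since it holds the newest data.
func (c *nodeCache) read(path string) ([]byte, bool) {
	if n, ok := c.dirty[path]; ok {
		return n, true
	}
	n, ok := c.clean[path]
	return n, ok
}

// commit promotes dirty entries into the clean cache and resets the
// dirty set, making the promoted entries eligible for eviction.
func (c *nodeCache) commit() {
	for k, v := range c.dirty {
		c.clean[k] = v
	}
	c.dirty = map[string][]byte{}
}

func main() {
	c := &nodeCache{clean: map[string][]byte{}, dirty: map[string][]byte{}}
	c.write("0xab", []byte("node"))
	c.commit()
	n, _ := c.read("0xab")
	fmt.Printf("%s\n", n) // node: promoted to the clean cache after commit
}
```

Keeping uncommitted nodes out of the evictable pool guarantees that eviction pressure can never drop data that has not yet been persisted.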

Memory Management

  • Journal cleanup at transaction end
  • Reset pending objects
  • Stop prefetcher
  • Structure sharing during StateDB copies
These mechanisms ensure that memory usage remains bounded even in long-running environments.