Build Your Own 100k+ TPS Chain Using Gravity SDK

This developer-focused guide explains how to build a fully functional, custom L1 blockchain application with the Gravity SDK framework. We walk through the end-to-end development process using a simple key-value store (KvStore) application as the running example (see the accompanying repository).

This guide covers:

  • Core Architecture: How Gravity SDK decouples complex consensus from application logic through a modular pipeline model.

  • Key Module Implementation: How to implement the TxPool and Executor/Committer modules that interface directly with Gravity SDK.

  • Full Lifecycle: The journey of a transaction—submission, ordering, execution, consensus, and durable persistence.


1. Core Architecture: The Decoupled Pipeline Model

At its core, building a blockchain node means constructing a distributed state machine. The design philosophy of Gravity SDK is to decompose this complexity into a clear, parallelizable pipeline model. This dramatically lowers the development barrier, allowing developers to focus on business logic rather than low-level networking or consensus algorithms.

The pipeline cleanly separates responsibilities between Gravity SDK and the application developer.

Gravity SDK’s Responsibilities (Consensus & Scheduling Layer)

  • Consensus Engine: The heart of Gravity SDK, responsible for the most challenging distributed-systems problems:

    • Transaction Ordering: Pulling transactions from the pool and determining their global order in a block.

    • Block Production & Broadcast: Packaging ordered transactions into blocks and propagating them across the network.

    • Networking: Managing node-to-node P2P communication.

    • Consensus Finalization: Running the AptosBFT consensus algorithm to ensure all honest nodes agree on both block contents and execution results.

  • BlockBufferManager: The bridge between the consensus layer and the execution layer. It tracks block states across their lifecycle (Ordered, Executed, Committed) and enables pipeline parallelism.

Developer’s Responsibilities (Application Logic / State Machine)

  • Transaction Pool (TxPool / Mempool): The entry point for user transactions, buffering them before consensus.

  • Executor: Where your business logic lives. It receives ordered blocks from the consensus engine, executes each transaction, and computes the resulting state transitions.

  • State / Storage: The blockchain’s ledger. It maintains the latest in-memory state (balances, KV pairs, etc.) and persists finalized changes to a database.

  • RPC Service: The application’s external interface, enabling users to query chain data or submit new transactions.

Key Advantages: Decoupling & Parallelism

By abstracting away networking and consensus complexity, Gravity SDK frees developers to focus solely on evolving the state machine through a few well-defined traits.

Furthermore, the three pipeline stages — Ordering (Consensus), Execution (Executor), and Commit (Committer) — run as independent, parallel tasks. This architecture enables high throughput (TPS) by fully leveraging concurrency.

This document will guide you through implementing the two most critical developer-side components: the transaction pool and the execution/commit logic. The architecture is illustrated in the diagram below:

[Architecture diagram: the Ordering (consensus), Execution, and Commit stages of the pipeline, connected through the BlockBufferManager]

2. Step 1: Implementing the Transaction Pool (TxPool) – Connecting to Transaction Sources

The transaction pool is the first stop for any transaction entering the consensus pipeline. After a user submits a transaction (txn), the consensus engine continuously pulls from your TxPool implementation, selecting the most suitable transactions to package into the next block.

2.1 Understanding Gravity SDK’s Transaction Format (VerifiedTxn)

Regardless of how your application-level transaction (Transaction) is defined, it must be convertible into the standardized VerifiedTxn format that Gravity SDK recognizes. This structure bridges your application logic and the SDK.

// Standardized transaction format defined by Gravity SDK
pub struct VerifiedTxn {
    // The transaction payload, serialized into bytes
    pub bytes: Vec<u8>,
    // The sender’s account address, used for identity and filtering
    pub sender: ExternalAccountAddress,
    // Nonce to prevent replay attacks
    pub sequence_number: u64,
    // Chain ID, ensuring transactions are not reused across different chains
    pub chain_id: ExternalChainId,
    // Transaction hash, serving as the unique identifier.
    // OnceCell ensures the hash is computed only once for efficiency.
    #[serde(skip)]
    pub committed_hash: OnceCell<TxnHash>,
}

In our KvStore example, we define an into_verified() method to perform this conversion. The process typically involves two steps:

  1. Serialization: Use a serialization library (e.g., serde + bcs) to encode the custom transaction object into a byte stream (Vec<u8>).

  2. Metadata Population: Fill in essential metadata such as sender address, nonce, and other fields to complete the VerifiedTxn structure.
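
For illustration, here is a minimal sketch of what this conversion could look like in the KvStore example. The KvTransaction and KvOperation types, the bcs serialization choice, and the direct construction of VerifiedTxn are assumptions made for this sketch; the repository's actual into_verified() may differ in its details.

// Minimal sketch of the conversion, assuming a simple KvTransaction type and
// bcs serialization. Names, fields, and the OnceCell import are illustrative
// assumptions, not the exact types from the example repository.
use once_cell::sync::OnceCell;
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
pub enum KvOperation {
    Set { key: String, value: String },
    Delete { key: String },
}

// Assumes ExternalAccountAddress and ExternalChainId implement Serialize/Deserialize.
#[derive(Clone, Serialize, Deserialize)]
pub struct KvTransaction {
    pub sender: ExternalAccountAddress,
    pub sequence_number: u64,
    pub chain_id: ExternalChainId,
    pub op: KvOperation,
}

impl KvTransaction {
    // Step 1: serialize the payload; Step 2: copy the metadata fields.
    pub fn into_verified(self) -> VerifiedTxn {
        let bytes = bcs::to_bytes(&self).expect("serialization should not fail");
        VerifiedTxn {
            bytes,
            sender: self.sender,
            sequence_number: self.sequence_number,
            chain_id: self.chain_id,
            // Left unset here; the hash is computed lazily on first access.
            committed_hash: OnceCell::new(),
        }
    }
}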

2.2 Implementing the TxPool Trait

The next step is to implement the TxPool trait provided by Gravity SDK. Among its methods, the most critical—and the only one you must implement—is best_txns.

pub trait TxPool: Send + Sync + 'static {
    // Returns a batch of the most suitable "Pending" transactions to be included in a block.
    fn best_txns(
        &self,
        filter: Option<Box<dyn Fn((ExternalAccountAddress, u64, TxnHash)) -> bool>>
    ) -> Box<dyn Iterator<Item = VerifiedTxn>>;
}

The purpose of this method is straightforward: return a batch of the highest-quality pending transactions from your pool for block inclusion.

Key detail: the filter function

Gravity SDK supplies a filter closure when calling best_txns. Apply it before returning any transactions; it removes transactions that the consensus engine has already cached or is actively processing.

Why is this necessary?

The consensus engine and your transaction pool operate asynchronously: after the engine pulls a batch of transactions, your pool may not have changed by the time of the next pull. Without the filter, the same transactions could be packaged into multiple blocks. The filter guarantees uniqueness and prevents duplication.
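
For illustration only, the snippet below sketches the kind of closure the engine could pass in, assuming it tracks the (sender, nonce) pairs it has already pulled. The real closure is constructed inside Gravity SDK; this sketch additionally assumes ExternalAccountAddress implements Eq and Hash.

// Illustrative only: a filter that skips (sender, nonce) pairs the engine is
// assumed to have pulled already. The real closure is built by the consensus
// engine; this sketch just shows the expected semantics.
use std::collections::HashSet;

fn example_filter(
    in_flight: HashSet<(ExternalAccountAddress, u64)>,
) -> Box<dyn Fn((ExternalAccountAddress, u64, TxnHash)) -> bool> {
    Box::new(move |(sender, nonce, _hash)| {
        // Return true to keep the transaction, false to drop it.
        !in_flight.contains(&(sender, nonce))
    })
}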


2.3 Example: KvStore Implementation

Below is the best_txns implementation from the KvStore example, with detailed inline commentary:

impl TxPool for KvStoreTxPool {
    fn best_txns(
        &self,
        filter: Option<Box<dyn Fn((ExternalAccountAddress, u64, TxnHash)) -> bool>>,
    ) -> Box<dyn Iterator<Item = VerifiedTxn>> {
        // Step 1: Take a snapshot of all transactions in the mempool.
        // Locking and cloning ensures thread safety during iteration.
        let txns = (*self.mempool.mempool.lock().unwrap()).clone();
        let filter = Arc::new(filter); // Wrap the filter in Arc for thread-safe sharing.

        // Step 2: Flatten user -> transaction list into a single iterator.
        let res = Box::new(txns.into_iter().flat_map(move |(addr, user_txns)| {
            let filter_clone = filter.clone();
            user_txns.into_iter().filter_map(move |(seq, txn)| {
                let verified_txn = txn.raw_txn.clone().into_verified();

                // Step 3: Apply filter if provided.
                if let Some(f) = filter_clone.as_ref() {
                    // `f` expects (address, nonce, hash) as input.
                    // If it returns false, this txn should be discarded.
                    if !f((
                        addr.clone(),
                        seq,
                        TxnHash::new(verified_txn.committed_hash()), // Compute txn hash
                    )) {
                        return None;
                    }
                }

                // Step 4: If filter passes (or no filter exists), return the transaction.
                Some(verified_txn)
            })
        }));
        res
    }
}

Once the TxPool trait is implemented, the Gravity SDK consensus engine can automatically start pulling transactions from your pool. It will call best_txns periodically, order the returned transactions, and generate blocks.

3. Step 2: Implementing Execution Logic (Executor) – Processing Ordered Blocks

Once the consensus engine agrees on the order of a batch of transactions, it produces an Ordered Block—a set of transactions and their agreed-upon order, accepted by all honest nodes in the network.

The Executor continuously fetches these ordered blocks from the BlockBufferManager in a dedicated asynchronous task, executes the transactions, and updates the in-memory state.


3.1 execute_task Pseudocode Walkthrough

// Example: Execution task logic (simplified)
// `start_num` tracks the next block number to execute; `max_size` caps how
// many blocks are fetched per batch.
pub async fn execute_task(state: State, pending_blocks: PendingBlocks) {
    loop {
        // Step 1: Fetch ordered blocks that have not yet been executed
        // from the BlockBufferManager. This is async; if no blocks are
        // available, it will wait.
        let ordered_blocks = get_block_buffer_manager()
            .get_ordered_blocks(start_num, max_size)
            .await;

        for (block, _) in ordered_blocks {
            // Step 2: Core execution logic
            // 2.1 Deserialize: Convert Gravity SDK's VerifiedTxn
            //     back into the application’s native transaction type.
            // 2.2 Apply business logic: Iterate over transactions
            //     and update the in-memory state.
            //     (e.g., in KvStore: `map.insert(key, value)`).
            // 2.3 Compute StateRoot: After all txns are executed,
            //     compute the new global state Merkle Root (StateRoot),
            //     a cryptographic commitment to the current state.
            let exec_res = Self::execute_block(block, &state, &pending_blocks).await;

            // Step 3: Report execution results back to BlockBufferManager.
            // This updates the block’s lifecycle from "Ordered" → "Executed".
            get_block_buffer_manager()
                .set_compute_res(block.block_meta.block_id, exec_res, ...)
                .await;

            // Step 4: Increment to the next block number.
            start_num += 1;
        }
    }
}
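
To make steps 2.1–2.3 concrete, here is a simplified sketch of per-block execution for a KvStore-style application. The KvTransaction/KvOperation types, the bcs decoding, and the flat SHA-256 commitment over sorted key-value pairs are assumptions for this sketch; the repository computes a proper Merkle root over its state, as described above.

// Simplified sketch of per-block execution for a KvStore-style application.
// Types and the hash-based state commitment are illustrative assumptions.
use sha2::{Digest, Sha256};
use std::collections::BTreeMap;

fn execute_block_sketch(
    txns: &[VerifiedTxn],
    state: &mut BTreeMap<String, String>,
) -> [u8; 32] {
    for txn in txns {
        // 2.1 Deserialize the SDK transaction back into the application type.
        let kv_txn: KvTransaction =
            bcs::from_bytes(&txn.bytes).expect("block txns were serialized by this node");

        // 2.2 Apply the business logic to the in-memory state.
        match kv_txn.op {
            KvOperation::Set { key, value } => {
                state.insert(key, value);
            }
            KvOperation::Delete { key } => {
                state.remove(&key);
            }
        }
    }

    // 2.3 Compute a commitment to the resulting state. A production chain uses
    // a Merkle tree; hashing the sorted key-value pairs is enough to show the idea.
    let mut hasher = Sha256::new();
    for (k, v) in state.iter() {
        hasher.update(k.as_bytes());
        hasher.update(v.as_bytes());
    }
    hasher.finalize().into()
}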

Key Insight: In-Memory Execution & Result Consensus

At this stage, all state changes occur only in memory. Why? Although transactions have been executed, the results have not yet gone through consensus. Only after the network agrees on the execution results can the block be considered finalized.

The computed results (StateRoot/ExecHash) are sent back to Gravity SDK for a second round of consensus.

4. Step 3: Implementing Commit Logic (Committer) – Persisting the Final State

A block and its associated state changes can be considered finalized only after a quorum of nodes has also reached consensus on the execution result (i.e., the StateRoot/ExecHash). At this point, it is safe to persist the data to disk. This is the role of the Commit stage.

Similar to the execution task, the commit logic runs in its own dedicated asynchronous task.

4.1 commit_task Pseudocode Walkthrough

// Example: Commit task logic (simplified)
// `start_num` tracks the next block number to commit; `max_size` caps how
// many blocks are fetched per batch.
pub async fn commit_task(storage: Storage, pending_blocks: PendingBlocks) {
    loop {
        // Step 1: Fetch blocks whose execution results
        // have already reached consensus and are ready to commit.
        // Within Gravity SDK, these blocks are now marked as "Committed".
        let committed_blocks = get_block_buffer_manager()
            .get_committed_blocks(start_num, max_size)
            .await;

        for block_info in committed_blocks {
            // Step 2: Retrieve the previously executed block data
            // and state changes from the in-memory cache (e.g., pending_blocks).
            // This associates the execution phase results with the commit phase confirmation.

            // Step 3: Atomically persist the block data and state changes
            // to the underlying storage (e.g., RocksDB, Sled).
            // This is the first time the transaction lifecycle data
            // is permanently written to disk.
            Self::persist_block(block_info.num, &pending_blocks, storage.as_ref()).await;

            // (Optional) Step 4: Cleanup.
            // After persistence, remove the committed block data
            // from in-memory caches (pending_blocks) to free resources.

            // Step 5: Advance to the next expected block number.
            start_num = block_info.num + 1;
        }
    }
}
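
As an illustration of Step 3, the sketch below shows what an atomic write could look like against sled, the embedded database used in the startup example later in this guide. The key layout and the bcs encoding of the block record are assumptions made for this sketch, not the repository's actual storage schema.

// Illustrative sketch of atomic persistence with sled. The key layout and
// encoding are assumptions, not the repository's exact storage schema.
fn persist_block_sketch(
    db: &sled::Db,
    block_num: u64,
    block: &impl serde::Serialize,       // the application's block record (hypothetical shape)
    state_changes: &[(String, String)],  // key-value pairs touched by this block
) -> sled::Result<()> {
    let mut batch = sled::Batch::default();

    // Store the block under a numbered key so it can be replayed on restart.
    batch.insert(
        format!("block/{block_num}").as_bytes(),
        bcs::to_bytes(block).expect("block serialization"),
    );

    // Store the final value of every key this block touched.
    for (key, value) in state_changes {
        batch.insert(format!("state/{key}").as_bytes(), value.as_bytes());
    }

    // Applying a single batch keeps the block and its state changes atomic.
    db.apply_batch(batch)
}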

At this point, a transaction’s full lifecycle—from submission, to consensus on ordering, to execution, and finally to durable persistence—has been fully completed and permanently recorded on-chain.

5. Step 4: Assembly & Startup

The final step is to integrate all implemented modules (TxPool, Executor, Committer, Storage, etc.) and launch the blockchain node.

A typical startup workflow is as follows:

  1. Initialize Core Modules: Load node configuration, initialize storage, load the genesis block to establish the initial state, and create the transaction pool (Mempool) instance.

  2. Start the Consensus Engine: Using the node configuration and the TxPool instance, initialize and start the ConsensusEngine. This is the heart of Gravity SDK.

  3. Launch Background Tasks: Use an asynchronous runtime (e.g., tokio::spawn) to start the previously defined execute_task and commit_task as independent background services.

  4. Start RPC Service (not detailed in the example): Launch an RPC server to listen for external requests and inject transactions into the TxPool (a minimal sketch appears after the startup code below).

  5. Block the Main Thread: Keep the main thread alive to ensure the node runs continuously.


Example: KvStore Startup Code

#[tokio::main]
async fn main() -> Result<()> {
    // 1. Initialize storage, genesis state, etc.
    let storage = Arc::new(SledStorage::new("blockchain_db")?);
    let blockchain = Blockchain::new(storage.clone(), "genesis.json");
    let state = blockchain.get_latest_state();
    let mempool = KvStoreTxPool::new();
    let pending_blocks = Arc::new(DashMap::new()); // Shared data between execution and commit tasks

    // 2. Initialize and start the Gravity SDK Consensus Engine
    // Pass configuration, networking components, and our TxPool implementation
    let _consensus_engine = ConsensusEngine::init(
        ConsensusEngineArgs { ... },
        Box::new(mempool), // Instance implementing the TxPool Trait
    ).await;

    // 3. Launch execution task
    // `start_num` is the next block number to process (typically resumed from the last persisted block).
    let state_clone = state.clone();
    let pending_blocks_clone = pending_blocks.clone();
    tokio::spawn(async move {
        PipelineExecutor::execute_task(start_num, None, state_clone, pending_blocks_clone).await;
    });

    // 4. Launch commit task
    let pending_blocks_clone2 = pending_blocks.clone();
    tokio::spawn(async move {
        PipelineExecutor::commit_task(start_num, None, storage, pending_blocks_clone2).await;
    });

    // 5. Block the main thread to keep the node running
    // Typically, the RPC service would be launched here and wait for termination signals
    // ...
    Ok(())
}
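
The RPC service from step 4 of the workflow above is not shown in this excerpt. Below is a minimal sketch of what a submission endpoint could look like using axum; the route, the SubmitTxn payload, the shared Arc<KvStoreTxPool> handle, and the add_transaction method are all assumptions for illustration. In practice the pool's inner mempool would need to be shared (for example behind an Arc) so that both the RPC layer and the consensus engine can reach it.

// Minimal sketch of an RPC submission endpoint built with axum.
// `SubmitTxn`, `add_transaction`, and the Arc<KvStoreTxPool> handle are
// assumptions for illustration; the real service shape depends on the repository.
use axum::{extract::State, routing::post, Json, Router};
use serde::Deserialize;
use std::sync::Arc;

#[derive(Deserialize)]
struct SubmitTxn {
    key: String,
    value: String,
}

async fn submit_txn(
    State(pool): State<Arc<KvStoreTxPool>>,
    Json(req): Json<SubmitTxn>,
) -> &'static str {
    // Wrap the request into the application's transaction type and hand it to the pool.
    pool.add_transaction(req.key, req.value); // hypothetical pool method
    "accepted"
}

async fn run_rpc(pool: Arc<KvStoreTxPool>) {
    let app = Router::new()
        .route("/submit_txn", post(submit_txn))
        .with_state(pool);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8545").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}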

Multi-Node Cluster Configuration

When deploying a multi-node network, each validator node requires its own configuration file (validator.yaml), private keys, and an initial waypoint to ensure that all nodes boot from the same trusted genesis state. Refer to Gravity SDK’s official documentation for details.

6. Data Flow Recap: The Complete Lifecycle of a Transaction

Let’s use a simple key-value transaction to illustrate the full process:

  1. Submission: The user calls the RPC interface to submit a transaction Set("key", "value").

  2. Pooling: The RPC service receives the request, wraps it into the application's native transaction object, and inserts it into our KvStoreTxPool implementation, where it waits to be picked up by consensus.

  3. Ordering Consensus: The Gravity SDK ConsensusEngine pulls this transaction from the KvStoreTxPool (via the best_txns method), packages it together with other transactions into an ordered block, and reaches consensus across the network on its contents and order.

  4. Execution: The execute_task retrieves the ordered block from the BlockBufferManager. It deserializes the Set("key", "value") transaction, updates the key-value state in memory, and computes the new global StateRoot. This execution result is then reported back to the BlockBufferManager.

  5. Result Consensus: The ConsensusEngine runs a second round of consensus on the block’s execution result (StateRoot), ensuring that all nodes agree on the outcome.

  6. Committing: Once result consensus is achieved, the commit_task fetches this block from the BlockBufferManager and persists the state changes (the updated "key" value) together with the block information to the disk database.

At this point, the transaction is finalized, and the state update is permanently recorded.

Through this lifecycle, we show how Gravity SDK lets developers leverage its robust consensus framework while seamlessly integrating their custom application logic to build a simple yet fully functional blockchain application.
