Demo State Transition Function

This package shows how you can combine modules to build a custom state transition function. We provide several module implementations for you, and if you want additional functionality you can find a tutorial on writing custom modules here.

For the purposes of this tutorial, the exact choice of modules doesn't matter - the steps to combine modules are identical no matter which ones you pick.

Overview

To get a fully functional rollup, you need two related pieces: the State Transition Function ("STF") trait, which specifies your rollup's abstract logic, and the State Transition Runner ("STR") struct, which tells a full node how to run your abstract STF on a concrete machine.

Implementing State Transition Function

As you recall, the Module System is primarily designed to help you implement the State Transition Function interface.

That interface is quite low-level - the only notion it surfaces is a blob of rollup data. In the Module System, we work at a much higher level - with transactions signed by particular private keys. To fill the gap, there's a system called the StfBlueprint, which bridges between the two layers of abstraction.

The reason the StfBlueprint is called a "blueprint" is that it's generic. It allows you, the developer, to pass in several parameters that specify its exact behavior. In order, these generics are:

  1. Context: a per-transaction struct containing the message's sender. This also provides specs for storage access, so we use different Context implementations for Native and ZK execution. In ZK, we read values non-deterministically from hints and check them against a merkle tree, while in native mode we just read values straight from disk.
  2. Runtime: a collection of modules which make up the rollup's public interface.

To implement your state transition function, you simply need to specify a value for each of these generics.

In the remainder of this section, we'll walk you through implementing each of them.
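
For illustration, here's a rough sketch of what specifying these generics can look like once a runtime is defined (as we do in the next section). The type names DefaultContext and ZkDefaultContext, the crate path for StfBlueprint, and its exact generic parameter list are assumptions that may differ between SDK versions:

#[cfg(feature = "native")]
use sov_modules_api::default_context::DefaultContext;
use sov_modules_api::default_context::ZkDefaultContext;
use sov_modules_stf_blueprint::StfBlueprint; // crate path assumed

// Native execution: the Context reads values straight from disk.
#[cfg(feature = "native")]
pub type NativeStf<Da> = StfBlueprint<DefaultContext, MyRuntime<DefaultContext, Da>>;

// ZK execution: the Context reads values non-deterministically from hints
// and checks them against a merkle tree.
pub type ZkStf<Da> = StfBlueprint<ZkDefaultContext, MyRuntime<ZkDefaultContext, Da>>;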

Implementing Runtime: Pick Your Modules

The first piece of the puzzle is your app's runtime. A runtime is just a list of modules - really, that's it! To add a new module to your app, just add an additional field to the runtime.

use sov_modules_api::{Genesis, DispatchCall, MessageCodec, Context};
use sov_modules_api::macros::expose_rpc;
use sov_rollup_interface::da::DaSpec;
// `DefaultContext` is needed by the `expose_rpc` annotation below.
#[cfg(feature = "native")]
use sov_modules_api::default_context::DefaultContext;
#[cfg(feature = "native")]
use sov_accounts::{AccountsRpcImpl, AccountsRpcServer};
#[cfg(feature = "native")]
use sov_bank::{BankRpcImpl, BankRpcServer};
#[cfg(feature = "native")]
use sov_sequencer_registry::{SequencerRegistryRpcImpl, SequencerRegistryRpcServer};


#[cfg_attr(
    feature = "native",
    expose_rpc(DefaultContext)
)]
#[derive(Genesis, DispatchCall, MessageCodec)]
#[serialization(borsh::BorshDeserialize, borsh::BorshSerialize)]
pub struct MyRuntime<C: Context, Da: DaSpec> {
    #[allow(unused)]
    sequencer: sov_sequencer_registry::SequencerRegistry<C, Da>,

    #[allow(unused)]
    bank: sov_bank::Bank<C>,

    #[allow(unused)]
    accounts: sov_accounts::Accounts<C>,
}

As you can see in the above snippet, we derive three macros on the runtime. The Genesis macro generates initialization code for each module, which runs at your rollup's genesis. The other two macros, DispatchCall and MessageCodec, allow your runtime to decode and dispatch transactions, while the serialization attribute tells it which serialization scheme to use. We recommend borsh, since it's both fast and safe for hashing.
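
To see how little work adding a module is, here's the same runtime with one extra field. The SDK ships a small example module, sov-value-setter, which we use here purely for illustration (assuming it's added as a dependency); the derived macros pick the new field up automatically:

#[derive(Genesis, DispatchCall, MessageCodec)]
#[serialization(borsh::BorshDeserialize, borsh::BorshSerialize)]
pub struct MyRuntime<C: Context, Da: DaSpec> {
    #[allow(unused)]
    sequencer: sov_sequencer_registry::SequencerRegistry<C, Da>,

    #[allow(unused)]
    bank: sov_bank::Bank<C>,

    #[allow(unused)]
    accounts: sov_accounts::Accounts<C>,

    // The new module: one extra field is all it takes. The derived macros
    // automatically include it in genesis, call dispatch, and serialization.
    #[allow(unused)]
    value_setter: sov_value_setter::ValueSetter<C>,
}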

Implementing Hooks for the Runtime

The next step is to implement Hooks for MyRuntime. Hooks are abstractions that allow for the injection of custom logic into the transaction processing pipeline.

There are two kinds of hooks:

TxHooks, which has the following methods:

  1. pre_dispatch_tx_hook: Invoked immediately before each transaction is processed. This is a good time to apply stateful transaction verification, like checking the nonce.
  2. post_dispatch_tx_hook: Invoked immediately after each transaction is executed. This is a good place to perform any post-execution operations, like incrementing the nonce.

ApplyBlobHooks, which has the following methods:

  1. begin_blob_hook: Invoked at the beginning of the apply_blob function, before the blob is deserialized into a group of transactions. This is a good time to ensure that the sequencer is properly bonded.
  2. end_blob_hook: Invoked at the end of the apply_blob function. This is a good place to reward sequencers.

To use the StfBlueprint, the runtime needs to provide an implementation of these hooks specifying what should happen at each of these four stages.

In this demo, only two modules need access to the hooks: sov-accounts and sov-sequencer-registry.

The sov-accounts module implements TxHooks because it needs to check and increment the sender's nonce for every transaction. The sov-sequencer-registry module implements ApplyBlobHooks since it is responsible for managing the sequencer bond.

The implementation for MyRuntime is straightforward: we simply delegate to the hooks already provided by sov-accounts and sov-sequencer-registry.

impl<C: Context, Da: DaSpec> TxHooks for MyRuntime<C, Da> {
    type Context = C;

    fn pre_dispatch_tx_hook(
        &self,
        tx: &Transaction<Self::Context>,
        working_set: &mut WorkingSet<C>,
    ) -> anyhow::Result<<Self::Context as Spec>::Address> {
        // Delegate to sov-accounts, which verifies the sender's nonce and
        // resolves the sender's address.
        self.accounts.pre_dispatch_tx_hook(tx, working_set)
    }

    fn post_dispatch_tx_hook(
        &self,
        tx: &Transaction<Self::Context>,
        working_set: &mut WorkingSet<C>,
    ) -> anyhow::Result<()> {
        // Delegate to sov-accounts, which increments the sender's nonce.
        self.accounts.post_dispatch_tx_hook(tx, working_set)
    }
}
impl<C: Context, Da: DaSpec> ApplyBlobHooks for MyRuntime<C, Da> {
    type Context = C;

    fn begin_blob_hook(
        &self,
        sequencer: &[u8],
        working_set: &mut WorkingSet<C>,
    ) -> anyhow::Result<()> {
        // Delegate to sov-sequencer-registry, which checks that the
        // sequencer is properly bonded before the blob is processed.
        self.sequencer.begin_blob_hook(sequencer, working_set)
    }

    fn end_blob_hook(
        &self,
        amount: u64,
        working_set: &mut WorkingSet<C>,
    ) -> anyhow::Result<()> {
        // Delegate to sov-sequencer-registry, which rewards the sequencer
        // once the blob has been applied.
        self.sequencer.end_blob_hook(amount, working_set)
    }
}

That's it - with the Runtime and its hooks implemented, you can plug them into the StfBlueprint and get a complete State Transition Function!

Exposing RPC

Your modules implement RPC methods via the rpc_gen macro. To enable the full node to expose them, annotate the Runtime with expose_rpc. In the example above, you can see how to use the expose_rpc macro on the native Runtime.
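
For context, here's a hedged sketch of what the module side of this looks like. MyModule and its RPC method are hypothetical, the usual module boilerplate (state fields, address, ModuleInfo derive) is omitted, and the exact attribute arguments and import paths may differ in your SDK version:

use jsonrpsee::core::RpcResult;
use sov_modules_api::macros::rpc_gen;
use sov_modules_api::{Context, WorkingSet};

// A hypothetical module, stripped down to show only the RPC surface.
pub struct MyModule<C: Context> {
    phantom: std::marker::PhantomData<C>,
}

#[rpc_gen(client, server, namespace = "myModule")]
impl<C: Context> MyModule<C> {
    // Once the runtime is annotated with expose_rpc, full nodes serve this
    // as the `myModule_queryValue` RPC method.
    #[rpc_method(name = "queryValue")]
    pub fn query_value(&self, _working_set: &mut WorkingSet<C>) -> RpcResult<u64> {
        Ok(42) // placeholder: a real module would read from its state here
    }
}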

Make Full Node Integrations Simpler with the State Transition Runner

Now that we have an app, we want to be able to run it. For any custom state transition function, your full node implementation is going to need a little customization. At the very least, you'll have to modify our demo-rollup example code to import your custom STF! But when you're building an STF, it's useful to stick as closely as possible to some standard interfaces. That way, you can minimize the changeset for your custom node implementation, which reduces the risk of bugs.

To help you integrate with full node implementations, we provide standard tools for initializing and running an app (StateTransitionRunner). In this section, we'll briefly show how to use them. Again, this is not strictly required - just by implementing the STF, you get the ability to integrate with DA layers and zkVMs. But using these structures makes you compatible with full node implementations out of the box.

Using State Transition Runner

The State Transition Runner struct contains the logic for initializing and running the rollup. It has just three methods:

  1. new - which consumes all the dependencies needed for running the rollup.
  2. run - which runs the rollup.
  3. start_rpc_server - which exposes an RPC server.
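
To make that concrete, here's a rough sketch of how these three methods fit together. Every placeholder below (the constructor's arguments, the rpc_methods value, and whether run and start_rpc_server are async) is an assumption rather than the exact API; the real parameter list depends on your DA layer, zkVM, and storage choices:

// A sketch only (not compilable as-is): each placeholder variable stands for
// a dependency whose concrete type comes from your DA layer, zkVM, and
// storage configuration.
async fn start_rollup() -> anyhow::Result<()> {
    // 1. `new` consumes all the dependencies needed to run the rollup.
    let mut runner = StateTransitionRunner::new(
        runner_config,  // placeholder: rollup and storage configuration
        da_service,     // placeholder: connection to your DA layer
        stf,            // placeholder: your StfBlueprint-based STF
        genesis_config, // placeholder: per-module genesis data
    )?;

    // 2. Expose the RPC methods generated by `expose_rpc`.
    runner.start_rpc_server(rpc_methods).await;

    // 3. Run the rollup until it is shut down.
    runner.run().await
}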

Wrapping Up

Whew, that was a lot of information. To recap, implementing your own state transition function is as simple as plugging a Runtime and its Hooks into the pre-built StfBlueprint. Once you've done that, you can integrate with any DA layer and zkVM to create a Sovereign Rollup.