Modular Data Availability for the OP Stack with Celestia

The Quantum Gravity Bridge (QGB) will allow OP Stack Rollups to directly leverage the DA guarantees provided by Celestia.

Written by: Javed Khan, Celestia blog

Compiled by: Lujue Lin

Introduction

Since its release last year, the OP Stack has gained significant traction among rollup developers. It is embraced by developers creating new rollups and by modular infrastructure providers like Caldera and Conduit, enabling developers to spin up their own rollups quickly.

As stated in last year's announcement, modularity is a fundamental aspect of the OP Stack vision:

Each layer of the OP Stack is described by a well-defined API, to be filled in by a module for that layer. [...] Want to swap out Ethereum for Celestia as a data availability layer? Sure! Want to run Bitcoin as the execution layer? Why not!

Optimism’s fast-approaching Bedrock upgrade will modularize the OP Stack’s execution layer and proof system, enabling compatibility with future fraud and validity proofs.

Inspired by this, Celestia Labs has been working to push the modularity of the OP Stack even further. So today, we're excited to announce the beta release of the OP Stack's Modular Data Availability (DA) Interface, the first OP Stack Mod submitted to OP Labs for developer feedback. The interface allows developers to define DA layers and inherit security from any blockchain they like, be it Ethereum, Celestia, or Bitcoin.

Developers can start experimenting today with a version of the OP Stack that uses Celestia for DA and "settles" on Ethereum. Caldera will soon release the Taro testnet, the first public OP Stack testnet to use Modular DA, which developers and users can try out.

The data availability layer is the foundation of the rollup architecture, ensuring the availability of the data required to independently verify the rollup chain. Below we cover the basics of data availability in the OP Stack and how we modularized it to publish and retrieve data from L1 behind a well-defined DA interface.

Data Availability in the OP Stack: Today

How does the OP Stack handle data availability today? For our purposes, we'll look at two core components, the rollup node and the batcher, described below.

For a broader understanding of how the rest of the OP Stack works behind the scenes, check out the Optimism documentation.

Rollup node

The rollup node is the component responsible for deriving the correct L2 chain from L1 blocks (and their associated receipts). It retrieves L1 blocks, filters for data transactions (usually in the form of transaction calldata), and derives the correct L2 chain from that data.

Batcher - batch submitter

The batch submitter, also known as the batcher, is the entity that submits L2 sequencer data to L1 so that verifiers can use it. The rollup node and the batcher work in a loop: L2 block data newly submitted by the batcher is retrieved from L1 by the rollup node and used to derive the next L2 blocks.

Each transaction submitted by the batcher contains calldata: L2 sequencer data split into chunks of bytes called frames, the lowest level of data abstraction in Optimism.
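To make the chunking concrete, here is a minimal sketch of splitting sequencer data into frames. It is a simplification for illustration only: the real OP Stack channel/frame encoding also carries a channel ID, frame number, and an is-last flag, which are omitted here.

```go
package main

import "fmt"

// splitIntoFrames chunks a byte stream into frames of at most
// maxFrameSize bytes. Hypothetical simplification of the OP Stack's
// real frame encoding, which adds per-frame metadata.
func splitIntoFrames(data []byte, maxFrameSize int) [][]byte {
	var frames [][]byte
	for len(data) > 0 {
		n := maxFrameSize
		if len(data) < n {
			n = len(data)
		}
		frames = append(frames, data[:n])
		data = data[n:]
	}
	return frames
}

func main() {
	// 250 KB of sequencer data at a 120 KB frame cap → 3 frames.
	frames := splitIntoFrames(make([]byte, 250_000), 120_000)
	fmt.Println(len(frames))
}
```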

Modular DA interface for OP Stack

When creating the modular DA interface for the OP Stack, our goal was simple: to enable rollup developers to specify any blockchain as their data availability layer, be it Ethereum, Celestia, or Bitcoin. In the absence of such an interface, each integration of a new DA layer would require developers to implement and maintain a separate fork of the OP Stack.

The OP Stack already includes abstractions specifying L1Chain and L2Chain in the codebase, allowing us to model a new blockchain-agnostic interface for data availability chains, which we call DAChain.

Using the interface defined below, developers can implement DAChain to read and write data from any underlying blockchain or even a centralized backend like S3.
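A DAChain interface of this shape can be sketched as follows. The method names, the FrameRef fields, and the in-memory backend are all assumptions for illustration, not the integration's actual API; the point is that any backend satisfying the two methods can serve as the DA layer.

```go
package main

import "fmt"

// FrameRef points at previously written frame data on the DA chain.
// Fields are illustrative; the Celestia integration uses a block
// height plus a transaction index.
type FrameRef struct {
	BlockHeight uint64
	TxIndex     uint32
}

// DAChain is a hypothetical sketch of the blockchain-agnostic
// interface: one method to publish a frame, one to fetch it back.
type DAChain interface {
	WriteFrame(data []byte) (FrameRef, error)
	ReadFrame(ref FrameRef) ([]byte, error)
}

// memDA is an in-memory stand-in for a real backend (Celestia,
// Ethereum, or even S3), useful only for local testing.
type memDA struct {
	blobs [][]byte
}

func (m *memDA) WriteFrame(data []byte) (FrameRef, error) {
	m.blobs = append(m.blobs, data)
	return FrameRef{BlockHeight: 1, TxIndex: uint32(len(m.blobs) - 1)}, nil
}

func (m *memDA) ReadFrame(ref FrameRef) ([]byte, error) {
	if int(ref.TxIndex) >= len(m.blobs) {
		return nil, fmt.Errorf("no blob at index %d", ref.TxIndex)
	}
	return m.blobs[ref.TxIndex], nil
}

func main() {
	var da DAChain = &memDA{}
	ref, _ := da.WriteFrame([]byte("frame-0"))
	got, _ := da.ReadFrame(ref)
	fmt.Println(string(got))
}
```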

Write phase

Here's an overview of a Celestia implementation of the interface, showing how it integrates with the batcher:

SimpleTxManager.send, the function responsible for constructing and sending the actual transaction, is modified to call WriteFrame, which writes the frame to Celestia and returns a reference.

The reference is then submitted as calldata to the batch inbox address in place of the usual frame data.
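The modified send path can be sketched roughly as below. The writeFrame stub stands in for the Celestia submission performed via an embedded celestia-node light node; the function names and the "ref:" calldata format are assumptions for illustration.

```go
package main

import "fmt"

// writeFrame stands in for publishing the frame to Celestia via an
// embedded celestia-node light node. Here it just fabricates a
// reference (height 128, tx index 0) for the sketch.
func writeFrame(frame []byte) (height uint64, index uint32, err error) {
	// In the real integration this submits the blob to Celestia and
	// waits for inclusion before returning the pointer to it.
	return 128, 0, nil
}

// sendBatch sketches the modified flow: publish the frame to the DA
// layer first, then submit only the small reference as L1 calldata
// in place of the frame itself.
func sendBatch(frame []byte) (calldata string, err error) {
	h, i, err := writeFrame(frame)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("ref:%d:%d", h, i), nil
}

func main() {
	cd, _ := sendBatch([]byte("sequencer frame bytes"))
	fmt.Println(cd)
}
```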

Read phase

Here's an overview of the Celestia implementation of the interface that integrates with the rollup node:

DataFromEVMTransactions is the function responsible for returning frame data from a list of transactions. It is modified to use the frame reference retrieved from the batch inbox calldata to fetch the actual frame and append it to the return data.

Note that the call to NamespacedData returns a byte slice array of all blobs submitted at the given BlockHeight, so we only return the TxIndex we are interested in.
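The read-side resolution can be sketched as follows. The namespacedData stub stands in for the celestia-node light-client query that returns all blobs under the rollup's namespace at a given height; the type and function names are illustrative, not the integration's real API.

```go
package main

import "fmt"

// FrameRef is the parsed calldata pointer: a Celestia block height
// plus the index of the rollup's transaction at that height.
// Illustrative names, not the integration's actual types.
type FrameRef struct {
	Height  uint64
	TxIndex int
}

// namespacedData stubs the light-node query returning every blob
// submitted under the rollup's namespace at the given height.
func namespacedData(height uint64) [][]byte {
	return [][]byte{[]byte("someone else's blob"), []byte("our frame")}
}

// dataFromTransactions sketches the modified derivation step: each
// calldata entry is parsed as a frame reference and resolved against
// the blobs at that height, keeping only the referenced TxIndex.
func dataFromTransactions(refs []FrameRef) [][]byte {
	var out [][]byte
	for _, ref := range refs {
		blobs := namespacedData(ref.Height)
		out = append(out, blobs[ref.TxIndex])
	}
	return out
}

func main() {
	frames := dataFromTransactions([]FrameRef{{Height: 10, TxIndex: 1}})
	fmt.Println(string(frames[0]))
}
```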

Integrate Celestia as DA layer

*Diagram: OP Stack architecture compared with the Celestia + OP Stack integration.*

With a few minor modifications to the rollup node and the batcher, we can make the OP Stack use Celestia for DA.

This means that all data needed to derive the L2 chain is made available on Celestia as native blob data instead of being published to Ethereum, although a small fixed-size frame reference is still published to Ethereum as batcher calldata. The frame reference is used to look up the corresponding frame on Celestia using a celestia-node light node.

How does the integration work?

Write phase

As mentioned above, the batcher submits L2 sequencer data as chunks of bytes called frames to the batch inbox contract address on Ethereum L1.

We keep the batcher and its calldata transactions to guarantee frame ordering, but we replace the frames in calldata with a fixed-size frame reference. What is a frame reference? It is a reference to the Celestia data transaction that successfully included the frame data on Celestia.

We do this by embedding a celestia-node light node in the batcher service. Whenever a new batch is ready to be submitted, we first submit a data transaction to Celestia via the light node, and then submit only the frame reference in the batcher's calldata.
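A fixed-size frame reference could be serialized as shown below. The 12-byte big-endian layout (8-byte block height plus 4-byte transaction index) is an assumption for illustration, not the wire format the integration actually uses; what matters is that the calldata stays small and constant-size regardless of frame length.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeFrameRef packs a (height, txIndex) pair into the fixed-size
// calldata payload submitted to the batch inbox. Layout is an
// illustrative assumption: 8 bytes height + 4 bytes index, big-endian.
func encodeFrameRef(height uint64, txIndex uint32) []byte {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint64(buf[:8], height)
	binary.BigEndian.PutUint32(buf[8:], txIndex)
	return buf
}

// decodeFrameRef is the inverse, used on the read side to recover
// the pointer into Celestia from batch inbox calldata.
func decodeFrameRef(buf []byte) (uint64, uint32) {
	return binary.BigEndian.Uint64(buf[:8]), binary.BigEndian.Uint32(buf[8:])
}

func main() {
	ref := encodeFrameRef(999_999, 3)
	h, i := decodeFrameRef(ref)
	fmt.Println(len(ref), h, i) // always 12 bytes of calldata
}
```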

Read phase

In the read phase, we do the opposite: we parse the frame reference from the batcher's transaction calldata and use it to retrieve the corresponding frame data from Celestia. Likewise, we embed a celestia-node light node in the rollup node to query for the transactions.

When deriving the L2 chain, the rollup node now transparently reads data from the light node and can continue building new blocks. The light node only downloads the data submitted by the rollup, rather than the entire chain, as an Ethereum node would.

Outlook

Fraud proofs are a key part of Optimism's post-Bedrock roadmap, and we want to explore upgrading the OP Stack x Celestia integration to use fraud proofs on Ethereum mainnet.

To do this, we can leverage the Quantum Gravity Bridge (QGB), which relays DA attestations to Ethereum, enabling on-chain verification that the rollup data is available on Celestia so that it can be used in fraud proofs. This will allow OP Stack rollups to directly leverage the DA guarantees provided by Celestia.
