
Native Rollups: Where they are, and where they are going

Research

May 30, 2025
A joint article by Conor McMenamin and Luca Donno, a Nethermind x L2BEAT collaboration. The views and memes in this article are those of the authors. Originally published on L2Beat’s Medium at this link: Native Rollups: Where they are, and where they are going

Overview

Ethereum’s scalability roadmap has evolved considerably over the years — from Layer 1 sharding to a rollup-centric vision, and now toward something even more ambitious: native rollups. Promising to combine rollup autonomy with deep integration into Ethereum's base layer, native rollups aim to redefine what it means to “scale Ethereum” while retaining its core values of neutrality, modularity, and trust minimization.

This article presents a technically grounded examination of native rollups: what they are, how they aim to reshape Ethereum’s execution environment, and some of the open questions that remain.

In our own words, we present the following sections on native rollups.

  • What native rollups are.
  • The vision behind them.
  • Open questions.

What are Native Rollups?

Native rollups are rollups that replace their state transition function with EXECUTE, a precompile function that is intended to be EVM-equivalent, maintained and upgraded as part of Ethereum’s EVM. EVM-ception!

The initially proposed* EXECUTE precompile takes inputs pre_state_root, post_state_root, trace, and gas_used. It returns true if and only if all of the following hold (a minimal sketch of this check follows the list):

  • trace is a well-formatted execution trace (e.g. a list of L2 transactions and corresponding state access proofs)
  • the stateless execution of trace starting from pre_state_root ends at post_state_root
  • the stateless execution of trace consumes exactly gas_used gas
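To make the interface concrete, here is a minimal Python sketch of the check EXECUTE is intended to perform. The Trace structure and the stateless_evm callable are our own stand-ins, not part of any specification, and the well-formattedness check is crudely approximated.

```python
# Hedged sketch of the EXECUTE precompile's check (not a spec).
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Trace:
    transactions: List[bytes]         # L2 transactions, in execution order
    state_access_proofs: List[bytes]  # witnesses for every state read/write

# stateless_evm is assumed to re-run the EVM over the trace using only the
# supplied witnesses, returning (resulting_state_root, gas_consumed).
StatelessEVM = Callable[[bytes, Trace], Tuple[bytes, int]]

def execute(pre_state_root: bytes, post_state_root: bytes, trace: Trace,
            gas_used: int, stateless_evm: StatelessEVM) -> bool:
    # Condition 1: the trace must be well-formatted (approximated here).
    if len(trace.transactions) != len(trace.state_access_proofs):
        return False
    result_root, result_gas = stateless_evm(pre_state_root, trace)
    # Conditions 2 and 3: correct post-state root and exact gas consumption.
    return result_root == post_state_root and result_gas == gas_used
```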

With respect to running the precompile:

  • Validators can naively re-execute traces to enforce correctness of EXECUTE calls. This is comparable, in terms of compute and validator requirements, to manually updating the state through transaction execution.
  • Alternatively, validators verify a SNARK proving valid execution (we sketch this option below). Note that even when EXECUTE is enforced by SNARKs, no explicit proof system or circuit need be enshrined in consensus, as the precompile does not take any explicit proof as input. Instead, each staking operator may be free to choose their favorite zkEL verifier client(s), similar to how EL clients are subjectively chosen today. Recent debates have been split on whether or not to enshrine proof systems.
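As a rough illustration of the SNARK path, the sketch below shows a validator accepting an EXECUTE call if any of its locally chosen zkEL verifier clients accepts a proof propagated over the p2p network, falling back to naive re-execution otherwise. The interfaces are hypothetical; the proposal does not prescribe them.

```python
from typing import Callable, Iterable, List

# A zkEL verifier client: takes the EXECUTE public inputs and a proof,
# and returns whether the proof is valid. The interface is hypothetical.
Verifier = Callable[[tuple, bytes], bool]

def enforce_execute(public_inputs: tuple,
                    gossiped_proofs: Iterable[bytes],
                    my_verifiers: List[Verifier],
                    reexecute: Callable[[tuple], bool]) -> bool:
    # Accept if any verifier this operator chose to run accepts some proof
    # received over the p2p network.
    for proof in gossiped_proofs:
        if any(verify(public_inputs, proof) for verify in my_verifiers):
            return True
    # No acceptable proof was seen: fall back to naive re-execution.
    return reexecute(public_inputs)
```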

Beyond the EXECUTE precompile, rollups will still need some rollup-specific functionality, particularly to handle bridging, sequencing rules, fees, and token governance, among other things (we discuss these in the Open Challenges section).

*We expect further iterations will be seen before the exact details of EXECUTE are ossified.

Vision (i.e. Where Do Native Rollups Fit in the Ethereum Roadmap?)

Back in 2020, Vitalik published the rollup-centric roadmap, proposing a shift in focus towards supporting rollups as the “short-medium term” solution to Ethereum scaling, while hinting that they could very likely become the long-term solution too, abandoning the “eth2” efforts to scale L1 execution. Quoting the blog post:

This implies a “phase 1.5 and done” approach to eth2, where the base layer retrenches and focuses on doing a few things well - namely, consensus and data availability.

In this vision, all users and projects are encouraged to move to L2s, and L1 would just focus on supporting them. The downside of this approach is that rollups are forced to choose between being trust-minimized but immutable, or introducing a form of governance to enable feature upgrades. Even EVM-equivalent rollups cannot automatically upgrade with L1, which affects their security and neutrality, even in the presence of exit windows. In this context, the EXECUTE precompile and the introduction of native rollups become the obvious way to break the tradeoff and obtain governance-free upgrades for EVM environments.

That being said, Ethereum’s research focus has recently shifted back towards re-prioritizing L1 execution scaling. For this reason, some people have been questioning whether EVM-equivalent rollups, including native rollups, are needed at all, or whether the rollup-centric roadmap was a mistake in the first place.

The first natural question to ask is: why such a shift?

There are clear benefits in moderately scaling L1 to support rollups in aspects like censorship resistance via forced transactions, interoperability, mass exits, and proof verification.

However, after five years, no rollup has been able to provide security or neutrality comparable to what L1 can provide. For similar reasons, some have in the past advocated for an EF-driven or public-good rollup with no token and a plan to be Stage 2 at launch. While rollups are not to blame (…or at least not all of them), given that the necessary tech is genuinely hard to build, competition with other L1s has put more pressure on finding quicker and safer solutions than full reliance on external rollup teams.

The second question is whether L1 can effectively scale to levels comparable to what can be achieved through rollups.

Scaling, in general, is achieved by taking advantage of the asymmetry between raw execution and verification. Rollups accomplish this using ZK for execution and DAS for data. There’s no reason to believe that such improvements cannot be implemented on L1 directly, and efforts are already in progress: today, full nodes verify the L1 chain by downloading all the necessary transaction calldata and re-executing it; some day they will verify ZK proofs and sample blobs.

The Value Add of Rollups

If L1 can be scaled to thousands of TPS, why do we need rollups? We can identify three reasons:

  1. The need to scale Ethereum arrived much earlier than the maturity of ZK tech, when L1 could only support 12-13 TPS of real activity. To avoid enshrining immature proof systems, Ethereum took a “free market” approach, allowing rollups to experiment, develop the tech, and take the risk if some proof system turned out to be suboptimal. This has arguably been a large success, given the number of bugs found and the speed of development seen in ZK tech since its adoption by rollups. While still somewhat experimental, some approaches (like the use of zk-VMs as opposed to zk-EVMs) seem to be consolidating and might be close to becoming production-ready for eventual use on L1.
  2. The only way to go from 10K to 10M TPS is through asynchronous execution shards. This can be achieved through enshrined execution shards on L1, similarly to what NEAR does, or through rollups.
  3. Rollups are custom extensions/testing grounds for the L1. Projects like Arbitrum, Starknet, Fuel Ignition or Eclipse integrate VMs different from Ethereum’s, all with their own advantages, which are infeasible for Ethereum to adopt in the short and medium term, or probably ever.

The role of L1 in this future could then be to serve as the neutral, censorship-resistant glue holding these diverse rollup communities together.

The Value Add of Native Rollups…

Enough about rollups in general, what about native rollups?

… vs the L1 and the Existing Rollup Ecosystem: L1 Security

Ethereum L1 is already seeing significant scaling through its existing rollup ecosystem. Native rollups and their accompanying tech stack stand to add some key benefits that appeal to certain rollup users and deployers. The main value add is the EVM-equivalent EXECUTE precompile, designed to remove the need for rollups to create, maintain and upgrade custom VMs. Aside from some rollup-specific logic, EXECUTE will replace the custom state transition functions that exist across the rollups on Ethereum today. Rollup deployers and users alike will be able to tap directly into the security of Ethereum, knowing that any bug in the EXECUTE precompile will be addressed in the same way as a bug in the EVM itself: through a hard fork. Forking with the L1 isolates rollup risk to smart contract risk, greatly reducing the attack and defense surface for rollups. Secondary effects of this shared EVM include greater user confidence in rollups, easier rollup deployment, and shared developer resources, to name but a few.

… vs L1 Execution Sharding: Autonomy*

Some readers may be unaware, but the benefits listed in the previous section would be almost identical if rollups on Ethereum were replaced with execution shards. Execution shards split the state of the underlying blockchain into parallel environments capable of processing transactions independently. Moving from one shard to another can be done via a coordination layer, which would resemble the L1’s execution layer today. As execution shards are already implemented outside of Ethereum, there is a viable path to adapting this tech for use in Ethereum itself.

What separates the shards from the rollups is the autonomy that rollups enable compared to spawning an execution shard. This was neatly discussed in a pair of posts on the topic of normal rollups vs execution shards. The conclusion therein is that rollups stand as autonomous hubs that retain the right to decide on things like sequencing, bridging, forced inclusion, governance, gas token, etc. More than retaining these rights, the cultural and technological independence afforded to rollups makes them ideal testing grounds for the L1.

*This is a benefit that all rollups can bring.

Open Challenges

OH: Native rollups are a vibe.

To make native rollups a reality, there are still some big open questions. In this section we take a look at some interesting technical challenges and research areas that native rollups uncover.

How exactly native rollups will function is still up for debate. The EXECUTE precompile is just the tip of the iceberg.

Rollup Specific Functionalities

  • Bridging/how transactions are sequenced on the rollup:
    • Local native rollup transactions: We expect this will be handled by an inbox contract on the L1, i.e. this shouldn’t change for native rollups vs current rollups.
    • L1 deposit/signaling & forced inclusion transactions: These are rollup-specific functions currently not handled by the EVM. For example, Optimism needs op-geth, a fork of geth, to handle deposits. Concrete details are lacking for this functionality, but the discussions so far have focused on adding a DERIVE precompile/functionality to the EVM. This was hinted at in the original post: “one can deploy minimal native and based rollups … with a simple derivation function … for special handling of sequencing, forced inclusion, or governance”. The current proposal would require rollups to call this DERIVE function before/during execution, loading in and executing signals from the L1 on the rollup state, e.g. to bridge assets into the native rollup. In this sense, provable calling of DERIVE would be part of the trace generation for native rollups (we give a rough sketch of this flow after this list). One interesting design choice/long-term vision question here is whether:
      • Rollups would need to call DERIVE but L1 would not. In this design, L1 would treat rollups as distinct execution environments, and vice versa.
      • Both the L1 and rollups would need to call the DERIVE precompile when executing transactions, with the L1 state being treated like a rollup (potentially with certain rollup-specific functionalities disabled).

  • Who can send transactions to the inbox: Native rollups have been promised full customizability of both normal sequencing (rules, selection) and forced inclusion. This seems like a classic use-case for a rollup’s DERIVE function.
  • How state transitions are proven: The initial proposal for native rollups was to have every EXECUTE call be accompanied by an off-chain proof, which would be verified as part of the L1’s validity check. To enable this, execution traces need to be made available as DA instead of just transaction data. This is an experimental concept which breaks compatibility with native validiums or state diff rollups, but in theory the benefits of this initial proposal would be to:
    • Remove the need to verify proofs on-chain (expensive).
    • Allow each validator to run their own proof verifier, the same way we have execution client diversity today. This avoids the enshrinement of specific proof systems, a credible neutralizoor’s dream.
    • Avoid the need to fork to fix proof system bugs.
  • Who proves state transitions: In the endgame for native rollups, finalizing a block requires every EXECUTE call in an L1 slot to be proven. The initial proposal leans on the assumption that all of these proofs can be provided in one slot by an altruistic prover, making the system secure under “an altruistic-minority prover assumption.” To limit the proving requirements of this altruistic hero prover, an EXECUTE_CUMULATIVE_GAS_LIMIT is proposed, capping the total EXECUTE gas that can be consumed in a given slot.
  • Will there be rollup-specific fees? In theory, each rollup should consume the same amount of L1 resources for the same set of transactions. However, each rollup may apply unique pre-/post-processing on transactions and/or transaction batches through DERIVE and/or smart contracts, based on the use case of the rollup, e.g. one rollup may only accept encrypted transactions, another may enforce transaction ordering according to some weight function (e.g. priority gas ordering), others may enforce batch execution of some transactions.
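As promised above, here is a rough sketch of how DERIVE might slot into a native rollup’s state transition. Since DERIVE is only loosely specified, every name and interface below (L1Message, derive, apply_message) is our own assumption for illustration, not a proposed design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class L1Message:
    kind: str      # e.g. "deposit" or "forced_inclusion"
    payload: bytes

# Hypothetical derivation step: read signals posted to the L1 inbox and
# turn them into rollup-level state changes / injected transactions.
def derive(pre_state_root: bytes, l1_messages: List[L1Message],
           apply_message: Callable[[bytes, L1Message], bytes]) -> bytes:
    state_root = pre_state_root
    for msg in l1_messages:
        state_root = apply_message(state_root, msg)  # e.g. credit a deposit
    return state_root

# A native rollup block might then be proven roughly as:
#   mid_root = derive(pre_root, messages, apply_message)  # DERIVE step
#   ok       = EXECUTE(mid_root, post_root, trace, gas)   # EVM step
# with the DERIVE step itself being part of the provable trace.
```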

Advantages and Limitations of Traces

The original native rollup proposal adopts full traces with state access proofs as the form of DA for native rollups, as they allow both for stateless re-execution and stateless ZK proof generation. By not enshrining any implementation, L1 nodes can decide which ZK verifier to run, including different implementations of the same proof system, or different proof systems altogether. Multiple proofs must therefore be generated and propagated via the p2p network, so any friction to proof generation can hinder proof diversity. With traces, a diverse set of altruistic (or indirectly incentivized) stateless provers can generate and share proofs for each EXECUTE invocation.

On the other hand, traces with state access proofs are significantly larger than just transaction data, which is the form of DA used by many rollups today. Because every node must download the entire trace, this form of DA is incompatible with sampling (e.g. PeerDAS), preventing the EXECUTE precompile from leveraging blobs. Native rollup transaction fees may thus significantly exceed those of non-native rollups, though the risk-free execution premium can still attract risk-averse users. That said, traces can be posted to blobs if re-execution is made optional, in which case only those nodes that decide to download all blobs would be able to support it. External stateless provers would then be required to fall in this category.

If full re-execution support is dropped, a further step can be taken to replace traces with transaction data, which would exactly maintain today’s DA costs. Since traces would no longer be available onchain, stateful nodes would then be required to generate proofs directly, or to produce and share traces on demand for stateless provers or clients that wish to re-execute statelessly. Altruistic provers, which we assume to be possible only if stateless, would depend on the availability of such nodes, potentially hurting diversity.

A more radical alternative replaces traces with arbitrary data commitments, enabling support for native alt-DA L2s and native state-diff L2s. Under alt-DA, since data availability is not guaranteed, re-execution (which can be seen as a last-resort fallback) becomes impossible, as does permissionless generation of traces for third-party provers. Proof generation and diversity would then rely entirely on the transaction sender. Alternatively, requiring onchain proofs would automatically enforce proof diversity, at the cost of enshrining certain proof systems.
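The tradeoffs in the last three paragraphs can be summarized with a small sketch; the mode names and capability descriptions below are our own shorthand for the options discussed above, not terminology from the proposal.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DAMode:
    name: str
    posted_data: str
    naive_reexecution: str   # can nodes re-execute from the posted data?
    stateless_proving: str   # can third parties prove without local state?
    sampling: str            # compatibility with blob sampling (PeerDAS)

DA_MODES: List[DAMode] = [
    DAMode("full traces", "txs + state access proofs",
           "yes, by every node", "yes, permissionless",
           "no, unless re-execution is made optional"),
    DAMode("transaction data", "txs only",
           "stateful nodes only", "needs traces served on demand",
           "yes, as today"),
    DAMode("arbitrary commitments", "commitment only (alt-DA / state diffs)",
           "no", "relies on the transaction sender",
           "yes, or data kept offchain entirely"),
]
```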

Interoperability

Interoperability has levels. Two levels that illustrate both the benefits and the challenges of native rollups are synchronous execution and synchronous composability.

1. Synchronously Executable: Transactions can be executed at the same time/in sequence.

Native rollups, current rollups, and the L1 all offer the ability to atomically execute transactions at the same instant across their respective states. Guaranteeing the execution of a set of transactions acting on multiple states requires tight sequencer coordination across these states, which is at its tightest when the sequencers for all states are shared. Based and shared sequencing protocols promise to provide synchronous execution of transactions across all of the states they sequence. Native rollups, through customizable sequencing rules, can opt in to being synchronously executable with the L1 and/or arbitrarily many other native rollups.

2. Synchronously Composable (across multiple execution environments): The ability to execute a transaction/transaction batch across multiple execution environments as if they were the same execution environment.

As the L1 is a single execution environment, synchronously composing transactions on L1 is trivial. Things become interesting and quite difficult when we want to synchronously compose transactions across two or more rollups.

Being synchronously composable requires some additional magic on top of synchronous execution, as synchronous execution only speaks to execution timing, not to the state being executed on. For synchronous composability we need sequencer coordination and some snazzy atomic finality tech. At a high level, this tech involves:

  1. The ability to finalize all-or-none of the states that are trying to synchronously compose. AggLayer tries to achieve this through a coordinating layer (a meta-rollup) whose execution decides the finality of the rollups opting-in to AggLayer. Currently this involves rollups provisionally finalizing their state according to their local finality rules, with true finality requiring all of the AggLayer rollups to be provisionally finalized and non-conflicting; in other words, that the inter-rollup accounting has been done in a valid manner. See the AggLayer docs for a more detailed and up-to-date explanation. Note, AggLayer-rollup interoperability doesn’t require a shared sequencer, but synchronous composability does.
  2. Data structures that let composing rollups signal to the coordinating layer which outputs they produced for, and which inputs they expect from, other composing rollups. By collecting the signals from a set of provisionally finalized rollups, the coordinating layer can relatively simply verify whether all outputs and inputs match across all rollups. If the outputs and inputs match, the new rollup states become final. Otherwise, the state updates all revert. We sketch this matching check below.
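As flagged in point 2, here is a hedged Python sketch of the coordinating layer’s all-or-none matching check; the message and state formats are invented for illustration and are not AggLayer’s actual data structures.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class CrossRollupMessage:
    src: str        # rollup claiming to have produced this output
    dst: str        # rollup expecting it as an input
    payload: bytes  # e.g. a token transfer

@dataclass
class ProvisionalState:
    rollup: str
    new_state_root: bytes
    outputs: List[CrossRollupMessage]          # what this rollup sent to others
    expected_inputs: List[CrossRollupMessage]  # what this rollup consumed from others

def finalize_all_or_none(states: List[ProvisionalState]) -> bool:
    """Finalize every state iff the inter-rollup accounting matches exactly:
    every consumed input was produced as an output, and vice versa."""
    produced = Counter(m for s in states for m in s.outputs)
    consumed = Counter(m for s in states for m in s.expected_inputs)
    return produced == consumed  # otherwise all state updates revert
```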

In the synchronous composability paradigm, the coordinating layer would have complete control of how the assets of composing rollups are managed in their respective bridge(s). Although this places a heavy burden on the security of the smart contracts that control the bridge(s), the native rollup promise would be that the execution of the composing and coordinating rollup states would become as reliable as L1 execution.

Risk Isolation: Thanks to the EVM-equivalence of native rollups, execution guarantees of native rollups are as strong as those of Ethereum itself. This isolates the main risks of synchronous composability to bridge security and sequencer selection.

  • Bridge risk: This would be reduced to auditing a smart contract, a contained risk that rollup deployers and users are, in theory, equipped to understand.
  • Sequencer selection risk: A malicious shared sequencer can signal non-matching inputs and outputs to composing rollups, which would cause the state of composing rollups to roll back. Rollups can handle this in a number of ways, including falling back to a default sequencer or disabling composability.

Throughput vs Composability

Scaling Ethereum from a 10k TPS L1 to a >1M TPS ecosystem is possible with rollups or execution shards, when each is treated as an asynchronous execution environment. When composability between rollups/shards is introduced, this throughput drops. Potentially a lot, actually.

Asynchronous execution rollups can execute without any dependencies on other rollup states.

Synchronously composable rollups must depend on the execution of a coordinating layer before their states can be finalized. This is one of several interdependencies that reduce the combined throughput of the system when synchronous composability is utilized.

Consider an L1 with 2 native rollups, all capable of processing 10k TPS, giving the Ethereum ecosystem a max throughput of approximately 30k TPS.

If the two rollups interoperate, creating a temporary joint state, each composite transaction is executed on both states, meaning the cumulative throughput of these two rollups must be less than 20k TPS if any interop happens. Optimistically, the throughput of these rollups may be able to stay close to 20k TPS minus the number of composite transactions per second. However, given that some coordination needs to happen on L1 to finalize the all-or-nothing joint state of the composing rollups, there is also some overhead introduced on the L1.

In the worst-case scenario, where single-threading of the joint state is necessary because of many transactional dependencies across both rollups, the cumulative throughput of the rollups drops to 10k TPS. Coupled with the overhead on L1 introduced by coordinating the composability of the native rollups (back-running the provisionally finalized state of the native rollups), the total throughput of the Ethereum ecosystem will drop to less than 20k TPS.
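To make the arithmetic explicit, here is a rough back-of-the-envelope model of the two-rollup example. It treats the worst case as every transaction being composite and leaves the extra L1 coordination overhead out of the formula entirely, so it is a simplification rather than a real throughput model.

```python
def two_rollup_throughput(per_rollup_tps: float,
                          composite_fraction: float) -> float:
    """Rough combined TPS of two rollups, each capable of per_rollup_tps,
    when a fraction of user transactions are composite (executed on both).

    composite_fraction = 0.0 -> 2 * per_rollup_tps (fully asynchronous)
    composite_fraction = 1.0 -> ~per_rollup_tps    (every tx runs on both)
    """
    # Each composite transaction consumes capacity on both rollups, so the
    # effective work per user transaction grows from 1 to (1 + fraction).
    return 2 * per_rollup_tps / (1 + composite_fraction)

# With 10k TPS rollups: no interop -> 20k TPS; full interop -> 10k TPS,
# before accounting for the coordination load placed on the L1.
print(two_rollup_throughput(10_000, 0.0))  # 20000.0
print(two_rollup_throughput(10_000, 1.0))  # 10000.0
```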

Generally speaking, for every pair of synchronously composing execution environments, if all transactions have composability dependencies, the throughput of the joint execution environment falls back to roughly that of the lowest-throughput execution environment.

When sets of execution environments are synchronously composing, the exact set of transactions will dictate the throughput of the joint system, but it is safe to say (you can bet your bottom dollar on it) that every composite transaction between execution environments reduces the throughput of the shared system. Crucially though, composability of rollups and execution shards only needs to be performed on demand, unlike the L1, which is always composable and thus throughput-limited.

For rollups, the inverse relationship between throughput and composability provides an interesting area of research, both in understanding the exact trade-off and in efficiently enabling the discovery of the throughput vs composability equilibrium point(s), e.g. through a market.
