ZK Rollups as Parachains

ZK Rollups are an interesting scaling mechanism, and over the past couple of weeks I’ve been considering how we might bring them to Polkadot as well. Here is a description of an MVP supporting ZK Rollups at the consensus layer by changing the execution cores mechanism slightly.

While I don’t think that ZK rollups are a one-size-fits-all solution (proving is expensive, and the blockspace is not as flexible as Wasm), this is an interesting experiment worth prototyping.

Execution cores in Polkadot currently work as follows:

  1. assignments of parachains to cores are determined in advance
  2. collators for assigned parachains create blocks, sending them to backing validators
  3. backing validators check the blocks, and then produce a quorum allowing the parablock + metadata to be posted on-chain
  4. data is made available
  5. approval-checking leads to further checks for finality

This mechanism could be slightly modified to support ZK-Parachains:

  1. same
  2. same
  3. the parablock is posted on-chain, and the ZK circuit encapsulating the parachain logic evaluates the parablock and checks its outputs
  4. data is made available
  5. ZK parablocks are skipped, as the parablock is already valid
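A toy sketch of the modified flow above (steps 3 and 5): every validator runs a cheap verifier against the posted parablock, so no separate approval-checking phase is needed. A hash stands in for a real SNARK/STARK verifier here, and all names are hypothetical:

```python
import hashlib

def prove(pre_state_root: str, post_state_root: str) -> str:
    # Toy "proof": a hash binding pre- and post-state roots. A real ZK
    # circuit would instead prove the state-transition function was applied.
    return hashlib.sha256((pre_state_root + post_state_root).encode()).hexdigest()

def verify_proof(pre_state_root: str, post_state_root: str, proof: str) -> bool:
    # Cheap check every relay-chain validator can run when the parablock
    # is posted on-chain, making a later approval-checking round redundant.
    return proof == prove(pre_state_root, post_state_root)

parablock = {"pre": "root-a", "post": "root-b"}
proof = prove(parablock["pre"], parablock["post"])
assert verify_proof(parablock["pre"], parablock["post"], proof)
assert not verify_proof(parablock["pre"], parablock["post"], "bogus")
```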

With further improvements, such as exposing the Data-Availability layer as a service independent of execution cores, this flow could be further minimized and would easily enable things such as L3s.

Super interesting post! I also believe this idea has great potential, especially when/if ZK rollups become universal and performant enough.

For clarification, the main difference between ZK-parachains and the current approach is in the data availability step (step 3), yes? My understanding is that with ZK-parachains the whole parachain block fits in a relay chain block, which allows us to skip the availability and approval steps of the parachain protocol. Every relay chain validator will validate every parachain block by protocol construction.

ZK Rollups still need data availability because the state changes that result from a state transition must be available to the broader network in order to create subsequent state transitions.

The main difference is that we could in theory skip the approval-checking logic for ZK Rollups, as they could be checked by all full nodes when initially posted to the relay chain.

Can you elaborate on what kind of data would need to be available, compared to the PoV which is currently needed for Polkadot? I had thought/hoped that the amount of data that needs to be available would be constant-size for ZK proofs, but it seems like you are implying the data that needs to be made available is similar to what Polkadot needs today.

I’m no expert on ZK Rollups, but I believe it would be smaller than typical parachain PoVs, as it’d basically just be a post-state diff and the block header. Maybe the block body itself too, though that doesn’t seem strictly necessary, although it would be useful for things like block explorers.

I think for ZK-parachains the data that needs to be available won’t be constant-size. Even though the ZK proofs could be the same size (although, to remain ZK-scheme agnostic, it is better to keep it flexible), the transaction data will be different every time.
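The point can be sketched numerically: the proof contributes a roughly constant number of bytes, while the state diff grows with activity. The sizes below are purely illustrative, not from any real scheme:

```python
PROOF_SIZE = 192  # bytes; illustrative only -- real sizes vary by ZK scheme

def da_payload_size(state_diff: dict) -> int:
    """Rough size of what must go to the data-availability layer:
    a constant-size proof plus a variable-size post-state diff."""
    diff_bytes = sum(len(k) + len(v) for k, v in state_diff.items())
    return PROOF_SIZE + diff_bytes

small = da_payload_size({"alice": "100"})
large = da_payload_size({f"acct{i}": "100" for i in range(1000)})
assert small < large  # DA cost grows with the diff, not with the proof
```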

We do not, imho, need more “rollups” per se, given Polkadot already is an interactive “cut-and-choose rollup” under a Byzantine threat model.

We do otoh need whole block optimizations (WBO), or maybe some future better name, by which I mean tricks which exploit the prover-verifier dynamic of blockchains. Among these, we have ideas like:

Algorithmic WBOs are tricks like verifiers sorting in linear time because the block producer pre-sorts, but at larger scale. We do have some in governance, but they could become more user-friendly, and some maybe demand storage improvements.
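A minimal sketch of this prover-verifier asymmetry: the producer pays the O(n log n) sorting cost once, and each verifier only needs a linear pass to accept the ordering. Names are mine, not from any Polkadot API:

```python
def producer_sort(txs):
    # The block producer does the O(n log n) work once, before posting.
    return sorted(txs)

def verifier_check_sorted(txs):
    # Each verifier only needs an O(n) pass to accept the ordering.
    return all(a <= b for a, b in zip(txs, txs[1:]))

block = producer_sort([5, 3, 9, 1])
assert verifier_check_sorted(block)
assert not verifier_check_sorted([2, 1])
```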

Business logic WBOs avoid problems like MEV by fixing some choices per parablock. As an example, a Uniswap parachain could fix one bid-ask price per pair for the whole block, perhaps requiring that the collator select transactions by running a linear-program solver, but simplifying verifiers.
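The verifier side of that example stays trivial: it only needs to check that every trade in the block uses the block's fixed price for its pair, while how the collator chose those prices (e.g. an LP solve) is irrelevant to verification. A toy sketch with a hypothetical block format:

```python
# Hypothetical block format: each trade records the (pair, price) it executed at.
trades = [
    {"pair": "DOT/USDC", "price": 6.50, "amount": 10},
    {"pair": "DOT/USDC", "price": 6.50, "amount": 3},
    {"pair": "KSM/USDC", "price": 30.0, "amount": 1},
]

def one_price_per_pair(trades) -> bool:
    # The verifier only checks consistency of prices within the block;
    # how the collator *chose* the prices is orthogonal to verification.
    seen = {}
    for t in trades:
        if seen.setdefault(t["pair"], t["price"]) != t["price"]:
            return False
    return True

assert one_price_per_pair(trades)
```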

At a high level, smart contracts are antithetical to WBOs, so parachains can be made fairer than their competitors built upon smart contracts. As logic WBOs often simplify verifiers, they also make adopting zero-knowledge logic proofs easier. In particular, a zkUniswap becomes much faster and simpler if it fixes bid-ask prices for the whole block, given the LP solver is completely orthogonal to the ZKPs.

Cryptographic WBOs mean cryptographic batching, like SnarkPack, Schnorr half-aggregation, all the Plonk batching ideas, etc. ZK roll ups need tools like this, but face other problems, which parachains already solve.

As a rule, WBOs would benefit from actually passing local memory between on_initialize, individual transactions, and on_finalize (or at least having more efficient local storage, or at least having O(1) subtree drops).
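The "local memory between hooks" idea can be sketched as a block-scoped scratch space that `on_initialize` populates, transactions update, and `on_finalize` consumes, avoiding one storage write per transaction. All names here are hypothetical, not the actual FRAME hook API:

```python
class BlockExecutor:
    def __init__(self):
        self.scratch = {}  # block-local memory, discarded after the block

    def on_initialize(self):
        self.scratch["tx_count"] = 0

    def apply_transaction(self, tx):
        # Transactions mutate cheap local memory instead of persistent storage.
        self.scratch["tx_count"] += 1

    def on_finalize(self):
        # Write a single aggregate to storage instead of one write per tx,
        # then drop the whole scratch subtree at once.
        count = self.scratch["tx_count"]
        self.scratch.clear()
        return count

ex = BlockExecutor()
ex.on_initialize()
for tx in ["a", "b", "c"]:
    ex.apply_transaction(tx)
assert ex.on_finalize() == 3
```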

I think with the notion of Data Availability as a Service we can focus entirely on ZK rollups that live on top of parachains.

That said, I think that there are a few not-specifically-technical reasons to support this at the base layer:

  • It’s not clear exactly how good ZK rollup scaling is going to get. Future-proofing Polkadot to be able to adapt to a full ZK scaling approach is a useful strategic hedge, even though Wasm parachains should be notably faster for the foreseeable future.
  • The development focus on ZK Rollups is substantive, and it’d be useful for Polkadot to make inroads at a protocol level for this developer audience. This should help cement Polkadot as an innovation hub, particularly among smart ZK folks.

@burdges I don’t fully understand what you mean by Whole Block Optimizations, but maybe you could make a separate post outlining how that might work from top-to-bottom?

We’ll scale much further than our current design using parallel relay chains, which become secure once you’ve like 1000 random validators per relay chain.

We’ll watch how zk roll ups develop, of course. I kinda suspect special-purpose SNARKs fare much better than general-purpose ones, making the field less accessible, not more. Among the general ones, zkSTARKs need considerable space, so they actually fit nicely within our framework.

We’ll hopefully lure EC zkSNARK dev teams onto Polkadot with Arkworks, as well as with how we’ve already done the less sexy parts, like availability. Yet once they’re here, I’d idealistically expect their best zk roll ups always cost more than parachain slots, driving a pivot towards user privacy.

We’ll see… If VCs foot the bill, and users accept latency, then zk roll ups could hang around even if they really cost more under the hood.
