Zero-Knowledge Computation vs. Cryptoeconomic Security: Are Hybrid Models a Sustainable Transition or a Temporary Compromise?

Debate:

With the rapid advancement of zero-knowledge computation and proof aggregation, the “compute once, prove anywhere” model is becoming increasingly feasible. This raises a question: can traditional cryptoeconomic security models, which rely on redundancy, remain sustainable as these technologies improve?

As zero-knowledge proofs continue to drive down the costs of verifiable computation, are we moving toward a future where data availability and proof verification are all that’s required? If so, does redundancy still play a meaningful role, or will it become unnecessary?

Some suggest a hybrid model, combining zero-knowledge proofs with cryptoeconomic incentives, as a transitional approach. But is this truly a solution, or just a temporary compromise before one model ultimately prevails?

2 Likes

Can you further define “compute once, prove anywhere”? We want improvements and for costs to come down; these things free people and allow for further reach globally.

1 Like

Don’t we need redundancy for proof generation too? Doesn’t data availability become pointless if the data can’t be made available in the first place?

To me it seems like, for resilience reasons, we’d want a network of hosts for proof generation too, not just a single centralized and censorable one. Some napkin math: let’s assume we want a redundancy of 10. Currently there are around 1.5 million validators on Ethereum but only around 1,000 on Polkadot. This means the tipping point for Ethereum is a proof computation overhead of 150,000x, but only 100x for Polkadot. So unless I fundamentally misunderstand “compute once, prove anywhere” in this context, it seems highly questionable whether we reach that soon (or ever). Quantum computers have been “increasingly feasible” for decades too, yet here we are.
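To spell out that break-even condition, here is the same napkin math as a tiny Rust sketch. It assumes the only costs that matter are every validator re-executing once versus a handful of redundant provers each paying some overhead over native execution, and it ignores the (cheap) proof verification on the validator side; the validator counts are just the rough figures quoted above.

```rust
// Napkin math: today `validators` nodes each re-execute once; with ZK proofs,
// `prover_redundancy` provers each pay `overhead` times the cost of one native
// execution. Break-even when prover_redundancy * overhead == validators.
fn break_even_overhead(validators: u64, prover_redundancy: u64) -> u64 {
    validators / prover_redundancy
}

fn main() {
    let redundancy = 10;
    println!(
        "Ethereum (~1.5 million validators): break-even at ~{}x proving overhead",
        break_even_overhead(1_500_000, redundancy)
    );
    println!(
        "Polkadot (~1k validators): break-even at ~{}x proving overhead",
        break_even_overhead(1_000, redundancy)
    );
}
```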

2 Likes

Like in Polkadot, you hopefully only accept the state transition when the data is available and not before :wink: Redundancy for proof generation, as you already said, is more about being censorship resistant. On the validation side, censorship resistance is probably better with ZK proofs, as you don’t know which transactions are inside the proof. I would say that the bet is that ZK proof generation gets fast enough that we don’t end up with centralized proving services. Though you could also argue that users right now don’t care, as they are happily using any kind of centralized chain that could also just be a database.

2 Likes

Yeah, not entirely sure what “compute once, prove anywhere” means exactly here. A single (honest) collator is essentially “compute once, prove”? :upside_down_face:

I mean, that’s the bottom line of what I’m trying to depict. Even if we additionally account for 100 parachains with 10 collators each, we are at 2,000 nodes in total. It’s hard to see how anything should become unsustainable in the near future, unless ZK proof computation gets some orders of magnitude more efficient overnight.

I guess that it’s not the same. Validators still need to re-execute the PVF (the program) using the witness to verify the state transition’s correctness. The idea behind “compute once, prove anywhere” is that the program doesn’t need to be re-executed to validate its output. Instead, the output can be verified using a “proof of execution” and the appropriate witness, which eliminates the redundant computation and allows validation without re-execution.
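For concreteness, here is a minimal Rust sketch of the two models as I read them; the function names and signatures are purely illustrative, not actual Polkadot or ZK-library APIs.

```rust
/// Today's model: the validator re-runs the PVF with the witness and checks
/// that it reproduces the claimed output.
fn validate_by_reexecution(
    run_pvf: impl Fn(&[u8]) -> Vec<u8>, // the program, executed locally
    witness: &[u8],
    claimed_output: &[u8],
) -> bool {
    run_pvf(witness) == claimed_output
}

/// "Compute once, prove anywhere": the prover ran the program once and
/// produced a succinct proof; the validator only checks the proof against a
/// commitment to the program plus the public inputs/outputs.
fn validate_by_proof(
    verify_proof: impl Fn(&[u8], &[u8], &[u8]) -> bool, // some ZK verifier
    program_commitment: &[u8],
    public_io: &[u8],
    proof: &[u8],
) -> bool {
    verify_proof(program_commitment, public_io, proof)
}

fn main() {
    // Toy stand-ins: an identity "PVF" and a verifier that accepts anything.
    let a = validate_by_reexecution(|w| w.to_vec(), b"witness", b"witness");
    let b = validate_by_proof(|_c, _io, _p| true, b"commitment", b"io", b"proof");
    println!("re-execution accepted: {a}, proof accepted: {b}");
}
```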

If you don’t need to re-execute the program to verify, then you only need data availability for the witness and proofs, a way to accumulate state in a verifiable manner (that could be local only) and a fork rule. This is assuming that proof generation becomes affordable enough—let’s imagine it can be generated in a reasonable time on a mobile phone… which is not yet the case :slight_smile:
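Under those assumptions, a client could look roughly like the sketch below. Everything in it (the block shape, the running state commitment, the verifier and accumulator hooks) is hypothetical; it only illustrates that nothing beyond proof verification, data availability, local state accumulation and a fork rule is needed.

```rust
// Hypothetical "verify, don't re-execute" client: it needs data availability
// for (witness, proof) material, a ZK verifier, a locally kept verifiable
// state commitment, and a fork rule (only hinted at here).

struct Block {
    parent_commitment: [u8; 32],
    public_io: Vec<u8>, // public inputs/outputs of the state transition
    proof: Vec<u8>,     // proof of execution
}

struct Client {
    state_commitment: [u8; 32],
}

impl Client {
    /// Accept a block only if it builds on our local commitment and its proof
    /// verifies; the program itself is never re-executed.
    fn try_apply(
        &mut self,
        block: &Block,
        verify: impl Fn(&[u8], &[u8]) -> bool,             // assumed ZK verifier
        accumulate: impl Fn(&[u8; 32], &[u8]) -> [u8; 32], // assumed state accumulator
    ) -> bool {
        if block.parent_commitment != self.state_commitment {
            return false; // belongs to a different fork; left to the fork rule
        }
        if !verify(block.public_io.as_slice(), block.proof.as_slice()) {
            return false; // invalid proof of execution
        }
        self.state_commitment = accumulate(&self.state_commitment, block.public_io.as_slice());
        true
    }
}

fn main() {
    // Toy run with a verifier that accepts everything and a trivial accumulator.
    let mut client = Client { state_commitment: [0u8; 32] };
    let block = Block { parent_commitment: [0u8; 32], public_io: vec![1, 2, 3], proof: vec![] };
    let accepted = client.try_apply(&block, |_io, _p| true, |_prev, _io| [1u8; 32]);
    println!("block accepted: {accepted}");
}
```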

Thanks for clarification!

FWIW, while I don’t see ZK computation overtaking cryptoeconomic security in terms of overall computation power (which I see as a proxy for $) just yet, I think verification without re-execution brings other beneficial properties. The verifier is not bound to the same hardware requirements as the executing party. Execution is also easier to parallelize. And there are potential privacy gains, as bkchr already pointed out.

a way to accumulate state in a verifiable manner (that could be local only) and a fork rule. This is assuming that proof generation becomes affordable enough—let’s imagine it can be generated in a reasonable time on a mobile phone

That’s an interesting idea. Can you elaborate on the state part? How would transaction ordering work?

I haven’t thought about it deeply, but it’s an interesting conversation.

I guess that consensus about ordering seems unavoidable, and it needs to be Sybil-resistant (PoW, PoS). So there’s no other option for this problem that I can imagine; but one can, for example, just pin to a PoW chain, like in OpenTimestamps, or maybe use a BFT hash calendar (with these, you not only get ordering but also time, up to some resolution). Still, you’d only need consensus about the ordering, not about the validity of what is being ordered. The client, assuming the ordering is solved by consensus, can verify the transitions and only accept the ones consistent with its local state. Anyway, I’m not sure if this is an improvement over other approaches… wdyt? :smiley:
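Roughly what I have in mind, as a hypothetical sketch: an external service only fixes the order of transitions, and each client verifies proofs itself and folds in only the transitions that apply to its local state. All types and hooks below are made up for illustration.

```rust
// Ordering comes from outside (a PoW chain, a BFT hash calendar, ...);
// validity is checked locally and never assumed from the ordering alone.

struct OrderedTransition {
    public_io: Vec<u8>, // public inputs/outputs of the transition
    proof: Vec<u8>,     // proof of execution
}

/// Fold an externally ordered sequence of transitions into local state,
/// keeping only the ones that verify and actually apply.
fn apply_ordered<S>(
    mut state: S,
    ordered: &[OrderedTransition],
    verify: impl Fn(&[u8], &[u8]) -> bool,  // assumed ZK verifier
    apply: impl Fn(&S, &[u8]) -> Option<S>, // None if it doesn't apply to local state
) -> S {
    for t in ordered {
        if !verify(t.public_io.as_slice(), t.proof.as_slice()) {
            continue; // ordering alone does not imply validity; skip invalid proofs
        }
        if let Some(next) = apply(&state, t.public_io.as_slice()) {
            state = next;
        }
    }
    state
}

fn main() {
    // Toy run: state is just a counter of accepted transitions.
    let ordered = vec![
        OrderedTransition { public_io: vec![1], proof: vec![0] },
        OrderedTransition { public_io: vec![2], proof: vec![] }, // pretend this proof is invalid
    ];
    let final_state = apply_ordered(0u32, &ordered, |_io, p| !p.is_empty(), |s, _io| Some(s + 1));
    println!("accepted transitions: {final_state}");
}
```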