Elastic Scaling

We’ll see if others’ benchmarks match a16z’s press releases.

Relative to native execution of a RISC-V program, how much slower is the Jolt prover today? The answer is about 500,000 times slower.
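To make that overhead concrete, here’s a rough back-of-the-envelope calculation (the 10 ms native runtime is just an assumed example, not a figure from the source):

```python
# At ~500,000x prover overhead, a RISC-V program that runs natively in
# 10 ms (hypothetical workload) takes on the order of:
native_s = 0.010          # assumed native runtime, seconds
overhead = 500_000        # prover slowdown factor quoted above
prover_s = native_s * overhead
print(prover_s)           # 5000.0 seconds, i.e. ~83 minutes of proving
```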

You’ll still have opportunity-cost overhead from “decentralization” too, which the zk rollup guys have ignored so far. As I said above, decentralized provers should nerf any possible cost advantage for zk rollups.

ZK likely has the scale economies to warrant specialized hardware. How much does that change things?

Bitcoin has ASICs in part because SHA-256 never changes. ASICs look high risk while SNARK algorithms are still changing this fast.

Because Binius works over the field GF(2^128) (rather than the scalar field of an elliptic curve like BN254, which is what Jolt uses today), field operations in the sum-check protocol will get much cheaper.
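For intuition on why binary-field operations are cheap in principle, here’s a toy sketch. Note the caveats: Binius actually builds on tower-field constructions rather than the plain GHASH-style GF(2^128) shown below, and all names here are illustrative, but the core point survives: GF(2^128) multiplication is carry-free XOR arithmetic, while a BN254 scalar-field multiply is a full ~254-bit big-integer multiply-and-reduce with carries.

```python
# GF(2^128) with the GHASH reduction polynomial x^128 + x^7 + x^2 + x + 1.
# (Illustrative only -- Binius uses tower fields, not this representation.)
GHASH_POLY = (1 << 128) | (1 << 7) | (1 << 2) | (1 << 1) | 1

def gf2_128_mul(a: int, b: int) -> int:
    # Carryless "schoolbook" multiply: partial products combine with XOR,
    # so there is no carry propagation at all -- cheap in hardware and SIMD.
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        b >>= 1
    # Reduce the up-to-degree-254 polynomial back below degree 128.
    for i in range(acc.bit_length() - 1, 127, -1):
        if (acc >> i) & 1:
            acc ^= GHASH_POLY << (i - 128)
    return acc

# BN254 scalar field order: multiplication here is ~254-bit integer
# multiplication followed by modular reduction, with carries throughout.
BN254_R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def bn254_mul(a: int, b: int) -> int:
    return (a * b) % BN254_R

# x^127 * x = x^128, which reduces to x^7 + x^2 + x + 1 = 0b10000111:
print(gf2_128_mul(1 << 127, 2))  # 135
```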

Awful lot of supposition here: so far those SNARKs have come in much slower, meaning Binius has a lot of catching up to do first.

It’s all kinda irrelevant…

1st) We’ve our own conventional optimizations which we must deliver, whatever the zk rollup people deliver. Aside from conventional optimizations, we’d maybe gain another 2x or 3x by using better threshold randomness too, which complicates our protocol.

2nd) We’ve a much nicer computation model, which simplifies development. And the game is really just to ship applications that real people use.

3rd) In fact, we’ve access to variant computation models too, including some crazy nice oracle schemes, but at much worse cost. I’d wager some Polkadot fork explores those, not Polkadot itself.