Polkadot DA vs competition

I’ve read this before. I didn’t realize this could have been what @burdges was referring to.

My understanding is that the KZG polynomial commitment also proves the data was encoded correctly, so there's no need for fraud proofs. I don't actually understand it well enough to know why it still has to be 2D, though.

You could do a specific KZG trusted setup that bounds the polynomial degree to which you can commit. Yeah, they'd probably do this, given they've such fixed-size validator groups elsewhere.
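For reference, a minimal sketch of what a degree-bounded setup means in standard KZG notation (nothing specific to their parameters): the structured reference string simply stops at degree d, so no prover can form a commitment to anything of higher degree.

```latex
% Degree-bounded KZG setup (sketch): the SRS only contains powers of \tau up to d,
% so committing to a polynomial of degree > d is impossible by construction.
\mathrm{SRS} = \bigl( g,\ g^{\tau},\ g^{\tau^2},\ \dots,\ g^{\tau^{d}} \bigr), \qquad \tau \text{ secret, discarded after the ceremony}

C = g^{p(\tau)} = \prod_{i=0}^{d} \bigl(g^{\tau^{i}}\bigr)^{p_i}, \qquad \deg p \le d \text{ enforced by the SRS itself}
```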

Above, I confused myself thinking about a different situation, because elsewhere this does not always suffice (universal SNARKs, recycled setups, etc.)

It’s more suspect yes, but…

You’d still have fraud proofs about actual block contents, not just about encoding correctness. I think the original fraud proofs paper skirted this by having a set of validators who checked the erasure-encoded block, and you assume one of them is honest and unDoSable, but you cannot assume ETH validators check these rollups.

Yes, that’s my understanding as well. Otherwise, what would be the point of doing KZG commitments instead of just a merkle root?
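For context, my understanding of vanilla KZG (a sketch, not their exact construction): the commitment binds to one low-degree polynomial, and each opening is a single group element verified with one pairing check, so any sampled evaluation is automatically consistent with a valid codeword, which a plain Merkle root of chunks cannot give you.

```latex
% KZG opening check (sketch): prover claims p(z) = y and sends witness \pi = g^{q(\tau)}
% where q(X) = (p(X) - y)/(X - z). The verifier needs only C, \pi, z, y and one pairing equation:
e\!\left( C \cdot g^{-y},\ g \right) \;=\; e\!\left( \pi,\ g^{\tau} \cdot g^{-z} \right)
```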

Normally a 2D scheme is only helpful for making fraud proofs smaller, but the reason for Danksharding is mentioned here: https://www.youtube.com/watch?v=e9oudTr5BE4&t=1243s

Anyway, I agree with Sofia we should explore the avenue of unbundling the DA of Polkadot to allow others to use it, be it Rollups on Ethereum or something else.


It depends what you mean by “explore”, of course…

It’s true you could avoid one big builder by having separate encoder nodes handle different data on different rows, distribute those chunks, then encode the columns from those chunks, and finally distribute the column chunks. We do this by having different backing groups.
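A toy sketch of that row-then-column structure, with XOR parity standing in for real Reed–Solomon so it stays self-contained (actual schemes use RS over a larger field to double each dimension, and the row and column encoders would be separate nodes):

```rust
/// Toy 2D "erasure coding": XOR parity stands in for Reed-Solomon.
/// Row encoders each extend one row independently; a column encoder then
/// extends every column of the row-extended matrix, so no single node
/// needs the whole blob to produce the row extensions.

fn xor_parity(chunks: &[Vec<u8>]) -> Vec<u8> {
    // Parity chunk = XOR of all equal-length data chunks (one parity symbol).
    let len = chunks[0].len();
    let mut parity = vec![0u8; len];
    for chunk in chunks {
        for (p, b) in parity.iter_mut().zip(chunk) {
            *p ^= *b;
        }
    }
    parity
}

fn main() {
    // 2x2 grid of data chunks, e.g. two rows handled by two different encoder nodes.
    let data: Vec<Vec<Vec<u8>>> = vec![
        vec![b"row0-c0".to_vec(), b"row0-c1".to_vec()],
        vec![b"row1-c0".to_vec(), b"row1-c1".to_vec()],
    ];

    // Step 1: each row encoder extends its own row with a parity chunk.
    let extended_rows: Vec<Vec<Vec<u8>>> = data
        .iter()
        .map(|row| {
            let mut r = row.clone();
            r.push(xor_parity(row));
            r
        })
        .collect();

    // Step 2: a column encoder extends each column of the row-extended matrix.
    let cols = extended_rows[0].len();
    let parity_row: Vec<Vec<u8>> = (0..cols)
        .map(|c| {
            let column: Vec<Vec<u8>> = extended_rows.iter().map(|r| r[c].clone()).collect();
            xor_parity(&column)
        })
        .collect();

    // Full grid = row-extended rows plus the column-parity row; each cell is
    // one chunk to distribute, one holder per cell.
    let mut grid = extended_rows;
    grid.push(parity_row);

    for (i, row) in grid.iter().enumerate() {
        for (j, chunk) in row.iter().enumerate() {
            println!("cell ({i},{j}): {} bytes", chunk.len());
        }
    }
}
```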

Also, we discussed doing “parity bit parachains” given by erasure coding across the erasure-coded chunks, which links availability across parachains. It’s overkill though, since we’re already using the 2/3rds honesty assumption in Polkadot.

Anyone know how they actually distribute the chunks? Is it one validator per square in the grid? If so, then I suppose this video answers my concern about the pairings needed to know if you have your piece:

I’d guess each validator only verifies one chunk for its position in the grid, or an aggregated thing if they’ve more data. This is what he means by fewer samples. There are column constructor nodes who must do them all, but likely this can be batched for some savings.

Ain’t clear how they handle malicious row or column constructor nodes. Also, they’ve higher total storage costs (4x) vs Polkadot (3x). It’ll still be much heavier on CPU time than what we do, although less bad than I initially projected.
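Back-of-the-envelope for those numbers, assuming the 2D extension doubles each dimension of the data square and Polkadot splits the blob into f+1 systematic chunks expanded to n = 3f+1:

```latex
% 2D RS extension: a k x k data square becomes 2k x 2k, so stored data is
\frac{(2k)^2}{k^2}\,|B| \;=\; 4\,|B|

% Polkadot: |B| split into f+1 chunks of size |B|/(f+1), expanded to n = 3f+1 chunks, so
\frac{3f+1}{f+1}\,|B| \;\approx\; 3\,|B|
```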

It’s less total bandwidth if you avoid approval checkers, but you’ve not proven validity for the contents, so you’re into optimistic rollup land, and require waiting periods, refunds, etc.

As I said above, we already have mechanisms, and other nice ideas, for lowering security to optimistic-rollup-like levels, so your first question should be “How does messaging look if you put optimistic rollups on Polkadot?”

An entertaining academic question: How do approvals change if we used Celestia-style 2D RS, with entwined parachain availability like ETH maybe does? We’d lose availability before inclusion, but it’s less bad than depending purely upon approval checkers, without doing any availability.

This paper may be relevant (it’s used in Espresso DA) https://eprint.iacr.org/2021/1544.pdf : “Information Dispersal with Provable Retrievability for Rollups”

It requires f+1 KZG vector commitments to be made by the encoder and each receiving node needs to compute 1.
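My reading of how that works (a sketch, not the paper’s exact notation): the commitments are homomorphic, so the commitment to any coded chunk is just the RS linear combination of the f+1 source commitments, and that derived commitment is the single computation each receiving node does to check its chunk.

```latex
% Sketch: encoder publishes commitments C_1 .. C_{f+1} to the source columns.
% Node j receives coded chunk m_j = \sum_i \lambda_{ij} m_i (RS coefficients \lambda_{ij})
% and checks it against the homomorphically derived commitment:
\mathrm{Com}(m_j) \;\stackrel{?}{=}\; \prod_{i=1}^{f+1} C_i^{\,\lambda_{ij}}
```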


Yes! Can you (+ whoever else) share your diagram/bullet points on a high-level design of how to unbundle and rebundle the “full ORU [OP Stack and/or Nitro] on Polkadot 2.0”, something like this:

  1. L2 Shim of ORU into Substrate: [insert plan with Gossamer+OP Stack vs Polkadot SDK … ]
  2. DA: Adapt Avail’s Substrate into Polkadot SDK, but add { … }
  3. ORU Fault Proofs: Scrap Frontier+Use L2 Shim of (1) on Polkadot L1 to run Cannon Fault Proofs by { … }
  4. Messaging: Implement OP Stack Message Queues, to be XCMP compatible with {…} but put XCM in L2 Shims of (1) by { … }
  5. L1+L2 Liquidity Bridge: Use (4) with Assethub, deposits+withdrawals work by { … }

If you believe it’s impossible, I recommend you suppress your inner critics (“Rust>Go”, “our DA is linear in # cores!”, “ORUs are low security!”, … “It’s Frankenstein!”) and your inner desire to innovate – you’re a code surgeon doing a life-saving transplant operation and don’t want to mess with the organs unless you absolutely have to. You can chart a different “post-transplant” plan with CoreJam, better DA, … later.