JAM Services and related system chains

Questions related to JAM services. I have read the Gray Paper and still have questions. Apologies if I missed this elsewhere.

My understanding of JAM is that it will extend services beyond traditional consensus into a whole bunch of other offerings. I think of it a bit like cloud “as a service” models, where you can pick and choose your infrastructure and related services. With that said, I am trying to get an understanding of the exact services JAM and the related system parachains will offer. Right now, I am tracking the following:

  • Parachain consensus
  • Optimistic rollup consensus
  • ZK rollup consensus
  • Asset registration and payments (Asset Hub)
  • Smart contract execution tied to blockchains (Plaza)
  • Bridging (Bridge Hub)
  • Smart contracts and basically any other computer code that can be run as part of a job submission
  • Storage (I remember seeing a post about this here, but I am not sure whether it was approved and is being built. Can someone confirm it is still in progress?)

With all of that said, are there any other plans to add the following services:

  • Fraud proof generation for near-instant optimistic rollup settlement
  • ZK proof generation
  • Data availability (I understand Polkadot is in essence a data availability layer, but I am thinking more along the lines of a specialized service for other chains, or of ways to split the load between the JAM chain and system parachains for further scalability)
  • Transaction bundling and block generation

Also, I know the concept was once brought up that it was theoretically possible to have nested relay chains for further scalability. Is that still a future design consideration for JAM as well? Obviously, with a theoretical low end of 250K reads/writes and the high-end compute, it will not be needed for a long time. But I was curious.


You should check out my SBC 2024 talk.

Polkadot is already a complete roll up that’s provably secure under byzantine assumptions (2/3rd honest + partial synchrony + synchrony for assignment messages). Polkadot parachains have much lower latency for cross-shard communication than any other roll up design, like 10s of seconds.

As a comparison…

Optimistic roll ups are not provably secure, well not under sane assumptions anyway. Instead, they need days or weeks of latency for cross-shard communication.

A zk roll up provides a non-interactive proof under cryptographic assumptions, which makes them stronger than polkadot, which only provides a proof under byzantine assumptions, but…

Zk roll ups cost like 1 million times as much CPU time as polkadot, typically lack decentralized liveness, and could not beat polkadot by much on bandwidth if they added decentralized liveness. Also, zk roll ups have an hour or so of latency for cross-shard communication, due to proof generation.

Also, zk roll ups are mostly non-recursive, so they still need byzantine assumptions like every other blockchain. Mina does recursive proofs, so they actually improve this somewhat.

Bridges provide cross-shard communication after finality, so higher latency than polkadot parachains, but much, much faster than roll ups. Bridges need the byzantine assumptions for all bridged chains though. In particular, cosmos kinda assumes that all zones are 2/3rd honest, which makes them vulnerable vs malicious zones…

Fraud proof generation for near instant optimistic rollup settlement

“Instant optimistic rollup settlement” means within the roll up, but does not mean low-latency cross-shard communication, so you still need a week or whatever for moving funds between shards.

Polkadot detects invalid erasure coding during the approval phase, exactly when it detects invalid blocks, and uses 1d Reed-Solomon here, which gives a byzantine-friendly undecodable ratio. 2d Reed-Solomon based “fraud proof” systems like Celestia have a terrible undecodable ratio, so they consume way more bandwidth in their sampling phase.
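The undecodable-ratio point can be illustrated with standard erasure-coding arithmetic. This is a sketch with assumed, illustrative parameters, not the exact numbers of either protocol: for 1d Reed-Solomon where any k of n chunks decode, an adversary must withhold n - k + 1 chunks; for a 2d scheme extending a k × k data square to 2k × 2k, withholding a (k+1) × (k+1) sub-square already prevents reconstruction.

```python
# Illustrative arithmetic (parameters assumed, not Polkadot's or Celestia's
# exact numbers): fraction of erasure-coded chunks an adversary must
# withhold to make the data undecodable.

def undecodable_fraction_1d(n: int, k: int) -> float:
    """1d Reed-Solomon: any k of n chunks decode, so the adversary
    must withhold n - k + 1 chunks to block reconstruction."""
    return (n - k + 1) / n

def undecodable_fraction_2d(k: int) -> float:
    """2d scheme: a k x k data square extended to 2k x 2k; withholding
    a (k+1) x (k+1) sub-square already prevents reconstruction."""
    return (k + 1) ** 2 / (2 * k) ** 2

# 1d with a 1/3 rate (n = 3k): the adversary needs roughly 2/3 of chunks.
print(undecodable_fraction_1d(n=300, k=100))   # ~0.67
# 2d: roughly 1/4 of chunks suffices, so samplers need many more queries
# to catch withholding with the same confidence.
print(undecodable_fraction_2d(k=100))          # ~0.255
```

The higher the fraction an adversary must withhold, the fewer random samples are needed to detect withholding, which is the bandwidth argument above.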

it was theoretically possible to have nested relay chains for further scalability

Polkadot itself is an off-chain protocol, which validates blockchains and uses one little chain for coordination. You cannot run polkadot on polkadot because polkadot itself is not a blockchain. Aka, nested relay chains have never made any sense.

OmniLedger could scale most decentralized systems, by literally scaling the underlying byzantine assumptions. It requires good randomness, big shards of 1000 nodes, and a higher byzantine assumption of like 80% honest. We’ll do multiple relay chains in this way eventually, but it is not a priority right now.
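A back-of-the-envelope check of why those shard sizes and the 80%-honest assumption matter (all parameters here are illustrative, and the binomial model is an approximation, since real sampling is without replacement): draw a shard of m validators from a population that is 80% honest, and ask how likely the shard falls below the 2/3-honest threshold a BFT shard needs.

```python
# Toy calculation (illustrative parameters): probability that a randomly
# sampled shard falls below the 2/3-honest threshold, using a binomial
# approximation (real shard sampling is without replacement).
from math import comb

def p_shard_unsafe(m: int, p_honest: float, threshold: float) -> float:
    """P[number of honest nodes in an m-node shard <= threshold * m]."""
    cutoff = int(threshold * m)
    return sum(comb(m, h) * p_honest**h * (1 - p_honest)**(m - h)
               for h in range(cutoff + 1))

# With 1000-node shards drawn from an 80%-honest population, dipping
# below 2/3 honest is astronomically unlikely; with 100-node shards it
# is merely unlikely, which is why the shards must be big.
print(p_shard_unsafe(1000, 0.80, 2/3))
print(p_shard_unsafe(100, 0.80, 2/3))
```

This is the sense in which OmniLedger “scales the byzantine assumptions”: a stronger global honesty assumption buys many shards, each of which still satisfies the usual 2/3 bound with overwhelming probability.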

An entertaining question: Is OmniLedger more bandwidth efficient than Celestia?


From that abstract:

“… security relies on the fact that it is prohibitively expensive in expectation for an adversary to make ELVES to accept a block that is not valid.”

Is this proved anywhere?

Otherwise, isn’t it an assumption?


“Provably secure” means proven from some “reasonable” assumptions, with high probability.

We prove how parameters impact an adversary’s success odds in soundness attacks. We then tune parameters so that if each validator’s stake is at least 1/20,000th of the value of what polkadot secures, then a 1/3rd adversary has negative expected returns from soundness attacks.

As in gambler’s ruin, it follows that a 1/3rd adversary who attacks polkadot’s soundness expects to exhaust their DOTs before they succeed, which makes the value polkadot secures irrelevant for the adversary’s success. It might not be irrelevant to the value being secured, of course.
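The shape of that argument can be sketched with a toy expected-value calculation. All the numbers below are hypothetical, not Polkadot’s actual parameters: each attack attempt succeeds with some tiny probability and otherwise gets the attacker’s stake slashed, so the per-attempt expectation is negative unless the prize dwarfs the slashable stake by the inverse of the success probability.

```python
# Toy model (all numbers hypothetical, just to show the shape of the
# gambler's-ruin argument): each soundness attack attempt succeeds with
# probability p_success and otherwise burns the attacker's stake at risk.

def expected_return(p_success: float, prize: float, stake_at_risk: float) -> float:
    """Expected value of a single attack attempt."""
    return p_success * prize - (1 - p_success) * stake_at_risk

# Suppose an attempt slips past the auditors with probability 1e-6, the
# prize is the secured value V, and each failed attempt burns stake s.
# The attack only pays if V > s * (1 - p) / p, i.e. roughly a million
# times the slashable stake with these numbers.
V, s, p = 1e9, 1e4, 1e-6
print(expected_return(p, V, s))  # negative => attacker expects to lose
```

With a negative per-attempt expectation, repeated attempts are a gambler’s-ruin walk: the attacker expects to run out of stake before a success, independently of how much value sits on top.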

In reality, we have the same governance weakness as anything else in capitalism, meaning you could do anything you like if you buy up enough DOTs and vote them. That’s the real vulnerability, but it is not relevant to the soundness analysis. In fact, buying “enough”, meaning half of the DOTs, would still be cheaper than soundness attacks, even while driving up the market price.


I don’t believe that addresses the observation that, in the absence of a proof, there is no basis to claim that it is a fact that something is prohibitively expensive in expectation.

Until such a proof is provided any prudent observer is better served by treating such claims as an assumption.

Finally, I note that even if such a proof emerges there is still a question mark over whether Polkadot could ever achieve that value before the heat death of the universe.

Thanks for the info; however, it doesn’t fully address the questions. I understand that Polkadot has better tech than ethereum. Maybe a bit more background: I was not asking the question because I thought zks, optimistic rollups, etc. were better. I was asking the questions above because what it “looks” like Polkadot is strategically doing is setting up “migration” infrastructure for the ethereum L2s. At some point, ethereum capacity maxes out because it can’t really scale. So if Polkadot has optimistic rollup consensus and zk rollup consensus, then it can be a home for projects that get scaled out. That becomes easier to do if you have fraud proofs, zk solvers, and the other infrastructure in place to enable a seamless transition.

I’d expect they’d stay on Ethereum so long as they accept those technologies’ limitations: one week of latency on optimistic roll ups, and hours of latency and high cost on zk roll ups.

In particular, I think being “scaled out” of ethereum roll ups means they need more than one shard for their own application, which means their own application would pay this inter-shard latency, making it their problem, not just their customers’ problem.

There are no “consensus” advantages to optimistic roll ups or zk roll ups, although zk roll ups improve the threat model slightly. Mina’s self-recursive proofs have a much better “consensus” threat model, but afaik only Mina does this, not ETH zk roll ups.

All serious science & engineering disciplines have their technical language, which exists for good reasons. We do have bad reasons galore in the social sciences and humanities, especially economics, because those disciplines judge work primarily by the social desirability of outputs, but that’s another topic.

We’ll always have people spouting pseudo-science, like faith healers talking about energy, or people talking about fantasies of proofs without assumptions, seemingly what you’re doing, although your language seems intentionally vague, so who knows. All that is noise, not worth anyone’s time.

Anyways…

“Assumption” always means “reasonable not-readily-falsifiable assumption” in the cryptographic sense, and in the protocol design sense. It’s not exactly “non-falsifiable” in some philosophical senses, but instead captures the relevant notion of “beliefs about the world” better.

All non-trivial security proofs have the same structure: reasonable not-readily-falsifiable assumptions yield some probabilistic statement. That’s what everyone means when they say something is “provably secure”. These security arguments are valuable not because they tell you absolute probabilistic truth, but because they isolate where failures can occur. They are damage control, and nothing better is possible.

As [not-readily-falsifiable] assumptions, we have byzantine ones like 2/3rd honest, partial synchrony, full synchrony for assignments, and implicitly that governance leaves the protocol as is, as well as cryptographic assumptions like ROM and CDH, but eventually AGM plus pairing-based ones.

We do prove adversaries expect to lose DOTs when carrying out soundness attacks, under those assumptions. We do employ other hypotheses, like epsilon = 20001^{-1} here, but only readily-falsifiable ones, not assumptions in the usual sense.

All this blockchain world exists because legacy finance rests on larger assumptions. In polkadot, we simply make more aggressive use of almost the same not-readily-falsifiable assumptions required by all blockchains, except Mina, which slightly improves things, but at great cost.

In principle, if you wanted to secure an asset worth 100 B using any blockchain with a market cap of 10 B, then you should implement your own slashing and either propose a merger, in the sense that you ask that blockchain’s governance to require validators stake your asset too, or else make your asset holders hold just enough DOT that the assumption holds. In reality, you have so much money there that you could just “acqui-hire” enough people to deploy your own code fork, which skips some politics. You must implement your own slashing in all cases, which may be politically impossible or not.


I suspect that @Barakion is asking about what’s the moat of JAM, what will it enable in a multi-chain world that won’t be possible without it?

And in the near term what enables Plaza versus Monad?


For context, some resources (not counting the Gray Paper):
https://blog.kianenigma.com/posts/tech/demystifying-jam/

Our tentative insight is that JAM brings a generalised multicore (not sharded, since the processing units and the data processed are specialized in its enclosure) Byzantine fault-tolerant, cryptoeconomic, Sybil-resistant model of data processing with quasi-synchronous composability across cores. It sounds like an enabler of unknown applications, although maybe such generalisation is not needed by any imaginable computational task that cannot be served better by other models. Only time can tell.


or unstoppable systems or present as an established “fact” something that is not.

In general, I’d agree with the pseudo-science observation. In this context it generally goes under the rubric of crypto-obscurantism - a specialization of techno-obscurantism.

Please quote the part you believe means I expect proofs without assumptions, or that you find vague, and I’ll attempt to clarify.