Bringing True Randomness to Polkadot via Quantum Experiment

TL;DR: There is a realistic way to bring true randomness generated from a quantum experiment to the Polkadot ecosystem. I want to gain insight from the community on whether you think this is valuable for parachains (and beyond).

Quantum Secure Randomness

In a recent discussion I had with NIST’s beacon team, the topic of ‘post-quantum’ randomness beacons came up. Inevitably, randomness based on threshold BLS, VRFs, or other non-PQC schemes will be broken by quantum computing. They weren’t aware of any digital-signature-based beacons using post-quantum schemes; however, they pointed me to a team at the University of Colorado who are working on a quantum random number generator project: https://random.colorado.edu/

I recently met with the team to learn more about what they’re working on. Here’s the physics-phd-required-to-understand version of it and a somewhat more digestible one. In basic terms, it’s an experiment using entangled photons whose output is true randomness that can be algorithmically verified. True randomness, as opposed to pseudorandomness output from existing randomness beacons, is truly unbiased and unpredictable, providing new kinds of security guarantees for protocols that require verifiable on-chain randomness.

DOT Integration

Bringing true randomness to the DOT ecosystem would not only be future-proof, but would also allow less biased and fairer protocols and consensus mechanisms to be developed. It could also showcase the technical superiority of Substrate in being able to bridge quantum computing to web3 systems, and it opens the potential for further collaborations involving quantum technologies.

Potential Partnership

We (Ideal Labs) and the University of Colorado team are both eager to collaborate to further define the cost and scope of this work. Beyond the randomness they generate, we also found many commonalities with some of the future work we have planned (specifically the idea of an ‘entropy mesh’). The experiment is currently funded by NIST and the beacon is free to consume.

I’ve only just started to more deeply research how this could work, so any thoughts/feedback/comments/ideas are very appreciated.

Thank you for sharing such an interesting topic.

How does the verification of true randomness work? As far as I know, the NIST beacons are signed by themselves, so you trust them. What are the trust assumptions in this scheme?

There are only a limited number of devices capable of producing this kind of randomness (I think they have two currently), so there are some potential issues related to centralization (it’s at a very early stage). For verification they mentioned that they have a script that can be run to verify the output of the experiment. The idea is that the script could be re-implemented in Rust and made Wasm compatible, so the output would be publicly verifiable. In that sense it would require similar trust assumptions to this drand pallet that we recently developed (though drand is markedly more resilient/robust for the time being).
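To make that concrete, here’s a rough sketch of what an on-chain verification hook could look like once the script is ported. All names, types, and fields here are purely illustrative assumptions on my part, not the beacon’s actual format:

```rust
/// Hypothetical sketch: the shape an on-chain verifier for the CU Boulder
/// beacon output might take, mirroring how the drand pallet treats pulses.
/// Every type and field name here is an illustrative assumption.

/// A single published beacon pulse as it might arrive on-chain.
pub struct BeaconPulse {
    /// Monotonically increasing round number.
    pub round: u64,
    /// The 512-byte extracted output claimed by the beacon.
    pub randomness: [u8; 512],
    /// Opaque certification data (e.g. Bell-test statistics / extractor seed)
    /// that the re-implemented verification script would consume.
    pub certificate: Vec<u8>,
}

/// Errors the verifier could report.
pub enum VerifyError {
    StaleRound,
    BadCertificate,
}

/// Wasm-compatible verification hook: re-implements the beacon team's
/// verification script so any node can check a pulse before storing it.
pub fn verify_pulse(last_round: u64, pulse: &BeaconPulse) -> Result<(), VerifyError> {
    if pulse.round <= last_round {
        return Err(VerifyError::StaleRound);
    }
    // Placeholder for the actual statistical / extractor checks; the real
    // logic would be ported from the experiment's verification script.
    if pulse.certificate.is_empty() {
        return Err(VerifyError::BadCertificate);
    }
    Ok(())
}
```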

Is the output somehow self-verifiable, without requiring signatures from the sources? If not, wouldn’t some publicly verifiable inputs as sources of randomness (e.g. GPS almanacs) be more secure than trusting signatures over the output of super-specialized equipment that is inherently non-deterministic?

Corrupting the state of GPS would have disastrous consequences since it’s ingrained in the real world, and that’s not the case for the random generators.

It appears their papers do not really describe the protocol; likely you must read the literature they cite, both for the protocol and the underlying security assumptions.

As a rule, new hardware gets attacked in diverse ways, including theoretical ones. Also, you want “post-quantum” randomness, not necessarily “really quantum”, so…

A hash-based commit & reveal is already post-quantum, but the problem is people might disappear and not reveal on time. It’s post-quantum to threshold share the committed value using Reed-Solomon too, which enables schemes beyond just “make everyone reveal on time”.
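For readers less familiar with the construction, a minimal Rust sketch of the hash-based commit & reveal (the Reed-Solomon threshold sharing of the committed value is only indicated in a comment, not implemented):

```rust
// Minimal sketch of a hash-based commit & reveal, which stays post-quantum
// because it relies only on the hash function. Uses the `sha2` crate.
use sha2::{Digest, Sha256};

/// Commit to a value by hashing it together with a fresh random blinder.
fn commit(value: &[u8], blinder: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(blinder);
    h.update(value);
    h.finalize().into()
}

/// At reveal time, anyone can recompute the hash and check it matches.
fn verify_reveal(commitment: &[u8; 32], value: &[u8], blinder: &[u8; 32]) -> bool {
    commit(value, blinder) == *commitment
}

// The liveness problem is that a committer may simply never reveal. One
// mitigation, as mentioned above, is to Reed-Solomon secret-share the
// (value, blinder) pair among validators at commit time, so a threshold of
// them can reconstruct and force the reveal; that sharing step is not shown.
```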

If I were worried about post-quantum randomness, then I’d first ask the lattice-based hash function people. It’s likely this would be a good topic for some PhD student doing lattice protocols?

Also…

We should make Kusama consensus “as post-quantum as reasonable” for a few months, just as an experiment and PR stunt, but this comes down mostly to integrating NIST-approved signature schemes and seeing what they cost us in practice. After a few months, we’d let the community decide if they wanted to revert back to the more performant pre-quantum protocol, or keep post-quantum and keep making PR noise about it.
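To give a rough sense of “what they cost us in practice” on the bandwidth side alone, here is a back-of-the-envelope comparison using approximate published key and signature sizes (ballpark figures only; CPU cost is not captured at all):

```rust
// Approximate published sizes (bytes) of NIST-selected signature schemes
// versus sr25519, to gauge the bandwidth cost of a "post-quantum Kusama"
// experiment. Figures are ballpark; Falcon signatures vary slightly in size.
struct Scheme {
    name: &'static str,
    public_key: usize,
    signature: usize,
}

fn main() {
    let schemes = [
        Scheme { name: "sr25519 (today)",        public_key: 32,   signature: 64 },
        Scheme { name: "Falcon-512",             public_key: 897,  signature: 666 },
        Scheme { name: "Dilithium2 / ML-DSA-44", public_key: 1312, signature: 2420 },
        Scheme { name: "SPHINCS+-128s",          public_key: 32,   signature: 7856 },
    ];
    let baseline = (schemes[0].public_key + schemes[0].signature) as f64;
    for s in &schemes {
        let total = s.public_key + s.signature;
        println!(
            "{:<24} pk={:>5}B sig={:>5}B total={:>5}B (~{:.0}x sr25519)",
            s.name, s.public_key, s.signature, total, total as f64 / baseline
        );
    }
}
```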

Randomness is one part of that story, but likely we’d use commit & reveal for some VRFs, or maybe one-layer hash-based signatures, which are VRFs.

A bit off topic, apologies, but with @burdges in the thread I’ll take the opportunity to refer to this advancement in NITC schemes that looks very promising:

Seemingly it is not only potentially quantum-resistant but also does not require a trusted setup.

Any thoughts about it?

I definitely don’t have all the info on it, but as I understand it the output is publicly verifiable (certifiable, in the literature) by running a Bell test n times; the results are then run through a Trevisan extractor to generate the 512-byte output string of (uniformly distributed) random values. If we could re-implement the extractor in Rust and somehow run it on-chain then we could potentially reduce the trust assumptions, but it doesn’t remove them entirely, since generation still relies on specialized machinery that is hard to access and even harder to maintain/use.
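As a toy illustration only (the real protocol uses loophole-free tests and rigorous entropy accounting, and the Trevisan extraction itself is not shown), the certification step boils down to checking that the recorded measurement statistics violate a Bell inequality, e.g. the classical CHSH bound of 2, before the raw outcomes are fed to the extractor:

```rust
// Toy illustration: a CHSH value S above the classical bound of 2 certifies
// that the recorded outcomes could not have come from a pre-programmed
// (local hidden variable) device. The real beacon additionally does rigorous
// entropy accounting and then runs a Trevisan extractor, none of which is
// shown here.

/// One trial: Alice and Bob each pick a binary setting and record a +/-1 outcome.
struct Trial {
    a_setting: bool,
    b_setting: bool,
    a_outcome: i8, // +1 or -1
    b_outcome: i8, // +1 or -1
}

/// Estimate the CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1).
fn chsh(trials: &[Trial]) -> f64 {
    let mut sums = [[0.0f64; 2]; 2];
    let mut counts = [[0u64; 2]; 2];
    for t in trials {
        let (x, y) = (t.a_setting as usize, t.b_setting as usize);
        sums[x][y] += (t.a_outcome as i32 * t.b_outcome as i32) as f64;
        counts[x][y] += 1;
    }
    let e = |x: usize, y: usize| sums[x][y] / counts[x][y].max(1) as f64;
    e(0, 0) + e(0, 1) + e(1, 0) - e(1, 1)
}

/// Classical strategies cannot exceed S = 2; quantum mechanics allows up to 2*sqrt(2).
fn violates_bell(trials: &[Trial]) -> bool {
    chsh(trials) > 2.0
}
```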

Maybe if the technology were more advanced, requiring less specialized tools for generation + verification, then it could be a more viable tool (so… 10-15 years?). The idea of experimenting with NIST-approved post-quantum sigs in Kusama is really intriguing. I’ve seen various projects/experiments with early implementations, like Algorand’s use of Falcon, or this one that claims to use LWE in some way https://www.abelian.info/en/home/.

I suppose with the lattice approach, LWE can be used to construct a PRF (example), so maybe that’s a good place to start looking?
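Very roughly, and with deliberately toy, insecure parameters, the rounding idea behind such PRFs looks something like the sketch below. This only illustrates the shape of the construction, not a real design such as Banerjee-Peikert-Rosen:

```rust
// Toy sketch of the "learning with rounding" idea behind lattice-based PRFs:
// F_s(x) = round_p( A(x) * s mod q ), where A(x) is derived from the input by
// a hash-based expansion playing the role of a random oracle. Parameters are
// tiny and deliberately insecure; this is an illustration, not a real design.
use sha2::{Digest, Sha256};

const N: usize = 16;    // secret dimension (toy value)
const Q: u64 = 65_537;  // working modulus (toy value)
const P: u64 = 256;     // rounding modulus: output symbols land in [0, P)

/// Expand the input into a pseudorandom row vector over Z_q.
fn expand_row(x: &[u8], i: usize) -> [u64; N] {
    let mut row = [0u64; N];
    for (j, slot) in row.iter_mut().enumerate() {
        let mut h = Sha256::new();
        h.update(x);
        h.update((i as u64).to_le_bytes());
        h.update((j as u64).to_le_bytes());
        let d = h.finalize();
        *slot = u64::from_le_bytes(d[..8].try_into().unwrap()) % Q;
    }
    row
}

/// One output symbol: inner product with the secret, then deterministic rounding.
fn prf_symbol(secret: &[u64; N], x: &[u8], i: usize) -> u64 {
    let row = expand_row(x, i);
    let dot: u64 = row.iter().zip(secret).map(|(a, s)| (a * s) % Q).sum::<u64>() % Q;
    // Round to the smaller modulus; the dropped low-order bits play the role
    // of the noise in plain LWE.
    (dot * P) / Q
}
```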

As a field, “quantum cryptography” seems incredibly divorced from reality. Afaik most work there ranges from “clueless about systems” to “academic grift”. In theory, yes, quantum effects might help you trust your local randomness more, but in reality hardware PRNGs were always major targets for backdoors, so 10-15 years would not be long enough for them to earn people’s trust. :wink:

Interestingly, NIST removed one hash from their LWE proposals, which makes them vulnerable if system randomness is also compromised, meaning the NSA thinks hardware PRNGs shall remain a productive target for their backdoors. I would not drop an unmodified NIST KEM on top of someone’s fancy randomness scheme.

Anyways…

We’ve discovered that Polkadot could have 40% more parachain cores if we do core assignments using threshold randomness, mostly because this hides the adversary’s assignments from themselves until after they commit to doing an attack. That’s great, but it’s not post-quantum using current techniques.

We could secret share the reveals for some commit & reveal scheme, and lattice-based hash functions or even lattice ZKPs might improve this, but afaik anything here still misses the liveness thresholds obtainable via elliptic curve pairings. We’d have to address this anyway if we ever want a “trustless” Bitcoin bridge. In both cases, we’d have problems if too few validators participate honestly during the epoch before they become active.

An awful lot of other people, like drand, are exploring this space, so we can kinda watch & see what others do.
