Keyless accounts (AIP-61)

Do you envision the enabled functionality for a dapp/parachain being the OIDC provider, the OIDC consumer, or both?

I’d suggest both as required (see below)

Again, to clarify: by bridges do you mean BEEFY, or does this depend on SPREE or some such forthcoming functionality?

Hence, for the purposes of this exercise, Aptos or some such chain would need to… what?
Implement a BEEFY client? A relay? Something else?

One critical difference in this use case (cf. Google, etc.) is: what happens if the OIDC provider chain fails?
Essentially, there would need to be some protocol or mechanism for the user to gain access to the surviving chain's account, the failing chain obviously being compromised.

Another element peculiar to this use case is the need for each chain to track the Last Respectable Block of its counterpart. SNARKs/STARKs may suffice here, with or without ZK elements. But this is related to providing life rafts, and right now just agreeing to set up escape hatches would be a categorical improvement:

  • BCs that provide escape hatches vs. BCs that do not.

We should make this happen, thanks for bringing this up @taqtiqa-mark and thanks for all the clarifications @alinush

Aside from NEAR, there exists another similar protocol by another team, likely a third independent code base. It uses a TEE instead of the pepper OPRF; I'm not sure what other differences exist. The OPRF should give much stronger “soundness” than the TEE, though I'm not sure if the TEE has some real advantages. It's not clear whether their work was even open-sourced, though.

I’ve not reviewed all the differences, but among the three the Aptos one looks superficially strongest, mostly thanks to the pepper OPRF. I’ll have another researcher look over them too.

IIUC, this makes the security situation for every user far worse:

Now their ability to survive the RC failure is conditional on each dapp or “chain” choosing to offer them an escape hatch (and later, life rafts). I've outlined why relay chains, parachains, and dapps are inclined to try to corral users:

I’m not saying this does not have costly tradeoffs.

I am making sure those tradeoffs are explicitly acknowledged.

Relay chain accounts are purely harmful, and bring zero benefits.

It's unclear what you mean by RC failure: liveness? safety? soundness? everyone getting bored and going home? political conflict brings down the internet permanently?

An optimistic rollup must tolerate soundness failures, but byzantine rollups (Polkadot) make soundness failures impossible anywhere in the system (ditto zk rollups). In principle, soundness failures might completely destroy every account in the system.

A safety failure can only occur on the relay chain. It must be solved by humans choosing the valid fork, after which all parachains must obey this choice.

As a rule, a liveness failure should just recover eventually. In principle, all parachain nodes could simply abandon their work and delete their databases, which makes those accounts inaccessible. This failure mode winds up worse in advanced smart contract patterns too, so there is nothing really unique to Polkadot's sharding here.

We do envision providing erasure-coded state backup so that parachains could be restored even if every parachain node disappears, but if an erasure-coded state backup costs n, then each block on that parachain costs roughly num_tx * log n, so if num_tx were small and the state size were large, then this could look quite expensive. We'll do this for system parachains like AssetHub though, so problem solved whenever this happens.
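As a rough illustration of that cost claim (a sketch with made-up numbers, not Polkadot's actual cost accounting): if a full backup costs on the order of n (the state size) and each block pays roughly num_tx * log n to keep it current, then a chain with few transactions but a large state pays a large log-factor per transaction.

```python
import math

# Illustrative only: units and figures below are hypothetical, not
# Polkadot's actual cost accounting.
def per_block_backup_cost(num_tx: int, state_size: int) -> float:
    """Incremental backup cost per block, roughly num_tx * log2(state_size)."""
    return num_tx * math.log2(state_size)

# A busy parachain amortizes the log-factor over many transactions...
busy = per_block_backup_cost(num_tx=10_000, state_size=2**30)   # 10_000 * 30
# ...while a quiet parachain with a huge state pays a larger factor per tx.
quiet = per_block_backup_cost(num_tx=10, state_size=2**40)      # 10 * 40

print(busy / 10_000, quiet / 10)   # cost per tx: 30.0 vs 40.0
```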

Altogether, there are zero benefits to relay chain accounts over parachain accounts. Yes, some dapps can shut down, but at least Polkadot making them be chains simplifies them having “decentralized liveness”, and we'll do erasure-coded state backup eventually.

As for relay chain accounts being harmful…

If a single verifier has cost x, then a relay chain should cost roughly 1000x, and a parachain should cost 35x from the perspective of relay chain work. In total server work, a parachain costs (35 + c)x, where c is the parachain node count, a liveness parameter, but ignore c from the perspective of relay chain work.

If you add one parachain's worth of work to the relay chain, then you waste 1000/35 ≈ 28.6 cores' worth of CPU time for the whole system (maybe more). If we envision Polkadot having 300 cores, then you've wasted roughly 10% of the entire value of the system. Absolutely unacceptable.
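The arithmetic above checks out directly (treating x, the cost of a single verifier, as the unit; the 1000x/35x figures and the 300-core system size are the ones from this discussion):

```python
# Costs in units of x, the cost of a single verifier.
relay_chain_cost = 1000   # relay chain work per unit of state-transition work
parachain_cost = 35       # relay-chain-side cost of the same work on a parachain

wasted_cores = relay_chain_cost / parachain_cost
total_cores = 300                       # hypothetical system core count
wasted_fraction = wasted_cores / total_cores

print(round(wasted_cores, 1))           # 28.6 cores of equivalent capacity
print(round(wasted_fraction * 100, 1))  # 9.5, i.e. roughly the 10% figure
```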

Apologies for the very delayed response…

Is GitHub - near/pagoda-relayer-rs: Rust Reference Implementation of Relayer for NEP-366 Meta Transactions based upon your code too?

Hm, that seems unrelated to me: it seems like a repo for generating TXNs whose fees are paid by others.

Then I suppose your pepper service would not require a login per application, but simply one login when instantiating some new frontend, right?

So, the way we derive account addresses is application-specific (see here). At a high level:

addr = H(
   "https://accounts.google.com", 
   "sub", "google-user-id-123456", 
   "application-id.xyz"; 
   r)

We let the pepper r be:

r = VRF_sk(
   "https://accounts.google.com",
   "sub", "google-user-id-123456", 
   "application-id.xyz")

To make sure that only that user for that application is able to obtain that pepper, the pepper service asks for a JWT over the right sub and aud (i.e., over "google-user-id-123456", "application-id.xyz").

So, an application that the user signs in to with Google will use (say) our SDK to contact the pepper service in the background, authenticate itself using the JWT, and obtain the pepper for that user's application-specific address.

Another application will do the same.
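The flow above can be sketched end to end. This is a toy model, not the Aptos implementation: HMAC-SHA256 stands in for VRF_sk, plain SHA-256 for H, and all keys and identifiers below are hypothetical; it is only meant to show the data flow.

```python
import hashlib
import hmac

# Hypothetical secret standing in for the pepper service's VRF key sk.
PEPPER_SERVICE_SK = b"pepper-service-vrf-key"

def derive_pepper(iss: str, sub: str, aud: str) -> bytes:
    """r = VRF_sk(iss, "sub", sub, aud); HMAC is a stand-in for the VRF."""
    msg = f"{iss}|sub|{sub}|{aud}".encode()
    return hmac.new(PEPPER_SERVICE_SK, msg, hashlib.sha256).digest()

def derive_address(iss: str, sub: str, aud: str, r: bytes) -> str:
    """addr = H(iss, "sub", sub, aud; r); SHA-256 is a stand-in for H."""
    return hashlib.sha256(f"{iss}|sub|{sub}|{aud}|".encode() + r).hexdigest()

iss, sub = "https://accounts.google.com", "google-user-id-123456"
r1 = derive_pepper(iss, sub, "application-id.xyz")
addr1 = derive_address(iss, sub, "application-id.xyz", r1)

# A second application gets a different pepper, hence an unlinkable address.
r2 = derive_pepper(iss, sub, "another-app.example")
addr2 = derive_address(iss, sub, "another-app.example", r2)
assert addr1 != addr2
```

Because the aud is bound into both the pepper and the address, two applications cannot link the same user's addresses without the pepper service's cooperation.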

Other schemes may be possible; e.g., maybe you could say “I’ll use the same pepper for all that user’s addresses, and ignore the application”.

i.e., no more app ID in the VRF evaluation

r = VRF_sk(
   "https://accounts.google.com", 
   "sub", "google-user-id-123456")

But then any malicious application the user signs into can now fetch the user's pepper and track her activity on-chain.
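To make that tracking risk concrete, using the same hypothetical stand-ins (HMAC for VRF_sk, SHA-256 for H): if the aud is dropped from the VRF input, one pepper r covers every application, so any app that learns r can recompute the user's address under every other app.

```python
import hashlib
import hmac

SK = b"pepper-service-vrf-key"   # hypothetical pepper-service key

def pepper_no_aud(iss: str, sub: str) -> bytes:
    """r = VRF_sk(iss, "sub", sub) -- note: no application ID in the input."""
    return hmac.new(SK, f"{iss}|sub|{sub}".encode(), hashlib.sha256).digest()

def address(iss: str, sub: str, aud: str, r: bytes) -> str:
    """addr = H(iss, "sub", sub, aud; r) as before."""
    return hashlib.sha256(f"{iss}|sub|{sub}|{aud}|".encode() + r).hexdigest()

iss, sub = "https://accounts.google.com", "google-user-id-123456"
r = pepper_no_aud(iss, sub)   # the same r is handed to every application

# A malicious app that learned r can enumerate the user's addresses under
# other (hypothetical) applications and track them on-chain:
addr_elsewhere = address(iss, sub, "some-other-app.xyz", r)
```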

Actually users know their OIDC provider, so no reason this appears on-chain, right?

Yes, they know it. The only reason to leave it public is because it makes the OIDC signature validation logic in the zkSNARK circuit easier:

  1. The validators see what OIDC provider this address has
  2. They previously fetched the JWK (PK) for it in the background via a special-purpose consensus algorithm (see here)
  3. They can now input this JWK (PK) as one of the zkSNARK’s public inputs for the signature verification.

It is possible to hide the OIDC provider: the zkSNARK would take (1) a digest of an authenticated dictionary mapping each provider's iss to their current JWK (PK), maintained by the validators, and (2) a lookup proof w.r.t. this digest that reveals the PK corresponding to the iss in the JWT. It gets hairy, though.
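A plain (non-hiding) version of that authenticated dictionary can be sketched as a Merkle map from iss to JWK; the zkSNARK's job would then be to verify such a lookup proof inside the circuit without revealing which leaf was opened, which is where it gets hairy. Everything below is a hypothetical illustration, not Aptos' actual structure.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def leaf(iss: str, jwk: str) -> bytes:
    """One authenticated-dictionary entry: iss -> current JWK (PK)."""
    return h(b"leaf|" + iss.encode() + b"|" + jwk.encode())

def merkle_root(leaves):
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last leaf if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    """Sibling hashes from leaf idx up to the root (the lookup proof)."""
    proof, level = [], leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[idx ^ 1], idx % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(root, leaf_hash, proof):
    cur = leaf_hash
    for sibling, cur_is_left in proof:
        cur = h(cur + sibling) if cur_is_left else h(sibling + cur)
    return cur == root

# Hypothetical dictionary maintained by the validators:
entries = [("https://accounts.google.com", "jwk-google"),
           ("https://appleid.apple.com", "jwk-apple")]
leaves = [leaf(i, k) for i, k in entries]
root = merkle_root(leaves)   # the digest the zkSNARK would take as input
assert verify(root, leaves[0], merkle_proof(leaves, 0))
```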

Also, the chain & pepper server care little who provides the OIDC, so the only plaintext on-chain should be the pepper server, no? Along with balances or whatever.

Not sure I follow: How can a “pepper server” be a “plaintext”?

Will be very neat to have a way to avoid relying on public keys on chain

I don’t think this can work without mapping each OIDC provider (i.e., the iss) to its current JWKs (PKs): the chain needs ground truth to verify the OIDC signatures against.

As mentioned above, our approach is to implement a special JWK consensus primitive on top of our validators. An oracle approach may work for others, but not for us.

So, if the complexity of the circuit makes it infeasible to run your own prover, it further hinders the privacy of the solution.

You can do client-side proving in Rust in around 20-30 seconds, probably. Depending on the application, this may or may not be okay.

It's true, however, that the (a) pepper server should not necessarily be trusted for account survival.

That would be nice. Possible approaches:

  1. use pepper = 0 (no privacy)
  2. ask users to remember their pepper (bad UX)
  3. decentralize the pepper service on the validators
  4. decentralize the pepper service on some other infra (e.g., League of Entropy-like consortium?)

If you decentralize the provers, colluding on that matter will be very unlikely.

Yes, MPC proving for Groth16 is actually possible rather efficiently. See these works:

  • OB21e, Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets, 2021, Alex Ozdemir and Dan Boneh
  • SVV15e, Trinocchio: Privacy-Friendly Outsourcing by Distributed Verifiable Computation, 2015, Berry Schoenmakers and Meilof Veeningen and Niels de Vreede
  • CLMZ23, EOS: Efficient Private Delegation of zkSNARK Provers, 2023, Chiesa, Alessandro and Lehmkuhl, Ryan and Mishra, Pratyush and Zhang, Yinuo
  • GGJ+23e, zkSaaS: Zero-Knowledge SNARKs as a Service, 2023, Sanjam Garg and Aarushi Goel and Abhishek Jain and Guru-Vamsi Policharla and Sruthi Sekar

@burdges and @alinush can you confirm the context and scope of what you have in mind here? Specifically, my suggestion was in the context of creating keyless accounts between Relay Chains (L1’s outside of Substrate).

I am not sure I understand enough about Polkadot’s relay chains / Substrate to opine. But if you give me a short primer, I can try…

I did not suggest, and do not support, using this to onboard new users from outside the current ecosystems. @alinush I’ll DM you for the reasons for this. But you can see a use case outlined here:

Sure, please do. Or even post here? But what do you mean by “current ecosystems”? I am currently parsing this as “It is risky to set up OIDC-based (keyless) Polkadot blockchain accounts for our users.”

I’ve not reviewed all the differences, but among the three the Aptos one looks superficially strongest, mostly thanks to the pepper OPRF. I’ll have another researcher look over them too.

I think our work is the only one that is (1) privacy-preserving and (2) fully open-source (soon; prover service incoming).

Sure. I'm just suggesting prioritizing the Aptos-Ethereum, Aptos-Polkadot, etc. use cases over the Aptos-Google, Polkadot-Google, etc. use cases.

Hopefully the above clarifies this? “current ecosystems” = Aptos, Polkadot, Ethereum, etc.

No, there is no suggestion it is risky to set up OIDC-based (keyless) Polkadot blockchain accounts for Aptos users.
I am suggesting it is prudent to prioritize setting up Polkadot accounts for Aptos users, and vice versa. Rather than prioritize the Aptos-Google, etc. use cases.

It would be unusual for that to be the case. But I suppose not impossible. I believe there are benefits. Of course, if you don't value those benefits, then yes, they may appear purely harmful. I don't mean to suggest that any of the following are straightforward or currently available.

No. Far less melodramatic:
Say, anything that results in a DOT value (mkt cap) of zero or the neighbourhood of zero. It could be as trivial as someone inventing a better chain, offering the same guarantees as Polkadot at a fraction of the cost, offering the same costs as Polkadot with superior functionality, etc. etc. You could even think of this as the blockchain counterpart to the $5 attack in traditional crypto analysis - something that is assumed away because it is too inconvenient.

Again, to my mind, that is by assumption. That there are benefits, but they may be outweighed by the costs, is a categorically different position.

I believe a more reasonable statement would be that relay chain accounts have costs. Whether those costs outweigh the benefits depends on how you value those benefits. As I understand things there are two benefits to relay chain accounts:

  1. they open the door to developing escape hatches and life rafts for the event that a relay chain fails for any reason: foreseeable and unforeseeable.
  2. they make it far more difficult for cartels/oligopolies/monopolies to emerge by one relay chain (and/or its L2s/parachains) capturing any network effects. The network effects of course will still exist, so the benefits are there, but a rent-seeking model would be far more difficult. But as I warned at the outset of this post, my perspective is considered mistaken at the highest levels of the project.

I agree if you assume there are no benefits, or they are insufficiently valuable to you, then the 10% cost makes little sense.

Others might consider 10% a price worth paying if all we do is avoid cartels/oligopolies/monopolies forming. The possibility of your wealth surviving the failure of another L1 or relay chain might also be attractive at 10%. If both benefits are available, there may be a considerable clientele for such networks?

It may be true that there are more efficient abstractions through which to implement such universal functionality? So I should point out that I did pose the question about whether all this is best handled in BEEFY or some such additional abstraction:

Again, there are no benefits of relay chain accounts over parachain accounts, because they have the same threat model for safety and soundness. That’s why the system exists.

If the DOT valuation collapsed, then relay chain accounts have no value anyways, both by definition and because our assumptions break. I suppose non-system parachains could hypothetically fork off into independent chains, making this scenario a (very) theoretical win for parachain accounts. None of this matters.

I don’t believe I’ve suggested otherwise. My suggestion is aimed at something that is assumed away - as I pointed out.

You are correct, that does mean it is outside the threat model. That is not the same as being irrelevant in practice.

Isn't the universe broader than DOT? Take, say, APT. But let's take a DOT-centric perspective. Isn't there some point in DOT-time where the APT chain could consider DOT-block data reliable, and unreliable after that?

Again, I’m not suggesting these issues are trivial to resolve. As you say, the question that prompts them, has generally been assumed away.

I work from a different premise - this does matter.
Working from different premises naturally we land at different end points and make different trade-offs.

You've used the pronoun “it” a lot here, never giving even one concrete harm that'd befall a parachain account but not a relay chain account.

No. We never unfinalize anything, unlike, say, optimistic rollups on ETH. If a relay chain block is finalized, then every parachain block included in or before it becomes final too.

Also, bridges never need much historical data anyways: maybe an hour or two if automated, maybe a few days if humans must act on both sides of the bridge. We specifically optimize the Merkle proofs in our bridges to be cheap for recent blocks, but expensive historically.


As a parachain liveness concern, we do not store parachain data forever in the availability store or elsewhere. As I said, we’ll add state erasure coding eventually but only for recent parachain state checkpoints, not history.

We’ll never store much relay chain history on validators either, not even erasure coded, so realistically there is no difference here. We require the relay chain and parachains provide their own archive nodes for whatever timeframe users expect historical data, but this never impacts system operation.

It's largely bullshit magical thinking that blockchains provide some useful permanent records. At some point, the past is the past and nobody cares. The sooner the better, but being sooner requires one to account for why humans access historical data. Amusingly, a demurrage would save people accessing information for their taxes.

I don’t believe I ever suggested that. While I was clear, the conversation has moved on and the context isn’t always easy to track:

This statement of mine was inartful:

That is and was my understanding. It would have been better for me to say:

Given time is measured in blocks, isn't there some point in DOT-time where the APT chain could consider DOT-data reliable, and unreliable after that? For example, when building an APT block, its protocol may consider the last-respectable-block for Polkadot to be number 123.

Your remaining remarks are in line with how I understand things to work. Thanks for taking the time to set them down; it is useful to validate understanding from time to time.
As I remarked a couple of times, I do not suggest offering failover will be trivial. There are likely many devilish details - but it seems reasonable for the chain doing the tracking to store data it wants/needs.

We can predict such default L0/L1 functionality will be attractive to new entrants that will have the incentives to offer this (they have more to gain than lose), and incumbents will resist (they have more to lose than gain) - as you point out Dotsama stares at a 10% cost for functionality that makes it easier for its users to setup accounts on a competitor network. To my mind this actually makes Dotsama less risky to commit to - the risk of lock-in is lower - than if OIDC (keyless) account feature is not present.

This suggests building in OIDC (keyless) accounts as part of the protocol (which element of the protocol?) - making it more difficult to build moats around users.

Those words like “security situation” and “reliable” should mean one of the more formal security properties.

A block is either finalized or not. If finalized then everything previous is finalized. State roots are committed to in blocks. Any bridge just trusts finality.

As an aside, blockchains have little or no “post-compromise security” or “self-healing” properties of the sort messaging apps provide. In bitcoin, they only really claim social consensus prevented past attacks, meaning they ignore any already in their chain, but “interesting” blockchains would impose additional assumptions too, like all past validator sets being honest, all trusted setups being honest, etc. Anyways, messaging apps do this in part to make up for their lack of authentication, but authentication is exactly what blockchains do.

Agreed. But aren’t we outside the (current) formal model? (Not that it cannot be extended.)
Here what “reliable” means is whatever the fail-over/fallback chain decides is fit for its purpose. The upstream chain’s notion of “Finalized”, in this context, may not be enough.

That was my understanding, and I believe the docs I read were clear on this point. However, IIUC the bridge setup requires a relay party. Perhaps the downstream chain provides a relay that imposes the downstream requirements? Anyway, this is getting into questions of which architecture is optimal wrt the 10% cost you pointed to.

Agreed. That has been my working assumption.

While here we have been concentrating on the safety motivation/use case, I believe the more common and valuable use case is the “low-friction” provision of cross-RC accounts as a way of fostering competition. Here, building this into the protocol definition becomes controversial - it is only justified if you take a particular opinionated position on the costs vs. benefits.

Nothing you’ve said suggests this.

That makes no sense.

We have a threat model where we argue security and debug the software. We're lucky if the code even works right there. We do outside things too which we believe maintain that threat model, like staking and slashing mechanics, but we accept these have a human component, keep them simple, and suffer when debugging them.

There is no point in even having software after this point, meaning software with no relationship to the threat model, because one cannot debug limitless fallback scenarios. It's just stupid busy work.

Instead, the real “fallback” becomes human consensus.

This is all nonsense & magical thinking.

Finalized means reversions cannot happen within the protocol, including bridges, period. Human consensus can make other choices, but that's outside the protocol.

The simple facts are:

If a parachain develops problems, then our shared security simplifies salvaging something, whereas bridges have already failed in, say, Cosmos. After we have state erasure coding, then humans could salvage something even when, say, all collators disappear.

If, on the other hand, the relay chain goes away, then: again, parachains could be salvaged, even as independent chains, but relay chain tooling no longer helps. Accounts on the relay chain have zero advantages here, and likely some disadvantages in practice.

It's all just data once humans turn off the protocol and decide what to do next.

I thought the current model assumes DOT cannot be zero, or in “the neighbourhood of zero”:

While the following might be true:

Is it really more magical than assuming DOT>0 ?

Can you point to the proof where this holds when DOT=0?

Note that, of course, such a proof has to be practical. If such a proof existed and, furthermore, was feasible, one would expect to see a simulation as evidence of the existence and feasibility of any claimed DOT dynamics.

I don’t believe I have seen a simulation of DOT dynamics.

Is there such a simulation engine inside the W3F that demonstrates DOT>0 or that the security guarantees hold when DOT=0?

In addition:

Are staking and slashing effective when total DOT=0, or thereabouts?

This is true even when it costs a negligible sum to acquire x% of DOT?

I believe there is/are intermediate state(s) where the fate of the protocol isn't clear to everyone, and it is in this transition phase that there can be value left to protect.

Against DOT valuation collapse, you could only strengthen the relay chain by finding ways to trust the validator operators:

  • KYC operators - 1KV does this, but making everyone do this sounds contentious.
  • Make operators sign contracts - 1KV does not, but should do this. Some centralized professor coins like Dfinity do this.
  • Hold meetups where you discover if operators are distinct people - Treasury should do meetups anyways. 1KV should ask operators to go sometimes.
  • Disallow whales or running multiple nodes - Very contentious and hard to implement

Anything like this strengthens the parachain accounts similarly though, so again there is no place within our threat model where relay chain accounts are more secure than system parachain accounts.

This is not true. If attackers break soundness, which they do in your scenario, then the relay chain enters a garbage-in-garbage-out situation, so humans must manually fix everything.

Ask if, say, Mina or one of the ETH zk rollups has any security when the attacker has a quantum computer. Nope. The attacker breaks the trusted setup and then does whatever they like.


DOT itself is only a heuristic security measure, not a formal one. And only its subjective value to validator operators and nominators matters, not some rate on some exchange. The real thing that'll make the DOT heuristic work is having enough of a software ecosystem that people want to buy parachain blocks, but even here the heuristic is fragile because capitalism sucks. All blockchains have either this same problem, or worse problems, like bitcoin does.

The real scenario you’re discussing is: 1/3rd of validators collude.

Somewhat above that 1/3, those validators could break “soundness”, meaning they finalize one invalid parachain block. At the point they break soundness, they print or steal nearly infinite DOTs, enough to do a relay chain code upgrade, and then everything is theirs. Attackers doing a code upgrade takes time, and so do new validator elections, but attackers already hold control after mere seconds, so the relay chain cannot be saved by code here. Instead, you need a fork or regenesis by honest humans, but based upon old state, not the corrupted state.

As for prices, we’d have roughly DOT=0 by this point, not merely on exchanges, but even relative to some parachain tokens. This is the most likely scenario by which DOT=0 happens, but also DOT=0 means many sellers but no buyers, making this equivalent to your scenario.


Interestingly, parachains would be more secure in your scenario, provided they enforced some additional consensus rules, like bridge parachains sometimes do, because then the relay chain nodes cannot make the parachain do something internally. Amusingly, even the parachain exploited during the attack could remain secure. lol

This is saying Polkadot can provide security that's the strongest of the parachain and relay chain. Relay chain accounts cannot benefit from this, by definition. Bridges are generally the weakest of the two.

The magical thinking is believing that, because the relay chain controls finality, relay chain accounts are more secure than system parachain accounts. It's the same or the opposite, depending upon the parachain.

Appreciate the clarification. We appear to be working from different premises and weighing tradeoffs differently. Which, as I’ve said is categorically distinct from claiming there is no problem under any premise/assumption.

I'd agree the RC can be strengthened in the ways you suggest. I disagree they are the “only” way - but I don't feel you were attempting to be definitive on that point.
I should remark, for a wider audience: a better way would be to not use a speculative token to secure a blockchain. The difficulty with that observation is obvious.

I’d only make these clarifications:

  1. RC account security vs. Parachain account security:

I believe I’ve been clear on the value relay chains offer, and it is a practical one, not a claim that RC account is more secure than a PC account. No magical thinking here:

  2. Breaking soundness:

Here my point is more subtle and would be expected, in technical terms, to show up as solving a free boundary problem, so “break down” becomes a region that occurs some distance from DOT=0 (“or the neighbourhood of zero”): my scenario is when the “fate of the protocol isn't clear to everyone”, so one cannot categorically state: we know an attacker has broken soundness.

Fascinating concept! Simplifying account creation between relay chains could boost decentralization efforts. Looking forward to seeing how this unfolds!


It's the opposite here too: parachain nodes should have available complexity, bandwidth, CPU time, etc. for users, mempools, etc. We'll optimize relay chain validators for a fairly tight workflow, so validators would no longer reserve anything for dealing with users. Among other benefits, this makes the relay chain simpler & easier to maintain. The less it does the better.

Amusingly, one could even hide relay chain nodes behind an anonymity system, making them inaccessible without Tor or whatever.

It’s the opposite, we must know that attackers have not broken soundness, otherwise humans must intervene.

At some point, ZCash discovered they'd published too much information from their trusted setup, aka soundness broken, so they upgraded the protocol with a turnstile from the old protocol. We'd have exactly the same problem, but Polkadot is too flexible for an effective turnstile.


It's like you're conjecturing wild advantages to user-facing code living in an OS task scheduler. No! Just no! That's completely stupid for so many reasons, one of which is debugging the OS task scheduler properly. It's uniquely stupid in our case because of the cost.

There exist user tools like FUSE or WireGuard that integrate with kernels, but they're separate modules that run kernel tasks, intentionally not deeply integrated. We have system parachains for functionality that requires integration with, or similar upgrade authority to, the relay chain.

Again I’ll ask: do you have a proof that establishes these features are required?

If not, then these are choices based on various tradeoffs. Again, as I’ve repeatedly stated: I don’t dispute these are choices. I do dispute, and won’t accept until you point to a proof, that these features are absolute requirements.

In your turn of phrase: There are proofs and everything else is magical thinking.

Appreciate you clarifying your previous description.