Keyless accounts (AIP-61)

Cautionary Note:
The ideas canvassed here reflect a point of view, and an end state, that has authoritatively been described as "fundamentally mistaken".

Is there any initiative in the Substrate ecosystem similar to the Aptos keyless accounts proposal (AIP-61)? Further high-level details are in their keyless dev documentation.

One point I’d be interested in hearing an opinion on is the use case of seamlessly (automagically) creating an account between relay chains; here the initial chain plays the role of Google. This suggests something like an on-chain OIDC provider.

I do wonder if this steps on the toes of XCM?

My primary interest is in the feasibility of this approach to account creation (setting aside other issues of value translation) in allowing users to move between relay chains. If feasible it removes one technical objection/difficulty (creating a multitude of accounts) in facilitating further decentralization - moving another step closer to ending the current centralized-decentralized ecosystems.

As AIP-61 yields a SNARK, you could deploy it on a Polkadot parachain.

As TheFrozenFire/snark-jwt-verify contains Circom circuits, one could use arkworks-rs/circom-compat together with paritytech/arkworks-extensions, which integrates arkworks-rs/algebra into Substrate.
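For orientation, a hedged sketch of what such JWT-verifying circuits operate over: an RS256 JWT's signature covers the base64url-encoded header and payload joined by a dot, and the circuit proves knowledge of a valid signature over that string. All values below are illustrative, not taken from any of the linked repos:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Toy JWT pieces; a real Google token carries iss/sub/aud/nonce claims,
# with the nonce typically committing to an ephemeral public key
header = {"alg": "RS256", "typ": "JWT", "kid": "example-key"}
payload = {"iss": "https://accounts.google.com", "sub": "1234567890",
           "aud": "example-client-id", "nonce": "commitment-to-eph-pubkey"}

signing_input = b64url(json.dumps(header).encode()) + "." + \
                b64url(json.dumps(payload).encode())

# The RSA signature (and hence the SNARK circuit) is over a hash of this string
digest = hashlib.sha256(signing_input.encode()).hexdigest()
print(digest)
```

The circuit's job is then to verify the RSA signature over this digest while keeping the identifying claims private.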

There exist several questionable claims within the AIP-61 document, like:

[accounts] no longer contains any identifying user information (beyond the identity of the OIDC provider …)

A priori, you’d need a VRF somewhere for this: either some threshold thing, or else you trust Google for it somehow. RSA-FDH is a VRF by some definitions, but RSA-PSS is not a VRF.

It’s possible they make the user record some secret entropy for the account id, which does not provide access like a key does. If so, this annoys users, leaks privacy to the user’s Gmail, and looks incompatible with non-email OpenID providers. There may be value in paying into a non-existent OpenID account, which this entropy forbids.
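If I've understood the idea, the account id would be a hash binding the OIDC identity to the user-held secret entropy, so the on-chain id reveals neither the provider subject nor the application. A minimal sketch, with hypothetical field names and derivation (nothing here is AIP-61's actual scheme):

```python
import hashlib

def keyless_account_id(iss: str, sub: str, aud: str, pepper: bytes) -> str:
    # Hypothetical derivation: hash the OIDC identity fields together with
    # secret user-held entropy ("pepper"), so the account id alone is unlinkable
    h = hashlib.sha256()
    for part in (iss.encode(), sub.encode(), aud.encode(), pepper):
        h.update(len(part).to_bytes(2, "big"))  # length-prefix to avoid ambiguity
        h.update(part)
    return h.hexdigest()

a = keyless_account_id("https://accounts.google.com", "alice", "app", b"\x01" * 32)
b = keyless_account_id("https://accounts.google.com", "alice", "app", b"\x02" * 32)
print(a != b)  # same identity, different pepper: unlinkable account ids
```

The catch noted above: losing the pepper loses the account id, yet holding it gives no signing capability, so the user bears key-management pain without key-management power.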

I’ve not looked closely, but I suspect one could optimize their protocol considerably. You could even verify the RSA signatures directly on-chain, with appropriate blinding tricks for privacy.
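One classical blinding trick, sketched with toy textbook-RSA numbers (insecure parameters, purely illustrative): an FDH-style signature s on m can be re-randomized, since (s·r)^e ≡ m·r^e (mod n), so a chain could check a blinded pair without seeing the original message:

```python
# Toy textbook RSA (insecure parameters, for illustration only)
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

m = 1234 % n
s = pow(m, d, n)            # FDH-style signature: s^e = m (mod n)
assert pow(s, e, n) == m

r = 999 % n                 # blinding factor, coprime to n
s_blind = (s * r) % n
m_blind = (m * pow(r, e, n)) % n
print(pow(s_blind, e, n) == m_blind)  # blinded pair still verifies
```

This is only the algebraic core; real RSA-PSS padding does not re-randomize this way, which is part of why the VRF question below matters.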

You could presumably send coins between parachains with one end being some AIP-61 account, likely over bridges too. I don’t know if the XCM schema requires modification here, but likely not.

Afaik there is no relationship between AIP-61 and multiple relay chains; being able to create accounts in the same way on both says nothing, it just means it’s not worse than a Ledger in that respect.

There exist several notions of “separate” relay chain:

First, independent validator sets like Kusama vs Polkadot could be bridged via BEEFY, roughly like any two flexible & collaborative proof-of-stake blockchains. In this case, users must assume 2/3 honest on both chains.

In particular, Cosmos assumes 2/3rd honest in most or every zone; this becomes unrealistic eventually. Attackers could spin up a chain/zone and behave honestly initially, e.g. by airdropping non-preferred staking tokens in Cosmos’ case, but later take over using preferred staking tokens and launch attacks against other zones.

If an ecosystem like Cosmos becomes successful, or even if chains bridge one another too easily, then eventually they should suffer attacks like this, although obviously there are many easier attacks in the blockchain world. This is basically the failure mode that sharded schemes like Polkadot, and rollups on ETH, exist to prevent. Too many bridges should eventually fall.

Second, you could require all Polkadot validators to run one node on each relay chain; then assuming 2/3rd honest overall yields that each relay chain is 2/3rd honest, and they can all trust one another, though they still require BEEFY for communication. It’s possible validators would not want to run more nodes, of course.

Third, you could adopt the OmniLedger approach: assume 80-ish% honest across the whole Polkadot validator set, elect 1000 * k validators, and make one relay chain supply good threshold randomness, with which you randomly reassign the 1000 * k validators to k relay chains each epoch.

All relay chains are then 2/3rd honest, by an argument using concentration inequalities (note OmniLedger claimed fewer than 1000 suffice here, but our work says they’re wrong). Again it follows that they can all trust one another, but they still require BEEFY for communication.
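A toy sketch of the epoch reassignment described above (all parameters illustrative): seed a shuffle of the 1000·k validators with the shared threshold randomness, cut into k shards, and check each shard's honest fraction. Concentration around the global 80% is what the inequalities buy you:

```python
import random

def assign_shards(num_validators: int, k: int, seed: int) -> list:
    # Seed the shuffle with the beacon output (the shared threshold randomness),
    # so every node computes the same assignment
    rng = random.Random(seed)
    ids = list(range(num_validators))
    rng.shuffle(ids)
    size = num_validators // k
    return [ids[i * size:(i + 1) * size] for i in range(k)]

# Toy run: 4000 validators, 80% honest overall, k = 4 relay chains
honest = set(range(3200))
shards = assign_shards(4000, 4, seed=42)
fractions = [len(honest & set(s)) / len(s) for s in shards]
print(min(fractions))  # with 1000-validator shards this stays well above 2/3 w.h.p.
```

With 1000-validator shards the per-shard honest fraction has standard deviation around 1%, so falling from 80% to below 2/3 is a many-sigma event; smaller shards weaken this margin, which is the point of dispute with OmniLedger's claimed sizes.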

Also, you cannot validate a relay chain using a parachain slot, so nested relay chains make no sense, nor can messages pass without finality via BEEFY or similar.


Yes, that VRF/VUF is a central issue in the discussion on the ZK podcast. If I understand correctly they do have a way of generating randomness; I’m still digesting the episode, so can’t recall immediately where. This is done via a Verifiable Unpredictable Function (VUF), which are apparently common; this in turn gives them a VRF.

There was some tentative skepticism expressed about VDFs, and the point of doubt was whether that skepticism extended to RSA VDFs. But I’m at the limit of my knowledge, so have likely mistaken your point, or misheard/misrecalled; I’ll update as required.

I’d agree.
As I indicated, setting aside issues related to value: I think the ability to generate accounts as they describe will become important.
I also should have said “setting aside economic security considerations” (I believe this is where game-theory incantations start); of course those considerations are the whole point of everything.

Thanks for sharing the additional insights on relay chains. I obviously will need to study BEEFY more.

It’s unreasonable to anticipate an implementation of what I described without a use-case in hand, so it’ll be interesting to see if additional use cases emerge.

Other than the AIP use case of on-boarding Web2 users to Web3.


As a wise man remarked: “There are no solutions, only trade-offs.”
Mimicking wisdom, I would add that if you are intrigued by this idea, the following AIPs shed light on some known trade-offs:


VRF vs VUF is academic here: VUF typically means the output has some algebraic structure, like say out = sk · Hash2Curve(in). You obtain a VRF by breaking this structure using a hash function, like say blake2b(out, in).
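A hedged sketch of that structure-breaking step, using a deterministic RSA-FDH as the toy VUF core (insecure parameters, illustrative names only):

```python
import hashlib

# Toy textbook RSA as the VUF core (insecure parameters, illustration only)
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def vuf(msg: bytes) -> int:
    # Full-domain-hash shrunk to the toy modulus, then RSA-sign:
    # deterministic, unique output with algebraic (multiplicative) structure
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def vrf(msg: bytes) -> bytes:
    # Destroy the algebraic structure by hashing output together with input
    out = vuf(msg)
    return hashlib.blake2b(out.to_bytes(32, "big") + msg).digest()

print(vrf(b"hello").hex()[:16])
```

The VUF output alone is malleable via RSA's multiplicative structure; the final hash is what makes the result behave like uniform randomness.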

Who does the VUF/VRF?

It works if they’ve a threshold VRF run by their validators, like dFinity, but either this runs on-chain or else it requires some other mechanism by which all validators do something. That’s not cheap.

At minimum, any proof-of-stake blockchain pays for block execution time and consensus, which consists of 2-3 rounds of voting. A user must pay 1/2 of a consensus voting round’s cost merely to access their account id, but they’ve no funds without their account!

You could’ve some smaller VUF/VRF committee, but then privacy holds only in another threat model.

It works if they’ve some deterministic secret randomness from Google. RSA-FDH would be a VUF. Ed25519 is not a VUF/VRF, but Google would not change the standard, so this ugly hack works too. OIDC uses RSA-PSS here, not Ed25519 or RSA-FDH. If I remember correctly, RSA-PSS uses system randomness in the padding, unless someone derandomizes it like Ed25519.
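The determinism point is the crux, sketched with toy parameters (insecure, illustrative only): an FDH-style signature is a unique function of the message, whereas a PSS-style signature folds in a fresh random salt, so repeated signatures of one message differ and the unique-output (VUF) property is lost:

```python
import hashlib
import secrets

# Toy textbook RSA (insecure parameters, for illustration only)
p, q = 32749, 32719
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def sign_fdh(msg: bytes) -> int:
    # Deterministic: same message always yields the same signature (a VUF)
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)

def sign_salted(msg: bytes) -> tuple:
    # PSS-style idea: fresh random salt mixed into the padded hash, so
    # repeated signings of one message differ, breaking unique output
    salt = secrets.token_bytes(8)
    h = int.from_bytes(hashlib.sha256(salt + msg).digest(), "big") % n
    return salt, pow(h, d, n)

print(sign_fdh(b"m") == sign_fdh(b"m"))              # deterministic
print(sign_salted(b"m")[1] == sign_salted(b"m")[1])  # almost surely differs
```

This is why deriving account ids from the provider's signature only works if the provider's signing is (or is made) deterministic.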

Some TEE could do the VUF/VRF, but that’s fairly weak privacy.

All of these still leak user identities to whoever runs the VUF/VRF. I’d expect their prover service learns user identities too.

It’s likely the privacy problem is solved badly, so then one option is to provide AIP-61 but ask users not to use it, both for their own privacy and to avoid creating data pollution.


GNU Anastasis provides another avenue here: Ask users to have their own keys, but provide a secure backup facility, which could include OIDC & other identity mechanisms.

I think GNU Anastasis trusts the Anastasis providers less than AIP-61 trusts the VRF provider.


Yes, you’re right it does.

While my primary interest is in a token design that allows for the necessary participants, and in consensus designs, I have often found myself thinking I could be persuaded that users are best served by a Web3 router replacing their current Web2 router, which puts anything critical in their control. Obvious drawbacks abound (cost, availability, etc.), but on the basis of the 80/20 rule it is probably no worse than the status quo.

While the Web3 router thought first occurred while looking into Mina, it recurred with Filecoin, and again with these AIPs. There are a couple of projects I can’t immediately recall that may fit the bill, so I wonder if this isn’t where the sensitive things you identify belong.

I don’t, currently, believe a consumer token would pose an obstacle to any of these approaches.

Interesting. I often find myself thinking of the $5 attack when reading many security protocols. In the blockchain context, I’ve often thought that giving the customer say 5 or 7 credit-card-sized pieces of plastic, printed with recovery instructions and instructions for printing a QR-code sticker for a 3-of-5 or 3-of-7 recovery, would be fine.
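The card scheme above is essentially Shamir secret sharing; a minimal sketch over a prime field (toy field and numbers, illustrative only):

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split(secret: int, k: int, m: int, rng: random.Random) -> list:
    # Random degree-(k-1) polynomial with the secret as constant term;
    # each card gets one evaluation point (x, f(x))
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, m + 1)]

def recover(shares: list) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

rng = random.Random(7)
secret = 0xC0FFEE
shares = split(secret, k=3, m=5, rng=rng)
print(recover(shares[:3]) == secret)  # any 3 of the 5 cards suffice
```

Any k shares reconstruct; fewer than k reveal nothing about the secret, which is the property that makes losing one or two cards harmless.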

I bet the value of funds stolen via credit-card distribution to ‘normies’ would be, absolutely or proportionally, a small fraction of the funds the most tech-savvy ‘degen’ users have lost by messing up some chain’s key-management requirements, e.g. losing or forgetting a key that is impossible to remember when used infrequently.

A couple of times I’ve read some threat model and wondered why they don’t realize the biggest threat their users face is the chain’s own key design/usage requirements.
But then this isn’t really in my wheel house.

Is the following understanding correct? Bearing in mind that AIP-61 targets Google etc., whereas the idea here is chain-to-chain account creation, there are likely some additional degrees of freedom available (nullifiers?).

On Substrate relay chains, including Polkadot, there are no guarantees around account privacy on the relay chain.
AIP-61 canvasses a way to use OIDC to set up an account without keys; this too has no privacy guarantees.

When AIP-61 turns to trying to introduce privacy there are wrinkles (“the privacy problem is solved badly”).

So the question for a Substrate chain would be which non-private account is to be preferred: the classically generated account or the OIDC-generated account. The preference would come down to the trade-offs OIDC introduces.

Or is there some aspect I’m missing that makes the non-OIDC account strictly preferable?