Keyless accounts (AIP-61)

Cautionary Note:
The ideas canvassed here reflect a point of view, and an end state, that has authoritatively been described as “fundamentally mistaken”.

Is there any initiative in the Substrate ecosystem similar to the Aptos keyless accounts proposal (AIP-61)? Further high-level details are in their keyless dev documentation.

One point I’d be interested in hearing an opinion on is the use case of seamlessly (automagically) creating an account between relay chains - here the initial chain plays the role of Google. This suggests something like an on-chain OIDC provider.

I do wonder if this steps on the toes of XCM?

My primary interest is in the feasibility of this approach to account creation (setting aside other issues of value translation) in allowing users to move between relay chains. If feasible, it removes one technical objection/difficulty (creating a multitude of accounts) in facilitating further decentralization - moving another step closer to ending the current centralized-decentralized ecosystems.

3 Likes

As AIP-61 yields a SNARK, you could deploy it in a polkadot parachain.

As GitHub - TheFrozenFire/snark-jwt-verify: Verify JWTs using SNARK circuits contains circom circuits, use circom-compat/Cargo.toml at 170b10fc9ed182b5f72ecf379033dda023d0bf07 · arkworks-rs/circom-compat · GitHub and GitHub - paritytech/arkworks-extensions: Library to integrate arkworks-rs/algebra into Substrate.

There exist several questionable claims within the AIP-61 document, like:

[accounts] no longer contains any identifying user information (beyond the identity of the OIDC provider …)

A priori, you’d need a VRF somewhere for this: either some threshold scheme, or else you trust Google for it somehow. RSA-FDH is a VRF by some definitions, but RSA-PSS is not a VRF.

It’s possible they make the user record some secret entropy for the account id, which does not provide access the way a key does. If so, this annoys users, leaks privacy to the user’s gmail, and looks incompatible with non-email OpenID providers. There may be value in paying to a non-existent OpenID account, which forbids this entropy approach.

I’ve not looked closely, but I suspect one could optimize their protocol considerably. You could even verify the RSA signatures directly on-chain, with appropriate blinding tricks for privacy.

You could presumably send coins between parachains with one end being some AIP-61 account, likely over bridges too. I don’t know if the XCM schema requires modification here, but likely not.

Afaik there is no relationship between AIP-61 and multiple relay chains: being able to create accounts in the same way on both says nothing, it just means it’s not worse than a Ledger in that respect.


There exist several notions of “separate” relay chain:

First, independent validator sets like Kusama vs Polkadot could be bridged via BEEFY, roughly like any two flexible & collaborative proof-of-stake blockchains. In this case, users must assume 2/3 honest on both chains.

In particular, Cosmos assumes 2/3rd honest in most or every zone; this becomes unrealistic eventually. Attackers could spin up a chain/zone and behave honestly initially, like by airdropping non-preferred staking tokens in Cosmos’ case, but then later take over using preferred staking tokens and launch attacks against other zones.

If an ecosystem like Cosmos becomes successful, or even if chains bridge one another too easily, then eventually they should suffer attacks like this, but obviously there are many easier attacks in the blockchain world. This is basically the failure mode sharded schemes like polkadot, and roll-ups on ETH, exist to prevent. Too many bridges should eventually fall.

Second, you could require all polkadot validators run one node on each relay chain, so then assuming 2/3rd honest yields that all relay chains are 2/3rd honest, and they can all trust one another, but they still require BEEFY for communication. It’s possible validators would not want to run more nodes of course.

Third, you could adopt the OmniLedger approach: assume roughly 80% honest across the whole polkadot validator set, elect 1000 * k validators, and make one relay chain supply good threshold randomness, with which you randomly reassign the 1000 * k validators to k relay chains each epoch.

All relay chains are then 2/3rd honest, by an argument using concentration inequalities (note OmniLedger claimed smaller than 1000 here, but our work says they’re wrong). Again it follows they can all trust one another, but they still require BEEFY for communication.
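As a rough sketch of the reassignment step described above (all names invented; a real deployment would seed from the threshold randomness beacon, not a fixed byte string):

```python
# Illustrative only: epoch-wise random reassignment of 1000*k validators to
# k relay chains, driven by a shared randomness output.
import random

def assign_validators(validators, k, beacon_output):
    """Shuffle with the epoch's shared randomness, then split into k groups."""
    rng = random.Random(beacon_output)   # seeded by the threshold randomness
    shuffled = list(validators)
    rng.shuffle(shuffled)
    group_size = len(shuffled) // k
    return [shuffled[i * group_size:(i + 1) * group_size] for i in range(k)]

validators = [f"val-{i}" for i in range(1000 * 3)]        # k = 3 relay chains
groups = assign_validators(validators, 3, b"epoch-17-beacon")
```

The concentration argument then says each of the k groups is 2/3rd honest with overwhelming probability, provided the global honesty assumption and the group size of 1000 hold.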

Also, you cannot validate a relay chain using a parachain slot, so nested relay chains make no sense, and no messages without finality via BEEFY or similar either.

1 Like

Yes, that VRF/VUF is a central issue in the discussion on the ZK podcast. If I understand correctly they do have a way of generating randomness - I’m still digesting the ZK podcast episode, so can’t recall immediately where. This is done via a Verifiable Unpredictable Function (VUF), which apparently is common; this in turn gives them a VRF.

There was some tentative skepticism expressed about VDFs, and the point of doubt was whether that skepticism extended to RSA-VDFs. But I’m at the limit of my knowledge so have likely mistaken your point. Or misheard/recalled - I’ll update as required.

I’d agree.
As I indicated, setting aside issues related to value: I think the ability to generate accounts as they describe will become important.
I also should have said setting aside economic security considerations (I believe this is where game-theory incantations start) - of course those considerations are the whole point of everything.

Thanks for sharing the additional insights on relay chains. I obviously will need to study BEEFY more.

It’s unreasonable to anticipate an implementation of what I described without a use-case in hand, so it’ll be interesting to see if additional use cases emerge.

Other than the AIP use case of on-boarding Web2 users to Web3.

1 Like

As a wise man remarked: “There are no solutions, only trade offs”
Mimicking wisdom, I would add that if you are intrigued with this idea the following AIPs shed light on some known trade offs:

1 Like

VRF vs VUF is academic here: VUF typically means the output has some algebraic structure, like say out = sk Hash2Curve(in). You obtain a VRF by breaking this structure using a hash function, like say blake2b(out, in).
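A minimal sketch of that hashing step, with stand-in bytes for the algebraic VUF output (names invented):

```python
# Toy illustration: the VUF output's algebraic structure is destroyed by
# hashing it together with the input, yielding a uniform-looking VRF output.
from hashlib import blake2b

def vrf_from_vuf(vuf_output: bytes, vrf_input: bytes) -> bytes:
    # A verifier first checks the VUF output against its proof, then
    # recomputes this hash to obtain the VRF output.
    h = blake2b(digest_size=32)
    h.update(vuf_output)
    h.update(vrf_input)
    return h.digest()
```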

Who does the VUF/VRF?

It works if they’ve a threshold VRF run by their validators like dFinity, but either this works on-chain, or else requires some other mechanism by which all validators do something. That’s not cheap.

At minimum, any proof-of-stake blockchain pays for block execution time and consensus, which consists of 2-3 rounds of voting. A user must pay 1/2 of the consensus voting round cost merely to access their account id, but they’ve no funds without their account!

You could have some smaller VUF/VRF committee, but then privacy holds only in another threat model.

It works if they’ve some deterministic secret randomness from Google. RSA-FDH would be a VUF. Ed25519 is not a VUF/VRF, but Google would not change the standard, so this ugly hack works too. OIDC uses RSA-PSS here, not Ed25519 or RSA-FDH. If I remember correctly, RSA-PSS uses system randomness in the padding, unless someone derandomized it like Ed25519.
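To illustrate the determinism point, here is a toy RSA-FDH over deliberately insecure demo parameters (a real FDH hashes into the full modulus and uses real key sizes; everything here is illustrative):

```python
# Toy RSA-FDH: shows why FDH is deterministic (hence a VUF candidate),
# whereas RSA-PSS's random salt breaks this property.
from hashlib import sha256

n, e, d = 3233, 17, 2753    # demo key: p=61, q=53; never use in practice

def fdh(msg: bytes) -> int:
    # "Full-domain hash" into Z_n (a real FDH covers the whole modulus)
    return int.from_bytes(sha256(msg).digest(), "big") % n

def sign_fdh(msg: bytes) -> int:
    return pow(fdh(msg), d, n)   # no salt: the same message always signs the same

def verify_fdh(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == fdh(msg)
```

With PSS, by contrast, two signatures over the same message differ, so the signature cannot serve as a deterministic per-input value.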

Some TEE could do the VUF/VRF, but that’s fairly weak privacy.

All of these still leak user identities to whoever runs the VUF/VRF. I’d expect their prover service learns user identities too.

It’s likely the privacy problem is solved badly, so then one option is to provide AIP-61 but ask users not to use it, both for their own privacy and to avoid creating data pollution.

2 Likes

GNU Anastasis provides another avenue here: Ask users to have their own keys, but provide a secure backup facility, which could include OIDC & other identity mechanisms.

I think GNU Anastasis should trust the anastasis providers less than AIP-61 trusts the VRF provider.

1 Like

Yes, you’re right it does.

While my primary interest is in a token design that allows for the necessary participants, and consensus designs, I have found myself often thinking I could be persuaded that users are best served by a Web3 router replacing their current Web2 router, which puts anything critical in their control. Obvious drawbacks to this abound (cost, availability, etc.), but on the basis of the 80/20 rule it is probably not worse than the status quo.

While the Web3 router thought first occurred while looking into Mina, it recurred with Filecoin, and again with these AIPs. There are a couple of projects I can’t immediately recall that may fit the bill, so I wonder if this isn’t where the sensitive things you identify belong.

I don’t, currently, believe a consumer token would pose an obstacle to any of these approaches.

Interesting. I often find myself thinking of the $5 attack when reading many security protocols. In the blockchain context I’ve often thought giving the customer say 5 or 7 credit-card-sized pieces of plastic, printed with recovery instructions and instructions on how to print a sticker with a QR code for a 3/5 or 3/7 recovery, would be fine.

I bet the value of funds stolen via credit card distribution to ‘normies’ is, absolutely or proportionally, a small fraction of the funds the most tech-savvy ‘degen’ users have lost by messing up some chain’s key management requirements, e.g. losing or forgetting a key that is impossible to remember when you use it infrequently.

A couple of times I’ve read some threat model and wondered why they don’t realize the biggest threat their users face is the chain’s own key design/usage requirements.
But then this isn’t really in my wheelhouse.

Is the following understanding correct? Bearing in mind that AIP-61 targets Google, etc., the idea here is to have chain-to-chain account creation, so there are likely some additional degrees of freedom available (nullifiers?).

On Substrate relay chains, including Polkadot, there are no guarantees around account privacy.
AIP-61 canvasses a way to use OIDC to set up an account without keys; this too has no privacy guarantees.

When AIP-61 turns to trying to introduce privacy there are wrinkles (“the privacy problem is solved badly”).

So the question for a Substrate chain would be which non-private account is to be preferred: the classically generated account or the OIDC-generated account. The preference would come down to the trade-offs OIDC introduces.

Or is there some aspect I’m missing that makes the non-OIDC account strictly preferable?

Hey folks :wave:,

Very happy you ran into our Keyless work!

I would be very glad to see this adopted / modified by other chains. I think the whole space stands to benefit. Let me know if I can help! We’ve built almost everything in the open (TXN validation logic, ZKP circuit, pepper service) and we will soon open source the prover service.

I appreciate the nuanced discussion on trade-offs / privacy. I’ll explain how we are thinking about things below :point_down:.

1. Privacy: hiding identifying user info inside a keyless address

Regarding…

There exist several questionable claims within the AIP-61 document, like:
[accounts] no longer contains any identifying user information (beyond the identity of the OIDC provider …)

…as you folks later clarify, hiding identifying user info is indeed possible (no question about it) by simply committing to the identifying user information in the blockchain address. Of course, this requires a blinding factor for the commitment, which we call a pepper (see here).
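A minimal sketch of such a commitment, assuming an invented field layout (Aptos’ actual address derivation differs in its details):

```python
# Illustrative sketch: derive the address as a hiding commitment to the OIDC
# identity (iss, aud, sub), blinded by a high-entropy pepper.
from hashlib import sha256
import secrets

def keyless_address(iss: str, aud: str, sub: str, pepper: bytes) -> str:
    h = sha256()
    for field in (iss, aud, sub):
        h.update(len(field).to_bytes(2, "big"))  # length-prefix to avoid ambiguity
        h.update(field.encode())
    h.update(pepper)                             # blinding factor hides the identity
    return h.hexdigest()

pepper = secrets.token_bytes(32)
addr = keyless_address("https://accounts.google.com", "my-dapp", "sub-123", pepper)
```

Without the pepper, anyone who guesses (iss, aud, sub) could link the address to the user; with it, the address reveals nothing about the identity.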

(I would love to clarify any other claims you found questionable.)

1.1 Pepper service: no need to remember peppers

We do not want the users to remember the pepper used to derive their address. That would defeat the whole point of this approach: it has to be keyless.

As a result, we recommend the use of a pepper service, which you can conceptualize as a privacy guardian for the account. It can be untrusted, if instantiated properly (e.g., via MPC). Naturally, it uses a VRF for efficiently generating peppers. To authenticate users before giving them their pepper, it uses the same [ZKPoKs of] OIDC signatures mechanism that the validators rely on to validate keyless TXNs.

A brief overview of the pepper service is here.

There are a few key points worth emphasizing about the pepper service:

  1. There will likely be an ecosystem of pepper services, each with their own implementations and guarantees. Dapps/wallets can choose their favorite, or write/deploy their own.
  2. It is possible to decentralize the pepper service on the validators. In our case, we can repurpose the Aptos distributed randomness infrastructure for this; see link 3 in “References”. (Although there will be other challenges left.)
  3. The pepper service can obliviously give users their pepper without learning their identity (i.e., the sub, aud and iss in the JWT). How? Via the use of an oblivious VRF carefully composed with the ZKPoK of a valid JWT.
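To illustrate only the oblivious-evaluation idea in point 3, here is a toy sketch using classic RSA blinding as a stand-in for the oblivious VRF, with the ZKPoK-of-JWT authentication step omitted and insecure demo parameters (all names invented):

```python
# User blinds the hashed identity so the server can evaluate its secret
# function without ever seeing (iss, aud, sub).
from hashlib import sha256

n, e, d = 3233, 17, 2753          # insecure demo key held by the pepper server

def hash_to_zn(identity: bytes) -> int:
    return int.from_bytes(sha256(identity).digest(), "big") % n

# User side: blind with a random factor r (fixed here for the demo).
identity = b"iss|aud|sub"
r = 7
blinded = (hash_to_zn(identity) * pow(r, e, n)) % n

# Server side: raises the blinded value to its secret exponent; it sees
# only the blinded group element, never hash_to_zn(identity).
server_reply = pow(blinded, d, n)

# User side: unblind; the result is the deterministic per-identity value
# hash_to_zn(identity)^d mod n, usable as the basis of a pepper.
pepper_basis = (server_reply * pow(r, -1, n)) % n
```

A production oblivious VRF would additionally let the user verify the server’s evaluation, and the ZKPoK of a valid JWT would gate who may query.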

It’s worth noting that we protect privacy in both directions:
(a) Passively-malicious Google does not learn its users’ blockchain addresses, nor their TXN history. (Actively-malicious Google can obtain the pepper from the pepper service on its own.)
(b) The blockchain nodes / validators do not learn the Google identities behind keyless addresses or TXNs.

Lastly, dapps/wallets have optionality. Suppose they do not like the idea of a validator-run pepper service (point #2 :point_up:), due to the risk of undetectable collusion by validators. Then, they could rely on another pepper service deployment (e.g., an oblivious threshold VRF service like Celo’s; see link 4 in “References”).

1.2 Oblivious pepper service

I really want to stress point #3: the pepper service need not learn the identity of the user it is computing the pepper for. This can be achieved without TEEs, without MPC, even for a single-server service: ZKPs + oblivious VRFs.

I hope this provides enough evidence to contradict the claim made above:

It’s likely the privacy problem is solved badly

AFAICT, the privacy problem can be solved very well. For example, as argued above, privacy can hold as long as the PoS assumption holds if the pepper service is decentralized on top of the validators and made oblivious.

This isn’t to say that we have currently implemented all of this; things take time.

2. Avoiding ZKPs

Regarding avoiding ZKPs and just putting the OIDC signature on chain (i.e., no privacy mode), we’ve implemented this too as an emergency mode, in case we need to temporarily turn the ZKP off due to bugs (see link 5 in “References”).

3. Beyond AIP-61

  • A talk I recently gave on keyless (see link 6 in “References”)
  • Slides for the talk above (see link 7 in “References”)
  • DM me, if needed (see link 8 in “References”)

Links

As a new user, I am unable to post more than 2 links, so including everything below without hyperlinks:

link 3: https://aptoslabs.medium.com/roll-with-move-secure-instant-randomness-on-aptos-c0e219df3fb1
link 4: https://docs.celo.org/protocol/identity/odis
link 5: https://github.com/aptos-foundation/AIPs/blob/f3a321184eb93825256fc31bc7bc9ebedb255989/aips/aip-61.md#warm-up-leaky-signatures-that-reveal-the-users-and-apps-identity
link 6: https://www.youtube.com/watch?v=sKqeGR4BoI0
link 7: https://docs.google.com/presentation/d/1nmDYfTiFKgAmPvsodkyrniV4USNdGUIGuWYRYaAxKgI/edit#slide=id.gc98149ca20_0_437
link 8: https://twitter.com/alinush407
7 Likes

Interesting, thanks for the links. Yeah, this sounds useful, so worth supporting somehow, probably a pallet which meshes with the existing accounts pallets.

Is GitHub - near/pagoda-relayer-rs: Rust Reference Implementation of Relayer for NEP-366 Meta Transactions based upon your code too?

The pepper service can obliviously give users their pepper without learning their identity (i.e., the sub, aud and iss in the JWT). How? Via the use of an oblivious VRF carefully composed with the ZKPoK of a valid JWT.

Cool. :slight_smile:

If you’ve multiple OIDC services, then I suppose your pepper service would not require a login per application, but simply one login when instantiating some new frontend, right? In other words, you hit the pepper service once when you first enter the password, but then afterward it works much like OIDC, and the user’s machine rederives the specific identities for different services itself?

In polkadot, we typically use rust code directly, not smart contract languages. We’ll soon-ish provide native calls for some elliptic curve operations, but we designed these calls to fit existing rust zk crates like arkworks, zcash’s zkcrypto, etc. by simply replacing the selected curve crates for the on-chain verifier, so when rust verifiers exist they can be plugged in easily.

If a rust verifier does not yet exist, then arkworks’ circom-compat may be useful here.

2 Likes

Actually users know their OIDC provider, so there is no reason this appears on-chain, right? Also, the chain & pepper server care little about who provides the OIDC, so the only plaintext on-chain should be the pepper server, no? Along with balances or whatever.

You will need to verify that the proof of the signature corresponds to a currently valid public key of the provider, so that information must be on chain.

1 Like

It depends what their circuit does really, and NEAR’s may be different, but:

Yes. There is however a lot of nastiness here, like certificate pinning, which these implementations may or may not do even in cleartext, so it is maybe not worth even trying to do in the SNARK.

It would be very neat to have a way to avoid relying on public keys on chain, but you also have to consider proof generation times, since the prover is the main privacy-leaking component. So, if the complexity of the circuit makes it infeasible to run your own prover, that hinders the privacy of the solution even more.

Afaik, there would still be a unique identifier on-chain in this, but my question was more: do we leak the OIDC provider? I think not necessarily, since the user knows the OIDC provider. And the pepper server provides some secret.

It’s true however that (a) the pepper server should not necessarily be trusted for account survival, and (b) the zk snark would be a bitch if it needed to verify a certificate chain. Simpler is probably better here, even if less private.

since the prover is the main privacy leaking component

Why? Are the proofs all done on the user’s phone/laptop? It’s true these have side channels, but not really something about which folks worry right now.

NEAR made some update to their scheme recently, where they extract some key from this process and use it elsewhere. It sounds odd to me though.

The address must be derived from a JWT signed by a valid public key of the provider, so you need that public key on chain for verification.

The prover service, unless you run your own, learns the whole JWT, including the salt/pepper, to be able to generate the proof. You cannot trust that the signature is from a valid provider key because there is no way to bind the provider identity to the valid keys (they are just downloaded from a well-known location through HTTPS). The thing is that the signing key should match the one “known” on chain. If you decentralize the provers, colluding on that matter will be very unlikely.

As a rule, one should consider outsourced provers only when provers have some massive batch proving optimization, which is not so likely here, so assume users run their own provers locally.

1 Like

@burdges and @alinush can you confirm the context and scope of what you have in mind here?

Specifically, my suggestion was in the context of creating keyless accounts between Relay Chains (L1’s outside of Substrate).

I did not suggest, and do not support, using this to onboard new users from outside the current ecosystems. @alinush I’ll DM you for the reasons for this. But you can see a use case outlined here:

You can think of this functionality as creating escape hatches for users in case either chain fails. Of course users still need a life raft once they have setup an escape hatch. But, baby steps.

This use case also has numerous subtleties to try to work out. But first there needs to be agreement that this is the target use case.

Some keyless account pallet with which individual dapps on parachains enable keyless accounts for their own users. NEAR has a bunch of dapps where users do not even realize they’re using a dapp.

In future, a relay chain should not have accounts. Afaik bridges would provide what users require here.

There are some risks if keyless accounts get used in staking, so we’d hopefully discuss forbidding that.

1 Like