Private Voting on Referenda

I’d like to present a motivation and an idea for how we could enable private voting on Polkadot referenda.

Why vote privately?

Some may argue that one should stand by their beliefs and cast their vote publicly, as Polkadot governance works now, where some people even de-pseudonymize themselves using the identity pallet. For institutional voters, I see the case for transparency. But please hear me out on why privacy matters [video]/[slides] for voting by individuals:


If voting is not private, people tend to vote how they think others expect them to. That leads to very undesirable behaviors: conformity, obedience and submission. In contrast, what I’d like to see in our ecosystem is creativity, exploration and dissent.


In the same presentation I share how Integritee can enable private transactions of any fungible asset on a Substrate-based chain, and referendum 204 is currently running to support this development.

An Integritee privacy sidechain on Polkadot will also allow any user to shield their DOT tokens from the Polkadot relay chain to the private sidechain. As this sidechain is based on a Substrate runtime itself, it could be enhanced to mirror referenda on the relay chain trustlessly using its light client, so people can cast their votes privately on the sidechain. The sidechain controls the sum of all shielded tokens via the vault pure proxy on L1 and can cast a split vote with the entire stake on the sidechain.

In other terms: people vote privately with their DOT on the sidechain, and the result is committed to the relay chain as a split vote (Aye: X, Nay: Y).
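To make the aggregation step concrete, here is a toy sketch (all names are hypothetical; the real logic would live in the sidechain’s Substrate runtime) of how private per-voter ballots could be tallied into the single public split vote cast by the vault proxy:

```python
# Toy model of the proposed scheme: voters cast private ballots on the
# sidechain; only the aggregate split vote is committed to the relay chain.
# All names here are illustrative, not actual pallet APIs.

def aggregate_split_vote(private_ballots):
    """private_ballots: list of (shielded_balance, 'aye' | 'nay') pairs.
    Returns the public split vote the vault proxy would cast on L1."""
    aye = sum(bal for bal, choice in private_ballots if choice == "aye")
    nay = sum(bal for bal, choice in private_ballots if choice == "nay")
    return {"aye": aye, "nay": nay}  # individual ballots stay on the sidechain

ballots = [(100, "aye"), (250, "nay"), (50, "aye")]
print(aggregate_split_vote(ballots))  # {'aye': 150, 'nay': 250}
```

Only the two aggregate numbers ever leave the sidechain; the individual ballots remain in the shielded state.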

The topic of conviction could be sorted out by aggregation, but this would need a change to the convictionVoting pallet, as it currently doesn’t support conviction for split votes.

I’d be happy to elaborate on the technical solution, but I believe we should discuss principles first, assuming that the technical possibility is within reach. Looking forward to your opinions.


First off, I definitely think private voting on referenda is absolutely necessary long-term.

What would prevent a zero-day exploit of SGX (or whatever TEE implementation is used) or a misbehaving chip manufacturer from diverting votes?

Conviction voting simply cannot be supported from a parachain, since there is no way to prevent the parachain from altering the control semantics, effectively allowing the original owner to change them and exit Polkadot. Thus any anonymisation of conviction votes would really need the vote-counting to happen on the same chain as the privacy logic.


I think we need to distinguish between integrity and confidentiality to approach this question:

If we diversify TEE manufacturers, we can prevent a single manufacturer from messing with votes, because the others won’t accept the block. So diversification benefits integrity and can protect from manipulation.

On the other hand, diversification weakens privacy guarantees, because it only takes one TEE technology to leak the state and privacy will be gone. By pruning the sidechain and rotating keys, we can gain some forward secrecy, but we can’t prevent operators from storing pruned blocks forever and replaying them once a vulnerability can be exploited. Maybe a combination of diversified TEEs with MPC could mitigate some of these risks, but TBH I haven’t looked into that angle deeply enough yet to defend it here.

Still, I would claim that we should ask the same question of cryptographic methods for vote privacy as well, because there, too, privacy is temporary. A race against time.

Yeah, that is not trivial. In the case of a parachain, conviction could be enforced if the para is a system chain, right? Then you could be sure that the para logic doesn’t change without going through relay gov.
In the case of sidechains, we might be able to do something similar:
If we assume the voting sidechain is an L2 to the Polkadot relay chain, we could enforce that runtime upgrades of the sidechain are only possible through relay chain governance. This could be enforced in the sidechain enclaves, where the convictionVoting and related pallets ensure that conviction really locks the funds for the correct duration. As the sidechain has a light client into the relay chain, it can read upgrade triggers trustlessly and enforce them thanks to the integrity guarantees of TEEs.

We typically assume that a TEE winds up broken every 6-12 months, and worse, adversaries could find their own private break, so maybe not suitable for governance, although users trusting TEEs for their own smaller concerns remains fine.

We could discuss doing this given some “turnstile”, meaning the DOTs become transparently tied to the TEE parachain, but this inherently leaks who uses the system and who does not. We’d dislike too many people using this, however, but then anonymity loves company, so that’s still not great either.

I’m unsure whether TEEs really support “subversion resistance zero-knowledge”, which requires “perfect” reproducible builds for the enclave; this may be problematic depending upon any code encryption enforced by the enclave.

We envision no stronger alternatives…

Alistair and/or I were asked by various people to work on private voting in governance, for the reasons you give here; real government elections consider privacy essential. We’ve not yet done so seriously because so many more immediate concerns remain, but…

We could do much if (1) accounts encode their metadata in a Pedersen commitment, (2) accounts prove correctness with SNARKs that open this metadata, and (3) accounts encrypt their metadata to themselves so they can produce these SNARKs without off-chain data.
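As a toy illustration of point (1): a Pedersen commitment hides a value while still letting sums be checked, which is what makes aggregate tallies auditable without opening individual votes. The parameters below are deliberately tiny and insecure, purely for shape; real schemes work over elliptic curves:

```python
# Toy Pedersen commitment over a small prime field -- INSECURE parameters,
# purely to illustrate hiding + additive homomorphism.
P = 2_147_483_647          # toy modulus (2^31 - 1); real schemes use curves
G, H = 5, 7                # toy "independent generators" (assumption)

def commit(value, blinding):
    """Commit to `value` under random `blinding`: g^v * h^r mod p."""
    return (pow(G, value, P) * pow(H, blinding, P)) % P

# Hiding: the same value with different blinding looks unrelated.
a = commit(42, 1111)
b = commit(42, 2222)
assert a != b

# Homomorphic: the product of commitments commits to the sum of values,
# so a tally can be checked without opening individual votes.
assert (commit(10, 3) * commit(32, 4)) % P == commit(42, 7)
```

The SNARK of point (2) would then prove statements about the committed value (e.g. that it matches a real balance) without revealing it.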

This is quite a lot more complex than what we have now, but it’s doable once governance kinda stabilizes. I’m unsure TEE schemes really facilitate governance churn either, aka deployment issues. We’d need more CPU for verifying transactions too, but we could mitigate this if staking ran on a system parachain.


You’d trust the TEE not merely for soundness, but for privacy too. If you dislike trusting the TEE for privacy, then your counting protocol incurs similar complexity to the counting protocols for SNARK schemes: you subdivide your vote to hide the balance, encrypt your partial votes to specific upcoming Aura or Sassafras slots, and those block producers create SNARKs of counting the votes they saw.


It depends how quickly people want private voting, what assurances everyone expects, etc.

It seems like everyone went straight to the technical implementations and made the assumption that voting privately is in any way beneficial. (Or maybe you’re just thinking of whether it’s possible prior to discussing whether it should be done?) The problem with voting on these refs, which take endless hours to review, is that it’s very easy to get something wrong, or to view things incorrectly. Then, as a proponent, you’re going to have no idea who you need to talk to, or listen to, in order to get your proposal through. It was only today, @brenzi, that we had discussions on refs you’re involved in; if I were voting anonymously, what would you have done? What would you have been able to do?

IMO, any form of anonymous voting will only further strain the relationship between proponent and treasury. There is already enough strain here to make some proponents snap and make irrational decisions or try sneak tactics to get their vote approved.

Additionally, how will we know if a ref is passing because the proponent voted in favor of their own proposal? Attacks against the treasury would go completely unseen. It is suspected by some that Kusama ref #180 was voted on by the proponents. These dolphin accounts had been staking for a year, and their only executed extrinsics were the votes on this ref. They did have enough KSM to forcibly pass it had they voted with higher conviction; it may be only by the skin of our teeth and the incompetence of the attacker that the Kusama treasury was not the victim in #180.

Simultaneously, it’s hard to vote Nay for numerous reasons highlighted by numerous individuals. I don’t think I really need to go in-depth into it, it’s all fairly obvious. For these reasons we are supportive of a higher minimum passing threshold. (Assuming HACN stops HACN’ing)

I understand that reasoning, and we have had similar discussions already. To me, TEE vs ZK discussions most often boil down to: do you want 80% today, or maybe 98% tomorrow?


My luck was that you chose to be identifiable and that you were willing to discuss our proposal with me at length. I do value this, and you have demonstrated more accountability than many councillors (identifiable too) who just vote or abstain and refuse to comment on the why.

Even in a privacy-preserving voting system you can choose to disclose your vote and defend it.


Decred’s approach to achieving private voting is worth reviewing and is very different from other PoS networks’, given that it is probably the most mature governance-focused network. Privacy is achieved through tickets, which are a subset of governance, per the docs:

Tickets are selected pseudorandomly according to a Poisson distribution. The average time it takes for a ticket to vote is 28 days, but possibly requiring up to 142 days, with a 0.5% chance of expiring before being chosen to vote (this expiration returns the original Ticket Price without a reward).
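For intuition, the quoted figures can be roughly reproduced from Decred’s stated staking parameters (a target pool of 40960 live tickets, 5 tickets voting per ~5-minute block; treat these numbers as assumptions for this back-of-envelope check):

```python
# Back-of-envelope check of the quoted Decred ticket statistics, assuming a
# target pool of 40960 live tickets, 5 tickets drawn per block, and a
# 5-minute block time (parameters assumed, not taken from the thread).
POOL, PER_BLOCK, BLOCK_MINUTES = 40960, 5, 5
p = PER_BLOCK / POOL                      # chance a given ticket votes in a block

mean_blocks = 1 / p                       # geometric mean waiting time
mean_days = mean_blocks * BLOCK_MINUTES / (60 * 24)
print(round(mean_days, 1))                # ~28.4 days, matching "average 28 days"

expiry_blocks = POOL                      # tickets expire after ~40960 blocks
expiry_days = expiry_blocks * BLOCK_MINUTES / (60 * 24)
print(round(expiry_days, 1))              # ~142.2 days, matching "up to 142 days"

p_expire = (1 - p) ** expiry_blocks       # never drawn before expiry
print(round(p_expire * 100, 2))           # ~0.67%, same order as the quoted 0.5%
```

The slight mismatch on the expiry probability suggests the docs account for details (e.g. ticket maturity windows) beyond this simple geometric model.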

To square the circle of private voting and iterative proposal development, we can rethink how proposals are formed in the first place and how they create teams/collectives that can in turn have their own internal voting policies but effectively operate as 1p1v (1collective1vote) in a network.

So the collective’s vote Y/N can be public and therefore acknowledged by proponents, but their internal politics could be shielded using a form of decred’s vote randomisation.

There will never be a perfect answer to this, just as any voting committee will never give perfect feedback - if you’ve ever pitched / proposed to government orgs, agencies, grant giving orgs you will know that it is a game of luck, timing, relationships and of course merit. The ‘best proposals’ are often not the ones that pass, but the ‘right ones’ given the culture will.


I’d say it’s a fairly wide-scoped problem. You’d somehow need to know that the only tokens being used in conviction-voting are definitely not going to move accounts within the time limit after being used in a vote. I doubt chains would want to submit to Polkadot governance processes in order to upgrade and guarantee this and I doubt Polkadot governance would have the throughput to adequately audit all upgrades.

The only way I can imagine it working would be through SPREE.


Privacy regarding voting is a fairly basic requirement for governance to work and helps protect the system from malicious external influence. As a rational actor I behave first and foremost in my own interests, and secondarily in the interests of Polkadot. Now, generally speaking, the two are in alignment. If I honestly think that voting X is good for Polkadot, I’ll vote X. However if some externality (e.g. Agent Smith or a guy with a clawhammer) credibly says “I’ll make it very much in your interest to vote Y” then I will vote rationally, but dishonestly and against the interests of Polkadot. It becomes harder for them to do this if the voting is anonymous. Indeed, the harder it is for them to credibly believe that I did indeed vote one way or the other, then the less influence they can exert on me as a voter. In a perfect system, I would vote merely by the power of thought and it would be impossible for any third-party to know which way I voted or with how many votes; they would only see the overall (binary) result.


Even if we skip the dangers of relying on TEE-stuff, I have still not seen any private-voting scheme that, even at the concept level, would hide the impact of whales voting.

Basically we’re talking about a corporate vote-by-proxy model. In this model the typical shareholder would not be a direct beneficiary of the vote; the only benefit they would receive is maybe a change in leadership or direction. In our case, the voter can be, and often is, a direct beneficiary. While I empathize with your point, the subtle differences between a corporate shareholder model and a software tokenholder model will, IMO, lead to a different set of consequences and behaviors.

When we were the proponent of the IBP, we voted on our own proposal. Many large addresses came to vote in favor of it as well (:heart:). It seemed important to those individuals that we not be the only deciding vote on that ref. There are many rationales for why they may have done this. But one of them was likely that it was important that it not be just the beneficiary of the ref who determines the outcome of the ref. How do you square this situation in a corporate shareholder vote-by-proxy model? As mentioned previously, KSM #180 was possibly a bad-actor-type situation that would have gone through even with HACN voting against it, because their KSM weight was greater than HACN’s, had they applied conviction.

In a pseudonymous setup it’s possible to draw some limited conclusions based on on-chain behaviors. This is not possible in a truly anonymous voting system. Isn’t pseudonymous enough? If we do ultimately proceed down an anonymous route, can we at least limit the conviction to <= 1x? This way, being public, or at least pseudonymous, is incentivized over being anonymous.

What you request isn’t only privacy. It is also the absence of a receipt/proof of what you voted (otherwise you could still sell your vote and prove to the buyer that you voted in the specified way). The best solution for this known to me is that you provide the voter with a proof for each option (Aye, Nay, Abstain), all of which are valid. It is still possible to let the voter know which of the proofs has been counted, but the voter can then decide which proof to present to the buyer, rendering the whole vote-buying deal meaningless.
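A toy sketch of that multi-receipt idea (using plain HMAC tags purely for shape; a real scheme would use cryptographic proofs, and every name here is hypothetical):

```python
# Toy illustration of "one valid receipt per option": every receipt verifies,
# so a vote buyer learns nothing from being shown one.  HMAC tags stand in
# for real cryptographic proofs; names and roles are assumptions.
import hmac, hashlib, secrets

ISSUER_KEY = secrets.token_bytes(32)      # held by a hypothetical tally authority

def issue_receipts(voter_id, actual_choice):
    """Give the voter one verifying receipt per option; only the tallier and
    the voter learn which receipt was actually counted."""
    receipts = {}
    for option in ("aye", "nay", "abstain"):
        msg = f"{voter_id}:{option}".encode()
        receipts[option] = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    counted = receipts[actual_choice]     # communicated privately to the voter
    return receipts, counted

def verify(voter_id, option, receipt):
    msg = f"{voter_id}:{option}".encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt, expected)

receipts, _counted = issue_receipts("alice", actual_choice="aye")
# A buyer demanding proof of a Nay vote can be shown a receipt that verifies
# just as well as the real one -- so a receipt proves nothing.
assert verify("alice", "nay", receipts["nay"])
assert verify("alice", "aye", receipts["aye"])
```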


It seems like everyone went straight to the technical implementations and made the assumption that voting privately is in any way beneficial.

Yes, there is something to discuss here. As a comparison, abstractly validators being anonymous looks more problematic: We can increase the threshold to pass a referendum, but we reduce liveness if we increase the 2/3rds honesty threshold.

IMO, any form of anonymous voting will only further strain the relationship between proponent and treasury. There is already enough strain here to make some proponents snap and make irrational decisions or try sneak tactics to get their vote approved.

Yes, anonymous treasury proposals sound messier; not sure about the facts on the ground, though.


We should discuss & clarify our actual goals for private voting. It definitely adds complexity to stop people proving how they voted, aka receipt freeness, but yeah it’s maybe one of our main interests.

The best solution for this known to me is that you provide the voter with a proof for each option (Aye, Nay, Abstain), all of which are valid. It is still possible to let the voter know which of the proofs has been counted, but the voter can then decide which proof to present to the buyer, rendering the whole vote-buying deal meaningless.

Afaik TEEs could always defeat approaches which merely produce multiple proofs. TEEs are secure enough for this attacker, due to users being divided/disorganized.

It’s maybe possible for users to “update” their vote by producing another vote which reverses it, and then reduce knowledge of the reverse to knowledge of the secret key. An adversary could detect that some parties do this if enough users vote.

Amusing idea: You’ve non-anonymous initial votes, but vote counting is probabilistic and a ZK-MPC thing. If you submit a retraction then your vote will be undone, but nobody will know that it wasn’t just bad luck. I doubt this quite accomplishes our goals, since we presumably want to pass code upgrades even when a powerful figure dislikes the code upgrade. It’s interesting anyways.

To me, TEE vs ZK discussions most often boil down to: Do you want 80% today, or maybe 98% tomorrow

We’re primarily concerned about the security loss from TEEs, so it’s not whether it delivers enough features, but whether we’d ever sacrifice the security.

Agreed. And I see a big difference between a private-tx sidechain, which only causes risk for those who opt in to use it, and private voting for L1 governance, where the risk is taken by the entire network.

I see your concern about accountability of heavyweights. But I think I have a better idea, and it won’t come as a surprise: what if we bounded the voting power of individual entities and people by making private voting sybil-resilient? At least to a bounded extent? This could even be the basis for quadratic voting, which cannot work without sybil-resilience.

Imagine you could only vote (privately) if you provide proof-of-personhood (i.e. decentralized from Encointer, or through centralized/federated KYC registered on KILT?). This is not fantasy: we could basically combine this, this, and this, all rather low-hanging fruit by now.

This way, we would protect the privacy of individuals and address my concerns in the OP, while preserving pseudonymous accountability for institutions.
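For completeness, a tiny worked example of why quadratic voting presupposes sybil-resistance: with weight = sqrt(stake), splitting one stake across fake identities strictly increases voting power, so scarce, verified identities are a precondition:

```python
# Why quadratic voting needs sybil-resistance: with weight = sqrt(stake),
# one person splitting their stake across fake identities gains voting power.
from math import sqrt, isclose

def qv_weight(stakes):
    """Total voting weight for a list of per-identity stakes."""
    return sum(sqrt(s) for s in stakes)

honest = qv_weight([100])        # one verified person, 100 tokens
sybil  = qv_weight([50, 50])     # the same tokens split over two identities
print(honest, round(sybil, 2))   # 10.0 14.14

assert sybil > honest            # splitting pays off unless identities are scarce
assert isclose(honest, 10.0)
```

Proof-of-personhood caps each person at one identity, removing exactly this splitting incentive.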

dumb question - but irrespective of the technology used to shield voting, is the elephant in the room not the fact that pretty soon governments are going to outright ban privacy tech?

decred was just delisted from Binance alongside other privacy focused coins - hence why they built a fee-less, non-custodial atomic swap dex with mesh server - but they are fully, militantly of the original cypherpunk culture and definitively not trying to charm regulators/institutions in the way W3F and DOT is.

if polkadot voting ends up perfectly private, will this not also create challenges with regulators, and then indeed with those utilising the tech?

guess this comes down to the hill you’re willing to die on…

thanks for the tough ball: technology matters here. TEEs have a higher chance of meeting compliance requirements than pure cryptography: TEEs make privacy rules programmable - and governable.


Ain’t so clear that “pretty soon governments are going to outright ban privacy tech”. Yes, some government agencies try, but overall business requires privacy tools. You’ll eventually need the “escape velocity” of becoming critical for the wider economy regardless.

I doubt there is much difference between TEEs and ZKPs/MPCs on the regulatory compliance front, except that TEEs have a ready made backdoor for the US, maybe UK etc, and likely China, but also for anyone who can hire the side channel attacker talent. We do know AML/KYC tricks for ZK transactions too, which I’d hope we deploy long before this stuff.

This is the gist of it. Well said.
