The Sybil Trilemma — Why Physics Is the Answer

This is the second post in a series expanding on our governance stack overview. Today: Layer 1 — the Iris Oracle.


Every Proof of Personhood system must satisfy three properties simultaneously:

  • Uniqueness — one human, one identity, no duplicates

  • Privacy — nobody learns which human holds a given identity

  • Decentralisation — no central authority decides who counts as human

No existing system achieves all three.

KYC sacrifices privacy. Worldcoin sacrifices decentralisation through proprietary Orb hardware — one company controls who gets verified. BrightID sacrifices uniqueness through collusion vulnerability. Social graph systems require a trusted bootstrapping community. Each solves one or two corners of the trilemma and quietly concedes the third.

The Iris Oracle takes a different approach: instead of asking “what does this person look like?” or “what do they know?” — it asks “how does their involuntary nervous system respond to an unpredictable physical stimulus, right now?”


The Core Insight: Physics Does Not Lie

The pupillary light reflex (PLR) is controlled by the autonomic nervous system. When light intensity changes, the pupil contracts — with a characteristic latency of 200–500ms, a physiologically bounded response curve, and natural micro-fluctuations (hippus) that no static image can replicate.

A photograph does not have a nervous system. A deepfake does not have one either. A pre-recorded video response fails the moment the challenge is unpredictable.

This is the fundamental shift: from appearance (fakeable by anyone with a good GPU and enough training data) to involuntary biological response to a real-time unpredictable physical stimulus (requires simulating a living nervous system in real time).

The challenge sequence is generated by on-chain VRF — unpredictable, unique to each verification moment, bound to a specific block hash. You cannot pre-record the correct response because you cannot know what the challenge will be.
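A minimal sketch of how such a challenge sequence could be expanded from a seed (a bare hash chain here, for illustration only; the actual design uses an on-chain VRF so that verifiers can check the seed, and every name below is hypothetical):

```python
import hashlib

def derive_luminance_challenge(block_hash: bytes, steps: int = 20) -> list[float]:
    """Expand a block-hash seed into a sequence of luminance levels (0.0-1.0).

    Hypothetical sketch: a deployment would use a verifiable random function,
    not a plain hash chain, so third parties can verify the sequence.
    """
    levels = []
    state = block_hash
    for _ in range(steps):
        state = hashlib.sha256(state).digest()
        # Map the first 4 bytes of each round to a luminance level in [0, 1].
        levels.append(int.from_bytes(state[:4], "big") / 0xFFFFFFFF)
    return levels

challenge = derive_luminance_challenge(bytes.fromhex("ab" * 32))
```

Because the sequence is a pure function of the block hash, the same block always yields the same challenge (so anyone can re-derive it), while a different block yields a completely different one (so nothing can be pre-recorded).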


The Four-Factor Model

Four signals are captured simultaneously in a single 10-second optical moment:

| Factor | Signal | What It Proves |
| --- | --- | --- |
| Iris | IrisCode (266 degrees of freedom) | Who are you? False match rate < 1 in 1.2M |
| Pupillary Response | PLR correlation with VRF luminance challenge | Are you alive and present? Autonomic, involuntary, physiologically bounded |
| Corneal Reflection | Purkinje point tracking | Are you looking at THIS challenge? Position and colour must match the VRF trajectory |
| Heart Rate Variability | rPPG (camera) + PPG (smartwatch) | Are you unique and willing? Cross-validated liveness; individual HRV signature |
The cross-device HRV validation deserves a note: if an attacker points their phone at a puppet face while wearing the smartwatch themselves, the camera captures one heart while the watch measures another; the mismatch is detectable.
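A toy version of that cross-check (thresholds and inter-beat-interval values are invented for illustration; a real implementation would also correlate beat timing, not just average rate):

```python
import statistics

def heart_rates_consistent(rppg_ibis_ms, ppg_ibis_ms, tolerance_bpm=5.0):
    """Compare mean heart rate from camera rPPG and smartwatch PPG
    inter-beat intervals (ms). A sustained rate mismatch between the two
    streams flags the puppet-face attack described above. The 5 bpm
    tolerance is a placeholder, not a validated threshold.
    """
    to_bpm = lambda ibis: 60_000.0 / statistics.mean(ibis)
    return abs(to_bpm(rppg_ibis_ms) - to_bpm(ppg_ibis_ms)) <= tolerance_bpm

# Same heart seen by both sensors: nearly identical intervals.
same = heart_rates_consistent([820, 810, 830, 825], [818, 812, 828, 826])
# Two different hearts: roughly 73 bpm on camera vs roughly 95 bpm on watch.
diff = heart_rates_consistent([820, 810, 830], [630, 640, 625])
```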


Vote-Moment Binding: No Credential to Steal

A critical design decision distinguishes the Iris Oracle from credential-based systems.

The proof is not generated once and stored. It is generated at the exact moment of each action, bound to the block hash of that specific vote:

  1. Chain generates a VRF challenge unique to this block

  2. Device displays the challenge while camera captures the eye

  3. On-device ZK-proof computed — raw biometric data never leaves the device

  4. ZK-proof and vote submitted as a single atomic transaction

There is no credential to steal. The proof is the action. You cannot sell your vote because the vote is literally chained to your body at the moment of casting. You cannot buy someone else’s voting power without being physically present as them, in real time, for exactly the challenge generated at that block.
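The binding can be made concrete with a toy sketch (the ZK proof is mocked as a hash, and every name is hypothetical; the point is only that the proof commits to the same block hash the vote is cast against, so neither part is reusable on its own):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class SignedVote:
    vote: str
    proof: str
    block_hash: str

def cast_vote(block_hash: str, vote: str, biometric_capture: bytes) -> SignedVote:
    """Sketch of vote-moment binding. A real system computes a ZK proof
    on-device from the biometric capture; a hash merely illustrates that
    the proof is bound to this exact block and cannot be replayed."""
    proof = hashlib.sha256(biometric_capture + block_hash.encode()).hexdigest()
    # Vote and proof travel together as one atomic transaction.
    return SignedVote(vote=vote, proof=proof, block_hash=block_hash)
```

The same capture submitted against a different block produces a different proof, which is exactly why a stolen recording is worthless.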


A Candidate Fifth Factor: Tissue Impedance Spectroscopy

Living biological tissue has frequency-dependent impedance described by the Cole-Cole model: cell membranes behave as capacitors, ion channels as resistors, and the combined response at 1 kHz is measurably different from the response at 100 kHz.
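Under illustrative parameter values (not measured tissue constants), the Cole-Cole dispersion can be computed directly; the magnitude of Z falls as frequency rises, which is the measurable 1 kHz vs 100 kHz difference described above:

```python
import math

def cole_cole_impedance(freq_hz, r0=1000.0, r_inf=100.0, tau=1e-4, alpha=0.2):
    """Cole-Cole tissue impedance model:
    Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)^(1 - alpha)).
    All parameter values here are illustrative placeholders.
    """
    w = 2 * math.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** (1 - alpha))

# |Z| decreases with frequency: cell membranes (capacitive) pass
# high frequencies more easily than low ones.
z_1k = abs(cole_cole_impedance(1_000))
z_100k = abs(cole_cole_impedance(100_000))
```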

The proposal: the VRF generates a randomised frequency sweep sequence. The smartwatch passes weak currents through the skin at each frequency. The measured impedance must follow the Cole-Cole curve — and because the sequence is unpredictable, no pre-recorded response can match it.

This is the same challenge-response principle as the PLR, applied to electrical signals rather than optical ones. An attacker with a rooted smartwatch who injects a fake PPG/HRV signal must simultaneously produce a frequency-correct tissue impedance response to an unpredictable challenge. Two independent physical phenomena, mutually consistent, in real time.

Our key open hardware question: Is the Cole-Cole frequency response measurable at sufficient signal-to-noise ratio using the dry capacitive skin electrodes present in ECG-capable consumer watches (Apple Watch Series 4+, Samsung Galaxy Watch)? Wet gel electrodes provide better contact — consumer watches use capacitive dry contact. We do not have experimental data on this.

If anyone has worked with BIA or ECG morphology on consumer smartwatch hardware, we would particularly value your input.


What We Evaluated and Rejected

The four-factor model reflects a longer evaluation. Several candidates were seriously considered and rejected — we document them because the reasoning matters:

Geomagnetic field — A $50 electromagnet overwrites any smartphone magnetometer. Signal-to-noise on consumer hardware is insufficient. Rejected.

GPS coordinates — Directly includes location, violating the privacy requirement. Even coarse ZK-proofs over GPS are highly re-identifying over time. Rejected.

DNA hybridisation — The strongest possible biometric, and the privacy objection is largely solvable with ZK on-device processing. Two fatal problems remain: hardware centralisation (calibrated reagents require a supply chain — the Worldcoin Orb problem in biochemical form), and irrevocability. You can re-enrol your iris if the ZK scheme is ever compromised. You cannot change your genome. Rejected on irrevocability grounds.

ECG morphology — The P-wave, QRS complex, and T-wave are individually unique and stable across years (~95%+ recognition accuracy). Apple Watch Lead-I ECG uses exactly the two-electrode configuration we describe for TIS. This is a genuine candidate for extending Factor 4 — the constraint is accessibility (arrhythmias, pacemakers, certain medications alter morphology). Not rejected; flagged as a strong candidate for a future protocol extension.


Three Experiments Before a Formal DIM Proposal

We are transparent about what remains unvalidated:

  1. Purkinje reflection imaging — does consumer smartphone resolution support corneal reflection tracking at sufficient precision?

  2. rPPG–PPG cross-validation — can two different hearts be reliably distinguished at smartphone signal quality in real-world conditions?

  3. PLR demographic validation — what are the false rejection and false acceptance rates across age groups, medications, and ambient lighting conditions?

A working proof-of-concept (Python, MediaPipe, OpenCV) demonstrates that pupillary response to visual stimuli is measurable on commodity webcams and that correlation analysis can distinguish biological responses from static or pre-recorded signals. The three experiments above will determine whether the hardware assumptions hold at population scale.

Interesting approach. I have some initial questions:

  • Would this be deterministic?
  • How does the PLR and HRV vary over time as people age? Is this taken into account?
  • How do you determine that two proofs from the same human (for example, before and after being sick, resulting in different HRV) are from the same human and not from two different humans?
  • How often would the human be required to repeat the proof?
  • Where and how are the proofs compared to one another? Is there a database?

Great questions, here are my views on that:

  1. Deterministic?

The challenge is deterministic (VRF-derived, reproducible given the seed). The response is probabilistic but bounded — a human pupil will always respond to a brightness change within 200–500ms, but the exact diameter at any millisecond varies. The system doesn’t require exact reproduction — it requires that the response falls within physiologically valid bounds. Think of it like a CAPTCHA: the challenge is precise, the valid answer space is a range, not a point.
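That range check can be sketched directly (bounds taken from the 200–500ms latency quoted earlier; a real verifier would also score the shape of the full response curve, not just onset latency):

```python
def latency_within_bounds(stimulus_t_ms, constriction_onset_t_ms,
                          lo_ms=200, hi_ms=500):
    """The valid answer space is a range, not a point: the pupil must begin
    constricting 200-500 ms after the brightness step. Too fast suggests a
    synthetic pre-timed signal; too slow suggests no live reflex."""
    latency = constriction_onset_t_ms - stimulus_t_ms
    return lo_ms <= latency <= hi_ms
```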

  2. PLR and HRV variation with aging?

PLR: Pupil reactivity decreases with age (smaller baseline diameter, slower constriction). This is well-documented and can be modelled — the verification bounds are age-adjusted. The key insight is that even a 70-year-old’s pupil still responds involuntarily; the response is weaker but still present and still unfakeable.

HRV: Decreases with age and varies with health status. HRV is NOT used as an identifier — it’s used as a liveness signal. The cross-validation between camera-derived rPPG and smartwatch PPG proves a living human is present. The individual HRV signature adds a soft biometric layer but is not the primary uniqueness factor (that’s the IrisCode).

  3. Same human, different health states?

The IrisCode is the uniqueness anchor — iris patterns are stable from ~2 years of age until death, unaffected by illness, aging, or emotional state (the iris texture itself doesn’t change; only the pupil size does, and IrisCode normalises for that). So: the sick you and the healthy you have the same IrisCode. The PLR response bounds may shift, but the identity match comes from the iris, not the reflex.
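Daugman-style iris matching makes this concrete: identity is decided by the fractional Hamming distance between binary codes, so a re-capture with some sensor noise still matches while a stranger's code sits near 0.5. The 2048-bit code size and ~0.32 threshold below are standard figures from the iris-recognition literature, not parameters from this paper, and eyelid/eyelash masking is omitted for brevity:

```python
import random

def hamming_distance(code_a: int, code_b: int, bits: int = 2048) -> float:
    """Fractional Hamming distance between two binary iris codes,
    represented here as big integers standing in for bit vectors."""
    return bin((code_a ^ code_b) & ((1 << bits) - 1)).count("1") / bits

random.seed(0)
enrolled = random.getrandbits(2048)
# Same eye, re-captured while sick: a handful of noisy bits flip.
sick = enrolled ^ sum(1 << i for i in random.sample(range(2048), 100))
# A different person's iris: statistically independent bits.
stranger = random.getrandbits(2048)
```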

HRV changes between sick and healthy states are irrelevant because HRV proves liveness, not identity.

  4. How often would the proof be required?

That’s a governance design decision, not a protocol constraint. Options:

  • Once per voting period (e.g. quarterly re-verification)
  • On-demand when casting a vote
  • Periodic refresh (e.g. annually) with session tokens in between

The paper proposes this as configurable per deployment. The cryptographic identity persists; the proof refreshes it.

  5. Where are proofs compared? Is there a database?

No central database. The IrisCode is hashed and stored as a commitment on-chain (People Chain). Uniqueness is verified via a zero-knowledge proof: “my IrisCode doesn’t match any existing commitment” without revealing the code itself. This is the same approach as Worldcoin’s Semaphore protocol, but without proprietary hardware: any smartphone with a front camera can generate the proof, paired with a health watch (for the HRV).

The PLR/HRV data is never stored; it is used in real time during the verification ceremony and then discarded. Only the cryptographic attestation persists.
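A plain, deliberately non-ZK illustration of that uniqueness check (all names hypothetical): only the hash of the IrisCode is stored, and a duplicate commitment is rejected. The real design replaces the direct set lookup with a zero-knowledge non-membership proof so the code itself is never revealed, even to the verifier:

```python
import hashlib

class CommitmentRegistry:
    """Toy stand-in for the on-chain commitment set. Enrolment stores only
    a hash of the IrisCode and fails if that commitment already exists."""

    def __init__(self):
        self.commitments: set[str] = set()

    def enrol(self, iris_code: bytes) -> bool:
        commitment = hashlib.sha256(iris_code).hexdigest()
        if commitment in self.commitments:
            return False          # duplicate human: reject
        self.commitments.add(commitment)
        return True
```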

All factors you suggest can at best convince a trusted device that it is in front of a real person. They can’t trustlessly convince a third-party remote observer, which would be necessary for proof of personhood. I have my doubts about HRV proving “willingness”, but I see more fundamental issues.

Let’s assume you open-source it all as you should. An attacker would then:

  1. Take any IrisCode (simply invented or possibly captured without the consent of the subject)
  2. Mock PLR, CR and HRV tests and always return “passed”
  3. Send a proof to the chain

AFAIU there is nothing that links IrisCode to PLR or HRV which you could use to harden the proof.

Worldcoin uses trusted hardware with certificates issued by a trusted party and is therefore centrally permissioned. You don’t have (and don’t want) this luxury; you aim to be permissionless, as I understand it. In that case, any biometrics are the wrong approach, IMO.

Thank you, and to be clear: I don’t claim to have a complete solution yet. This is an ongoing process I wanted to share with the community. The Iris Oracle is designed to try to make forgery significantly harder and economically irrational for most use cases, not to make it impossible. The full technical paper with the adversarial analysis is available on request if you want to go deeper.

I’d be happy to review your full paper if you consider submitting a proposal to the Kusama Vision PoP bounty.


A good report by Oxfam about other types of risks when using biometrics:
