Hey everyone, Kanasjnr here again.
After the really positive response on my first post (and the super helpful insights from @elizabeth-crites, @danicuki, and others), I wanted to go deeper into the practical details.
This post builds directly on the first, but I’ve slowed down the explanations, added more analogies, included concrete examples, and even added a new section diving into ticket mechanics and Ring VRF internals (since it was mentioned in the comments).
Everything here is based on:
- The Web3 Foundation three-part blog series (SSLE details in Part 2, comparisons in Part 3)
- The full Sassafras research paper (security proofs, Ring VRF usage)
- The JAM Grey Paper, especially §6 on Safrole block production (with equations 6.1–6.12 for state transitions)
- Logical extrapolation of how this will feel in actual validator operation
If I’ve got anything wrong, feel free to point it out; this is based on my own research and careful reading, not official docs.
Inside Sassafras: Ticket Mechanics, Ring VRFs, and Deterministic Slot Assignment
“How does it really work?” Let’s tackle that head-on.
This is protocol-level exposition on ticket generation and why Ring VRFs are essential. I’ve kept steps and math from the papers for reference.
From Per-Slot Lottery to Epoch-Wide Sampling
Today (BABE): Each slot is independent. Validators compute:
VRF_sk(epoch_randomness || slot_number)
If the output < threshold, you’re eligible.
Independence creates Poisson variance:
- Probability of an empty slot: (P(\text{empty}) = \prod_i(1 - p_i))
- Probability of two or more winners: strictly positive → forks (and the zero-winner case → empty slots)
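The variance these bullets describe is easy to see in a toy Monte Carlo. This is illustrative only: a uniform per-validator probability `p` stands in for the real stake-weighted VRF threshold.

```python
# Toy simulation of BABE-style per-slot lotteries (not the real VRF math):
# each of n validators independently "wins" a slot with probability p,
# tuned so the expected number of winners per slot is about 1.
import random

random.seed(42)
n_validators, n_slots = 100, 10_000
p = 1 / n_validators  # expected winners per slot = n * p = 1

empty, single, multi = 0, 0, 0
for _ in range(n_slots):
    winners = sum(random.random() < p for _ in range(n_validators))
    if winners == 0:
        empty += 1      # empty slot: chain pauses 6 s
    elif winners == 1:
        single += 1     # exactly one block, the happy path
    else:
        multi += 1      # fork: multiple candidate blocks

# Binomial(100, 1/100) is close to Poisson(1): roughly 37% empty,
# 37% single-winner, 26% multi-winner slots.
print(empty / n_slots, single / n_slots, multi / n_slots)
```

Roughly a third of slots come up empty and a quarter fork, which is the "lottery chaos" the rest of the post contrasts against.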
Sassafras: Tickets are sampled once per epoch and leadership is precomputed.
There’s no per-slot randomness during execution; randomness enters only at epoch boundaries.
Epoch Randomness as Anti-Grinding Anchor
Let:
- (R_e) = finalized randomness from epoch (e-1) (from the beacon, bound to chain—prevents retroactive influence)
- (V) = set of active validators
At the start of epoch (e), validator (i) computes multiple ticket candidates to ensure enough winners:
ticket_j = Ring_VRF_sk_i(R_e || j) # j = counter for multiplicity
Multiplicity (e.g., ≥ 6 candidates per validator for ~100 elections, per the paper) is tuned so that the expected number of winning tickets ≈ the number of slots.
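A minimal sketch of that candidate loop, with SHA-256 standing in for the actual Bandersnatch Ring VRF. The function names and key formats here are made up for illustration; the real primitive also produces the ring proof, which this toy omits.

```python
# Sketch of epoch-start ticket candidate generation. SHA-256 is a stand-in
# for the Ring VRF output; "sk" and the byte layouts are illustrative only.
import hashlib

def ring_vrf_output(sk: bytes, x: bytes) -> bytes:
    """Toy stand-in: deterministic pseudorandom output keyed by sk."""
    return hashlib.sha256(sk + x).digest()

def make_tickets(sk: bytes, epoch_randomness: bytes, multiplicity: int):
    """Each validator derives several candidates, one per counter j,
    mirroring ticket_j = Ring_VRF_sk_i(R_e || j)."""
    return [
        ring_vrf_output(sk, epoch_randomness + j.to_bytes(4, "big"))
        for j in range(multiplicity)
    ]

R_e = b"finalized-randomness-from-epoch-e-1"
tickets = make_tickets(b"validator-secret-key", R_e, multiplicity=6)
print(len(tickets))  # 6 candidates for this validator
```

Because the input is fixed to (R_e || j), a validator can only produce its fixed set of candidates; there is nothing left to grind on.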
Ticket Structure
ticket = (y, π, commitment)
Where:
- y = pseudorandom scalar (sortable; lower y = higher priority)
- π = ring proof (verifies y came from some sk in V, without revealing which)
- commitment = binding value ensuring the ticket can later be claimed only by its owner
Deterministic Ordering
All valid tickets (T) are collected and sorted ascending by y.
The top N (N = slots) become the leader schedule:
slot_k ← k-th smallest ticket
This guarantees one primary per slot, with a deterministic epoch-wide order, provided (T) is known.
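The sort-and-assign step can be sketched like this. The ticket values are invented floats for readability; in the protocol, y is the Ring VRF output and the commitments are cryptographic, not labels.

```python
# Epoch-wide ordering: collect all valid tickets, sort ascending by the
# pseudorandom scalar y, and map the k-th smallest ticket to slot k.
N_SLOTS = 5

# (y, commitment) pairs gathered on-chain; owners stay anonymous until claim.
tickets = [(0.71, "c3"), (0.12, "c7"), (0.55, "c1"), (0.33, "c9"),
           (0.91, "c2"), (0.05, "c4"), (0.47, "c8")]

schedule = sorted(tickets, key=lambda t: t[0])[:N_SLOTS]
for slot, (y, commitment) in enumerate(schedule):
    print(f"slot {slot}: ticket y={y} (claimable via {commitment})")
```

Every node runs the same sort over the same set (T), so everyone derives the identical schedule without knowing who owns which slot.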
Why Ring VRFs? (Mechanics Breakdown)
A normal VRF reveals the signer via pk.
Ring VRFs hide the signer within a ring (R = {pk_1, …, pk_n}).
Setup: Global parameters come from an SRS (structured reference string). The ring uses the Bandersnatch curve for speed.
Evaluation by validator i: On input (x = R_e || j), compute (y = VRF_{sk_i}(x)) and produce (\pi) proving “y is correct for some sk matching a pk in R.” Proof size is constant or logarithmic in the ring size n.
Verification: Check (\pi) against (R, x, y). Accept if valid from some member, but without knowing which.
In Sassafras: the Ring VRF is used early, for anonymous ticket submission. Later, in the claim phase, a normal VRF reveals the leader’s identity.
- Anonymity window protects leader privacy just long enough for execution.
- “Relaxed” because it’s not permanent, but sufficient (>2/3 honest = progress).
Grinding attempts are blocked since (R_e) is bound to the chain.
Variance now comes from validator reliability, not protocol randomness.
This structural shift defines Sassafras: randomness in selection, determinism in execution.
1. Deterministic vs Probabilistic Production: From Lottery Chaos to Hidden Schedule
Step-by-step explanation of the change
Today (BABE):
- Every 6-second slot is an independent lottery.
- Each validator computes a VRF output → if it’s below a threshold (determined by stake weight), you’re a winner for that slot.
- Because it’s random, the number of winners per slot follows a Poisson distribution (λ ≈ 1):
- Sometimes 0 winners → empty slot → chain pauses for 6 seconds (or more if unlucky streaks happen)
- Sometimes 2+ winners → multiple blocks proposed → fork → network resolves via longest-chain rule
- Result: Block times are variable (exponential distribution around 6 s), throughput is inconsistent, latency jitter is high.
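For the λ ≈ 1 case, the Poisson pmf pins down the exact mix of outcomes described above (no simulation needed):

```python
# Exact Poisson(1) slot-outcome probabilities for the per-slot lottery.
import math

lam = 1.0
p0 = math.exp(-lam)            # empty slot: no winner
p1 = lam * math.exp(-lam)      # exactly one winner
p_multi = 1 - p0 - p1          # fork risk: two or more winners

print(round(p0, 3), round(p1, 3), round(p_multi, 3))
# prints: 0.368 0.368 0.264
```

So under BABE only about 37% of slots are "clean" single-winner slots, which is exactly the inconsistency Sassafras removes.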
With Sassafras/Safrole:
- Once per epoch (~4 hours = 2400 slots), the entire set of leaders is elected in batch.
- Validators generate tickets using ring VRFs → tickets are anonymously published → sorted on-chain → sorted list becomes the secret leader schedule for the epoch.
- Each slot gets exactly one assigned leader (unless attacked → skipped).
- Identity is hidden until leader claims the slot (revealing the ring VRF proof).
- Result: Block times very close to exactly 6 seconds under normal conditions.
Real-world analogies
- BABE: Public lottery every 6 seconds. Sometimes nobody wins → shop closed. Sometimes 3 people win → fight over who gets served first.
- Sassafras: Secret pre-booked appointment calendar for the next 4 hours. Everyone knows exactly one appointment per slot, but nobody knows who has which slot (except the current leader when it’s time).
What this means for throughput and user experience
- Fewer empty slots → chain grows more steadily → effective TPS increases even if peak TPS stays similar.
- Lower latency variance → dApps (DeFi, gaming, oracles) get predictable confirmation times.
- Fewer forks → less time wasted on fork choice → GRANDPA finality reaches depth faster.
Quick math reminder (from Part 1):
- Honest block probability ≥ 4/9 ≈ 44.4%
- Adversarial ≤ 1/3 ≈ 33.3%
Even if 1/3 of leaders are attacked/skipped, honest blocks dominate → chain grows reliably.
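One way to arrive at that 4/9 figure, as I read the paper’s argument: a slot yields a usable honest block when the leader is honest and its ticket wasn’t relayed through a malicious repeater who could leak its identity. Both fractions are the worst-case 2/3 honesty bound.

```python
# Quick arithmetic behind the 4/9 bound (my reading of the argument; the
# paper's formal proof is more careful than this back-of-envelope version).
from fractions import Fraction

honest = Fraction(2, 3)        # honest validator fraction (worst case)
unleaked = Fraction(2, 3)      # probability the ticket's repeater is honest
honest_block = honest * unleaked

adversarial = Fraction(1, 3)   # adversary controls at most 1/3 of slots

print(honest_block)            # prints: 4/9
assert honest_block > adversarial  # honest blocks strictly dominate
```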
Bottom line for validators:
You go from “pray my VRF wins this slot” → “I have a small number of known turns; I need to be online and ready exactly then”.
Workload more predictable, punctuality more important.
2. Validator Operational Impact: What Actually Changes on Your Server?
Hardware & software requirements
- Almost no change required.
- CPU (VRFs), RAM, fast SSD (state trie), good network still needed.
- Ring VRF uses Bandersnatch curve (Substrate supports via bandersnatch_vrf crate).
- Batch ticket generation once per epoch → CPU usage is bursty, not constant.
- More multicast traffic during Phase B → slight bandwidth increase at epoch transitions.
Network latency & DoS sensitivity
- Today (BABE): Timing races matter; missing your slot is costly.
- Sassafras: Timing is fixed → you have the full 6-second slot to produce once it’s your turn.
- Being marginally slow is less catastrophic.
- But an early identity leak makes you a DoS target for that slot.
Uptime & maintenance
- Today: Missing a slot is annoying but recoverable.
- Sassafras: Missing pre-assigned slot → empty block → hurts chain liveness & rewards.
Positive: Can plan maintenance outside assigned slots.
Negative: No more “quiet period” for restarts.
Reward consistency
- Today: Highly variable. Some epochs produce many blocks; others almost none.
- Sassafras: Smoother, proportional to stake across epoch (minus skips/attacks).
Smaller nominators benefit from predictable weekly earnings.
3. Liveness Under Stress: What Happens When Things Break?
Normal case (<1/3 offline)
- Chain produces almost exactly one block every 6 seconds.
Partial offline (~20–25% down)
- Sorted ticket list acts as backup queue.
- Leader offline/attacked → skip → next honest leader produces block.
- JAM Grey Paper fallback: if ticket generation fails, slots fall back to a direct series derived from the Bandersnatch keys (effectively round-robin). This is a rare mode.
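The skip behavior above can be modeled in a few lines. The offline rate and seed are arbitrary; the point is that skipped slots thin the chain without halting it.

```python
# Toy epoch model of skip-tolerant production: if a slot's pre-assigned
# leader is offline, the slot is skipped and the next leader continues.
import random

random.seed(7)
n_slots = 2400            # one epoch (~4 h of 6 s slots)
offline = 0.25            # ~25% of leaders unreachable this epoch

produced = 0
longest_gap = gap = 0
for _ in range(n_slots):
    if random.random() < offline:
        gap += 1                          # leader missing: slot skipped
        longest_gap = max(longest_gap, gap)
    else:
        produced += 1                     # next honest leader fills its slot
        gap = 0

print(produced / n_slots)   # ~0.75: chain keeps growing, just thinner
print(longest_gap)          # occasional short stalls, never a halt
```

Even at 25% offline, the longest pause is a handful of slots, not minutes, because the sorted ticket list keeps supplying the next leader.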
Network partition
- Two partitions may temporarily fork.
- When reconnected → GRANDPA chooses chain with most honest weight.
- Fixed slots → partitions more visible, but resolution same.
Targeted DoS attacks
- Malicious repeaters can leak leader identities (for at most f ≈ 1/3 of slots).
- The adversary DoSes the leaked validators → those slots go empty.
- Honest block probability ≥ 4/9 > adversarial ≤ 1/3 → chain grows.
Analogy: Queue of 10 people. First 3 blocked → 4th speaks. As long as majority free → conversation continues.
4. How it Fits into Polkadot 2.0 & JAM
Sassafras/Safrole = foundational infrastructure for Polkadot 2.0.
Elastic Scaling & Agile Coretime
- Polkadot 2.0 removes parachain slot auctions → parachains buy “coretime” on-demand or in bulk.
- Sassafras → predictable relay chain blocks → multiple parablocks reliably submitted.
- Stable 6s rhythm → coretime buyers get consistent execution guarantees.
JAM Architecture
- Safrole is the block production engine.
- Generates high-quality entropy η (from VRFs) → used by JAM components.
- Maintains validator sets ι (pending), κ (active), γ_P (next epoch keys).
- Uses Bandersnatch ring VRF → sealing keys pseudonymous.
Summary: Sassafras/Safrole = heartbeat for JAM/P2.0.
Final Thoughts
Sassafras moves Polkadot from “random, bursty, sometimes stalled” → “hidden, scheduled, steady, skip-tolerant” production.
For validators:
- More predictable workload
- Smoother rewards
- Higher uptime & DDoS importance
- Less variance → better planning
For the network:
- Higher effective throughput
- Lower latency jitter
- Stronger foundation for Polkadot 2.0 / JAM scaling
What do you think?
See you in Part 3
(maybe MEV implications or staking economics under Sassafras?).