Hi team,
I’m exploring the economics of JAM and Coretime, especially how DOT demand might scale when chains like Mythos (FIFA Rivals, Pudgy Party) move into the JAM model. I’m trying to reconcile two things:
Current Parachain Model (Relay Chain)
- A chain leases a full core at a flat rate: 96 DOT per month per core
- That’s ~1,152 DOT per year for one full-time core
- There’s no metered or usage-based scaling; it’s flat rent regardless of actual usage
JAM Model Understanding (Requesting Confirmation)
My understanding is that under JAM:
- Cores can be rented elastically by the second, not just by the month
- Chains or agents can spin up/down multiple cores based on need
- DOT is paid per unit of Coretime consumed, and a portion (e.g., 20%) is burned
- This creates usage-based, scalable demand for DOT: more apps, agents, and users means more DOT spent and burned (a tiny sketch of how I'm modeling this is below)
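To make sure I'm reading the elastic model correctly, here's a minimal sketch of how I'm thinking about usage-based billing. The per-core-second price, the helper function, and the 20% burn share are my own placeholder assumptions, not confirmed protocol parameters or an on-chain API:

```python
# Back-of-envelope sketch of usage-based Coretime spend (not an on-chain API).
# The per-core-second price and 20% burn share are placeholder assumptions.

def coretime_cost_dot(core_seconds: float,
                      price_dot_per_core_second: float,
                      burn_share: float = 0.20) -> tuple[float, float]:
    """Return (total DOT paid, DOT burned) for a given amount of core-seconds."""
    paid = core_seconds * price_dot_per_core_second
    burned = paid * burn_share
    return paid, burned

# Example: an agent spins up 3 cores for 10 minutes at an assumed
# 0.001 DOT per core-second.
paid, burned = coretime_cost_dot(core_seconds=3 * 600,
                                 price_dot_per_core_second=0.001)
print(f"paid: {paid:.2f} DOT, burned: {burned:.2f} DOT")
```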
Projection Example (I’d Like to Verify This Logic)
If FIFA Rivals brings 5 million users, and each user on average triggers just $10 of Coretime compute per year, that’s:
- $50 million in total annual demand
- At $5/DOT → 10 million DOT paid for Coretime
- If 20% is burned → 2 million DOT burned annually (same math in the short script below)
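For completeness, here is the same back-of-envelope calculation as a small script. Every input is just the assumption stated above (5M users, $10/user/year, $5/DOT, 20% burn); nothing here is a confirmed parameter:

```python
# Sanity-checking the projection above with assumed inputs only.
users = 5_000_000
usd_per_user_per_year = 10
dot_price_usd = 5
burn_share = 0.20  # assumed burn share, see question below

annual_usd_demand = users * usd_per_user_per_year           # $50,000,000
dot_paid_for_coretime = annual_usd_demand / dot_price_usd   # 10,000,000 DOT
dot_burned_annually = dot_paid_for_coretime * burn_share    # 2,000,000 DOT

print(f"Annual Coretime demand: ${annual_usd_demand:,.0f}")
print(f"DOT paid for Coretime:  {dot_paid_for_coretime:,.0f} DOT")
print(f"DOT burned annually:    {dot_burned_annually:,.0f} DOT")
```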
Key Questions
- Is this a valid way to model JAM-era demand? (i.e., based on user-driven compute spend rather than static core rents)
- Is the 20% burn share still accurate in current economic discussions?
- Will Mythos and similar chains eventually transition to this dynamic Coretime model vs. sticking to fixed slots?
Thanks in advance — just want to make sure I’m not blending the economics of the current relay model with what JAM will introduce.
Best regards,
DotSama