Last year, when I asked the Kusama community what our take on JAM should be, whether we should try to share a home with Polkadot’s JAM deployment or be “lightweight and independent”, the community agreed on the latter option, a great choice IMO! Kusama has been underutilized for many years as a glorified testnet, but there’s a strong desire to realize its full potential as an independent hub for innovation and the leading edge of Web3 and the cypherpunk movement, a spirit well aligned with W3F’s Kusama Vision.
The main implication of this wish for change is that Kusama’s infrastructure would need to be reduced to 32 cores (along with a corresponding reduction in the number of validators), the estimated size required for a JAM chain to produce faster, 1-second block times, and an opportunity to bring new use cases like real-time applications to the ecosystem.
JAM is still under development and the reduction wouldn’t be needed for some time. However, I see an opportunity to propose this reduction earlier, and in a phased way, not only to prepare for what’s coming but as a response to market conditions and to the reality that there is virtually zero coretime demand in the ecosystem. We need to scale down: validators can’t pay their bills, and there is no point sustaining a huge infrastructure that no one (but the Virto team?) is using.
A gradual reduction
Suddenly cropping the supply of cores and “firing” most validators would create a lot of unnecessary friction. I propose a gradual reduction over the course of 6 months, going from our current setup of 1000 validators (700 para-validators) and 140 cores to a final setup of 24 cores backed by 5 para-validators each, for a grand total of 120 validators. Later, once we prove there is demand for more than 24 cores, we can always scale back up to 32 cores, and even further if the technology allows it.
The referendum would be a single root proposal with a batch that schedules the following adjustments:
First: set the validator count on the staking system to 700, matching the current number of para-validators.
1 month later: set cores to 120, validators to 600
2 months later: set cores to 100, validators to 500
3 months later: set cores to 80, validators to 400
4 months later: set cores to 60, validators to 300
5 months later: set cores to 40, validators to 200
6 months later: set cores to 24, validators to 120
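The schedule above can be sketched as a small script that computes the target block height for each step. This is illustrative only: it assumes 6-second blocks and 30-day months, and the actual referendum would use the scheduler pallet to dispatch the equivalent calls at those heights.

```python
# Sketch of the phased reduction schedule (illustrative only).
# Assumes 6-second block times and 30-day months; the on-chain
# scheduler calls would target block heights computed the same way.

BLOCKS_PER_MONTH = 30 * 24 * 3600 // 6  # 432_000 blocks at 6 s/block

STEPS = [  # (months from enactment, cores, validators)
    (1, 120, 600),
    (2, 100, 500),
    (3, 80, 400),
    (4, 60, 300),
    (5, 40, 200),
    (6, 24, 120),
]

def schedule(enactment_block: int) -> list[tuple[int, int, int]]:
    """Return (target_block, cores, validators) for each step."""
    return [
        (enactment_block + months * BLOCKS_PER_MONTH, cores, validators)
        for months, cores, validators in STEPS
    ]

for block, cores, validators in schedule(enactment_block=0):
    print(f"block {block:>9}: cores={cores:>3}, validators={validators}")
```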
Next steps and thoughts
I had been planning to propose this reduction since last year without asking many questions, but I believed taking time to get feedback from the different affected parties was the better approach. It has now taken even longer than it should have, as I was too busy (and lazy) to run the necessary tests, but the time has come! I would like to hear thoughts from the community before submitting this referendum; there might still be topics to consider, but the necessary calls are pretty much figured out.
I’m also curious to hear ideas about what comes next for validators. This change will reduce decentralization and could concentrate operations in the hands of a few big players like centralized exchanges… An on-chain decentralized nodes program, anyone?
Glad to see this as a concrete proposal. The phased approach makes this workable — it gives validators time to plan and lets the community observe each step before the next one lands. A single abrupt cut would create unnecessary opposition from people who might otherwise support the direction.
The economics reinforce the case. Even under a reduced inflation model, per-validator rewards at 500 validators are roughly 13% above current levels, and at your final target of 120 they’re close to 5x. The “validators can’t pay their bills” problem gets better at every step of the reduction, not worse. That’s the argument that should win over the validators who might initially see this as a threat.
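The per-validator arithmetic behind this is simple: rewards scale as pool divided by validator count, so shrinking the set raises individual rewards even if the total pool is cut. The sketch below uses a hypothetical pool factor of 0.6 (roughly a 40% inflation cut) purely to illustrate the shape of the effect; it is not a figure from the post.

```python
# Illustrative arithmetic only: per-validator reward = pool / count,
# so a smaller set pays each validator more even under a smaller pool.
# The 0.6 pool factor is a hypothetical ~40% inflation reduction,
# not a parameter from the proposal.

CURRENT_VALIDATORS = 1000

def reward_multiplier(pool_factor: float, validators: int) -> float:
    """Per-validator reward relative to today (pool=1.0, 1000 validators)."""
    return pool_factor * CURRENT_VALIDATORS / validators

for n in (600, 500, 300, 120):
    print(f"{n} validators: {reward_multiplier(0.6, n):.2f}x current rewards")
```

Under this hypothetical pool, 120 validators land at 5x current per-validator rewards, in line with the rough figures above; the exact multipliers depend on whatever inflation parameters are actually chosen.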
Starting at 24 cores rather than the 32 approved in WFC 573 makes sense given current demand. No point provisioning for capacity nobody is using. Scaling back up to 32 — or beyond — when activity justifies it is a more honest approach than maintaining infrastructure on speculation.
On the decentralization question — this is where it gets hard. At 120 validators, the barrier to entry rises and the set could consolidate around exchanges and large holders who can absorb the operational costs. Something like an on-chain nodes programme that factors in entity diversity or geographic distribution could help, but the design matters a lot — poorly structured incentives would just create a new set of problems. Curious whether others in the validator community have thoughts on what that would look like.
The reduction schedule and the broader economic picture are connected. Fewer validators means lower security costs, which opens the door to adjusting inflation parameters without hurting anyone’s bottom line. The burn-based tokenomics work in this forum was designed with exactly this sequencing in mind — cost side first, supply side second. Seeing the cost side move forward matters.
For efficiency’s sake, it makes sense to reduce the core count. However, from a security standpoint, a reduction in the validator count directly reduces the security of the network.
I know our favorite Canary isn’t doing much these days, but it strikes me as counterintuitive to risk established security by firing 80% of the validators.
From the perspective of a prospective parachain customer, I’d consider re-evaluating the parachain registration model too. Both Polkadot’s and Kusama’s real market offering is parachain validation, so the proposal to reduce the validator set and cores should also take the cost to register a parachain into consideration. Right now, Polkadot is cheaper than Kusama to register on, which seems backwards, and it has me thinking about the overall supply and demand not just for coretime but also for registration.
Registering a parachain is a precursor for coretime demand. You can reduce the total amount of coretime available to current parachains, or you can lower the financial barrier to onboard new parachains to use the available coretime. Or both?
This registration cost is a static value denominated in the native token, so it fluctuates with the token’s market price. If there is a limited number of parachains the relay chain can handle, then this value should adjust based on demand. It seems we have been relying on the market price of the native token to do this work, but that price is tied to many other things in the ecosystem as well.
We know there is low coretime demand on both networks; that’s because the chains that are currently registered are not utilizing it. So perhaps we should make it easier for new parachains to deploy. In the same way that agile coretime opened up the old slot auction model, I wonder whether something along the same lines could be applied to parachain registration. We can find a mechanism to make more sense of this; just free-flowing thoughts right now.
Again, there’s a limit to how many chains a relay chain can service, and I don’t think a static registration value is the answer. It’s implied that the token’s value would account for market demand for parachains, but it’s tied to so many other things in the ecosystem that it has a weaker and weaker correlation to parachain demand. A dynamic price model, similar to how coretime is structured, seems obvious at first glance, but I’m open to counterarguments. In addition, Kusama could define a registration with a limited window of time, rather than a one-time registration that only ends when the chain’s controllers unregister from the relay chain. This window could be a set amount of time or a dynamic value the parachain chooses, for example (current monthly cost × number of months). Parachains could then re-register on Kusama based on a multitude of factors, or consider moving to Polkadot for a longer-term home.
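The dynamic price idea can be sketched as a simple feedback rule, loosely inspired by agile coretime’s price adaptation: raise the registration price when occupancy exceeds a target ratio, lower it otherwise. None of these parameters exist on-chain today; the function, the 0.8 target, and the 10% step are all hypothetical and only illustrate the mechanism.

```python
# Hypothetical demand-responsive registration pricing, loosely
# inspired by agile coretime's price adaptation. All parameters
# here (target ratio, step size) are illustrative, not on-chain.

def next_price(price: float, registered: int, capacity: int,
               target_ratio: float = 0.8, step: float = 0.1) -> float:
    """Raise the price when occupancy exceeds the target, lower it otherwise."""
    occupancy = registered / capacity
    if occupancy > target_ratio:
        return price * (1 + step)
    return max(price * (1 - step), 0.0)

# Example: a 24-core relay chain; the price drifts down while demand
# is low and climbs back once registrations approach capacity.
price = 100.0
for registered in (5, 5, 10, 22, 23):
    price = next_price(price, registered, capacity=24)
    print(f"{registered:>2} registered -> next price {price:.2f} KSM")
```

A time-windowed registration, as suggested above, would pair naturally with this: each renewal simply pays the then-current dynamic price.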
Open to hearing why I’m wrong, or to drafting a proposal if the community thinks this is a worthwhile idea to keep digging into and discussing.
When will you submit the proposal? The longer you wait, the greater the competitive advantage for DOT (and the competitive disadvantage for KSM), because the parameter change for the latter will be implemented in a few weeks.