Crossposted from Kusama ref 469
I thought I’d weigh in here after talking about it a bit on the Fellowship call yesterday.
Like I mentioned on the call, I 100% agree that the current pricing model falls short at higher core counts, and I have been exploring options for a next iteration of the pricing adapter that takes more information into account and aligns more closely with one of the stated goals (lowering the barrier to entry), so watch this space. However, I don’t think minimum pricing is the way to solve the issues described here.
I would like to separate the cost of production into a different discussion and not cover it here. It was a useful starting point for estimating the starting price when we had no historical data on demand-driven pricing, but I don’t think it has any bearing on the analysis of the current problem.
I’d like to rephrase the problem statement as follows:
The current price adapter falls short at higher core counts: it fails to produce a target price for the next sale that represents market sentiment over time (rather than one-off swings due to temporary market conditions). The effect is that the cost spread is compressed, even with a large lead-in factor, arguably letting cores fall to a “fastest finger first” market and becoming a non-deterministic barrier to teams who want to start a project on Kusama.
By non-deterministic I mean that even if you know you’re willing to pay 10,000 KSM (insert ridiculously high number here) for a core, if the lead-in starts much lower you have no way to show how much you value it: you’re forced to offer the ceiling price, and in the absence of secondary markets you can’t outbid people who value a core less than you do, so somebody could bulk-buy all the cores and beat you to it.
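To make that concrete, here’s a minimal sketch of a lead-in price curve (assuming a simple linear decay from `target * leadin_factor` down to `target`; the actual curve and parameters used by the broker pallet are configurable and may differ). The point is that the opening price is the only “bid” a high-value buyer can make:

```rust
/// Minimal sketch of a Dutch-auction lead-in, NOT the broker pallet's
/// actual implementation: the price opens at `target * factor` and
/// decays linearly to `target` over `leadin_length` blocks.
fn sale_price(target: u128, factor: u128, elapsed: u32, leadin_length: u32) -> u128 {
    let start = target.saturating_mul(factor);
    let progress = u128::from(elapsed.min(leadin_length));
    let length = u128::from(leadin_length.max(1));
    start - (start - target) * progress / length
}

fn main() {
    // Illustrative numbers only. With a target of 5 and factor 2, the
    // ceiling is 10: a buyer valuing a core at 10,000 has no way to
    // express that, and can only match everyone else's 10.
    assert_eq!(sale_price(5, 2, 0, 100), 10); // sale opens at the ceiling
    assert_eq!(sale_price(5, 2, 100, 100), 5); // decays to the target
}
```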
What I’d suggest is to use a part of the configuration that has been overlooked: `ideal_bulk_proportion`. From the docs:
> The proportion of cores available for sale which should be sold.
>
> If more cores are sold than this, then further sales will no longer be considered in determining the sellout price. In other words the sellout price will be the last price paid, without going over this limit.

```rust
pub ideal_bulk_proportion: Perbill,
```
So we can establish a rough heuristic: when we add cores and are not aware of any increase in demand, set this value to the ratio of the old core count to the new core count, maintaining the previous price-finding behaviour.
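A minimal sketch of that heuristic, assuming `sp_arithmetic`’s `Perbill` (the helper name is mine, not part of the pallet):

```rust
use sp_arithmetic::Perbill;

/// Hypothetical helper: when raising the core count with no known
/// increase in demand, scale ideal_bulk_proportion down so that the
/// "ideal" number of cores sold stays at the old core count.
fn scaled_ideal_bulk_proportion(old_cores: u32, new_cores: u32) -> Perbill {
    Perbill::from_rational(old_cores, new_cores)
}

fn main() {
    // Going from ~60 to 100 cores suggests dropping from 100% to 60%.
    assert_eq!(scaled_ideal_bulk_proportion(60, 100), Perbill::from_percent(60));
}
```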
If demand does then increase, we end up with the opposite problem, runaway upward pricing, but that can easily be corrected with a referendum to update the configuration.
Applying this heuristic with the benefit of hindsight: when we increased the core count from ~60 to 100 cores, we should also have decreased `ideal_bulk_proportion` from 100% to 60% as a naive equivalent. The increase was a useful test for the pricing adapter, but I think testing the heuristic I’ve proposed would now be good for Polkadot while also addressing some of the issues raised here. It requires no code changes, just a referendum to change this value in the broker configuration.
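For concreteness, here’s roughly what such a referendum would dispatch, assuming `pallet_broker`’s `configure` call. Every value other than `ideal_bulk_proportion` below is a placeholder; a real proposal would copy them from the current on-chain broker configuration:

```rust
use pallet_broker::ConfigRecord;
use sp_arithmetic::Perbill;

fn main() {
    // Placeholder values throughout, except the one field we change;
    // copy the rest from the live on-chain broker configuration.
    let new_config: ConfigRecord<u32, u32> = ConfigRecord {
        advance_notice: 10,
        interlude_length: 50,
        leadin_length: 100,
        region_length: 5040,
        ideal_bulk_proportion: Perbill::from_percent(60), // the one real change
        limit_cores_offered: None,
        renewal_bump: Perbill::from_percent(3),
        contribution_timeout: 5040,
    };
    // Dispatched by governance as `Broker::configure(origin, new_config)`.
    let _ = new_config;
}
```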
Since we have now had several sales with this in place, we could consider a more brutal cut for a few sales, decreasing `ideal_bulk_proportion` below 60% to speed up the return to what governance deems an equilibrium point; a future referendum could then adjust it to whatever is seen as a stable price position that balances our aims.
To be clear, I still consider this a temporary adjustment until a rework of the pricing model, and the problem could be largely mitigated by trustless secondary markets. It could also be argued that Kusama should be allowed to be Kusama, but I think this is a useful test ahead of increasing the core count on Polkadot, while also addressing some of the complaints raised here.