Polkadot doesn’t scale infinitely. Like all systems, it is bounded by the limitations of decentralized consensus: economic security + computational & coordination costs. The argument for Polkadot is rather that it is a system of computational & economic leverage which allows for a) more computation to be done by splitting work across nodes and b) the benefits of high economic security to be attained for a relatively low cost.
As such it’s natural to have conversations about increasing the scalability & leverage which Polkadot’s tech can provide. Let’s not continue down this path too far, for the sake of remaining on topic.
Agreed. No matter what technology Substrate implements and Polkadot utilizes (do I have the relation correct?), this question will always arise for a Parachain - until the question is answered (which by definition means the question goes away - see below). The discussion above covers important things that can be done to kick the can further down the road - also valid and important to do that. But the question is still there.
My (poorly made) point is that things are not this bleak/immature.
Whatever the state of play in Substrate/Polkadot when this question arises for you, there is a mature (?) pathway that resolves/removes this question from the table:
This does not remove scaling issues. But it does answer the question. To appreciate this, note the scalability issue is now:
A Relaychain reaches the limits of throughput. What options does it have?
As @rphmeier points out elsewhere, the resources at your disposal are critical, and will play a large role in determining which approach you take at the outset of your project.
I’d add: so is how easily you can re-architect your application from a Parachain to a Relaychain … while your organization is under all the stresses and resource constraints of dealing with accelerating growth. Think changing a website’s architecture while being slashdotted, but with two orders of magnitude more complexity.
One important thing to discuss about “elastic scaling” of parachains is that the core architecture should allow parachains to scale to the rate of parachain block production. Parallelizing parachain block production by e.g. having blocks explicitly “lock” certain parts of the state trie is a very viable way to massively increase the throughput of parachains which can acquire multiple execution cores.
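As an illustration of the state-locking idea (the names and structure here are my own sketch, not anything from the Polkadot codebase): blocks could declare which regions of the state trie they read and write, and two blocks may be executed in parallel on separate cores only if neither writes a region the other touches. A minimal Rust sketch:

```rust
use std::collections::HashSet;

/// Hypothetical declaration of the state-trie regions a block touches.
struct BlockStateLocks {
    reads: HashSet<&'static str>,
    writes: HashSet<&'static str>,
}

/// Two blocks can safely occupy separate execution cores iff neither
/// writes a region the other reads or writes.
fn can_parallelize(a: &BlockStateLocks, b: &BlockStateLocks) -> bool {
    a.writes.is_disjoint(&b.writes)
        && a.writes.is_disjoint(&b.reads)
        && b.writes.is_disjoint(&a.reads)
}

fn main() {
    let block_a = BlockStateLocks {
        reads: HashSet::from(["balances"]),
        writes: HashSet::from(["dex_pool_1"]),
    };
    let block_b = BlockStateLocks {
        reads: HashSet::from(["balances"]),
        writes: HashSet::from(["dex_pool_2"]),
    };
    // Disjoint write sets, shared read-only state: parallelizable.
    println!("{}", can_parallelize(&block_a, &block_b)); // prints "true"
}
```

The point of the sketch is just that the conflict check is cheap and purely declarative, so a collator set could use it to pack non-conflicting blocks onto however many cores the parachain has acquired.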
This is a very good idea imo. In general our blockspace markets could benefit substantially from cross-time-frame price comparisons. Ideally we’d maintain the following:
single block cost > 1 hour lease cost/block > 1 day lease cost/block > … > 2 year lease cost/block
In upholding this inequality, we would need to be flexible in which cores are allocated to which time frames. We couldn’t have a fixed number of two-year parachain cores, for instance, without introducing market inefficiency.
The exception would be cores allocated to common good chains, as we’re willing to stomach some inefficiency to make sure the basic services they offer are provided.
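The inequality is easy to state as a mechanical invariant, so an allocator could sanity-check its price schedule with something like the following (the numbers are made-up placeholders, not real market data):

```rust
/// Check that per-block cost strictly decreases as the commitment
/// lengthens, i.e. single block > 1-hour lease > 1-day lease > ...
fn is_monotonically_discounted(per_block_cost: &[f64]) -> bool {
    per_block_cost.windows(2).all(|w| w[0] > w[1])
}

fn main() {
    // Placeholder per-block prices for: single block, 1-hour lease,
    // 1-day lease, 1-month lease, 2-year lease.
    let costs = [10.0, 8.0, 5.0, 3.0, 1.5];
    println!("{}", is_monotonically_discounted(&costs)); // prints "true"
}
```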
Here’s a source of difficulty for price comparisons. What if we want short time frames paid for by fees and long time frames paid by lockup periods? Any ideas as to how we’d go about comparing these different pricing units?
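Purely to illustrate the unit-mismatch problem (this is an assumption of mine, not a proposed mechanism): one way to make the two units comparable is to convert a lockup into an opportunity cost, i.e. the yield the locked tokens forgo over the lease, spread across the blocks in the lease. Every name and parameter below is hypothetical:

```rust
/// Effective per-block cost of a lockup, priced as opportunity cost.
/// `annual_yield` (the forgone staking return) is an assumed input,
/// and choosing it well is exactly where this comparison gets hard.
fn lockup_cost_per_block(
    locked_tokens: f64,
    annual_yield: f64,
    lease_years: f64,
    blocks_in_lease: f64,
) -> f64 {
    locked_tokens * annual_yield * lease_years / blocks_in_lease
}

fn main() {
    // 100_000 tokens locked for 2 years at an assumed 10% forgone yield,
    // over ~10.5M six-second blocks: an effective fee-like per-block cost
    // that can be compared against fee-paid short-term blockspace.
    let per_block = lockup_cost_per_block(100_000.0, 0.10, 2.0, 10_512_000.0);
    println!("{per_block:.6}");
}
```

Even with this normalization the comparison stays fuzzy, since the "right" yield assumption moves with market conditions, which may be part of the answer to the question above rather than a solution to it.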