The Elastic Scaling launch is right around the corner, targeting Q2. There are just a few final pieces of the puzzle that need to come together:
- The Fellowship Runtime based on Polkadot SDK 2412-1 is being prepared. Once enacted, we are just one referendum away from enabling the RFC103 security feature.
- Polkadot Stable release 2503 is coming at the end of March. It contains the slot-based collator technology that supports elastic scaling, as well as dynamic async backing parameters for the validators.
Elastic Scaling is the last missing piece of Polkadot 2.0 and, at the same time, a game changer. In a nutshell, it significantly improves the vertical scalability of rollups, increasing throughput and lowering latency.
So, what can elastic scaling do for you?
It simply enables a single rollup to leverage the multi-core architecture of Polkadot. The rollup (parachain) can adjust the number of cores it uses on the fly via the Agile Coretime interface. L2s can increase their throughput and decrease latency to match the load or the desired end-user experience.
The picture below shows how it works.
There is just one rule: the rollup still creates a chain of blocks that get validated in parallel on the relay chain. The faster your rollup can create these blocks, the more cores you can use on the relay chain every 6s. Remember, each core has 2s of execution and 5MB of availability (soon 10MB).
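To make the arithmetic concrete, here is a minimal Rust sketch using only the rounded figures from this post (it is not an SDK API, just back-of-the-envelope math). It computes the theoretical budgets for a given number of cores: total execution time per 6s relay chain block, approximate DA bandwidth, and the shortest block time that many cores can sustain.

```rust
/// Rounded, illustrative per-core figures from this post, per 6s relay chain block.
const RELAY_BLOCK_SECS: f64 = 6.0;
const EXECUTION_SECS_PER_CORE: f64 = 2.0;
const DA_MB_PER_SEC_PER_CORE: f64 = 1.0; // ~5MB (soon 10MB) per core per 6s, rounded

/// Theoretical maximum budgets for a rollup using `cores` cores.
/// In practice, shorter block times leave part of the execution budget
/// unused (see the Trade-offs section below).
fn budget(cores: u32) -> (f64, f64, f64) {
    let execution_secs = cores as f64 * EXECUTION_SECS_PER_CORE; // per 6s window
    let da_mb_per_sec = cores as f64 * DA_MB_PER_SEC_PER_CORE;
    let min_block_time_secs = RELAY_BLOCK_SECS / cores as f64; // one block per core
    (execution_secs, da_mb_per_sec, min_block_time_secs)
}

fn main() {
    for cores in [1u32, 3, 6, 12] {
        let (exec, da, block_time) = budget(cores);
        println!("{cores:>2} cores -> up to {exec}s execution / 6s, ~{da} MB/s DA, {block_time}s blocks");
    }
}
```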
When do you need it?
Depending on what you are building, you will care about getting more of the following three things:
- compute (cpu weight)
- bandwidth (proof size)
- latency (block time)
I just want very low latency
When latency is all you care about and you are happy with using under 25% of the compute a core provides, we've got you covered.
12 cores
This enables very fast transaction confirmations with 500ms blocks and up to 12 MB/s of DA bandwidth. Fancy!
I want high throughput (TPS) and lower latency
If you are building a CPU-intensive application, then your rollup needs to maximise the compute usage of the cores while also achieving lower latency.
3 cores
You get a good balance of latency and throughput. This gives the rollup up to 6s of execution, 3MB/s of DA bandwidth, and a neat block time of just 2 seconds.
I want decent throughput but more bandwidth
It might happen that your application doesn't really need much compute, let's say under 50% of what a core gives, but at the same time it is a bandwidth guzzler.
6 cores
This would give you up to 6s of compute and 6MB/s of DA bandwidth, and you also get 1-second blocks for free.
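On the runtime side, here is a hedged sketch of how a Cumulus-based parachain might express one of these configurations, taking the 3-core, 2s-block case as an example. The constant names follow the Polkadot SDK parachain template, but the exact names and values can differ between releases, so treat this as an illustration rather than a drop-in config.

```rust
// Illustrative values for the 3-core, 2s-block configuration above, loosely
// following the constant names used in the Polkadot SDK parachain template.
// A sketch, not a drop-in config: check the SDK docs for your release.

/// Target parachain block time, in milliseconds (2s blocks).
pub const MILLISECS_PER_BLOCK: u64 = 2000;

/// Relay chain slot duration, in milliseconds.
pub const RELAY_CHAIN_SLOT_DURATION_MILLIS: u32 = 6000;

/// Parachain blocks produced per relay chain block
/// (three 2s blocks per 6s relay chain block => 3 cores).
pub const BLOCK_PROCESSING_VELOCITY: u32 = 3;

/// Parachain blocks that may be built but not yet included on the relay
/// chain at any time. Must accommodate the chosen velocity; the value here
/// is only an example.
pub const UNINCLUDED_SEGMENT_CAPACITY: u32 = 6;

// In the template these constants are wired into the Aura consensus hook,
// roughly like this:
//
// type ConsensusHook = cumulus_pallet_aura_ext::FixedVelocityConsensusHook<
//     Runtime,
//     RELAY_CHAIN_SLOT_DURATION_MILLIS,
//     BLOCK_PROCESSING_VELOCITY,
//     UNINCLUDED_SEGMENT_CAPACITY,
// >;
```

The slot-based collator mentioned above is what drives this block cadence on the node side.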
Multiple blocks per PoV
Using 12 cores just to get 500ms blocks might not be the most efficient use of resources long term, but it works nicely until something better lands.
Something better is coming: it will allow up to 4 parachain blocks (or even 8, for 250ms blocks) to be included in a single PoV. It will then be possible to reduce the number of required cores to 3 while maintaining the 500ms latency and maxing out the compute usage of each core. I do not have a timeline for this one and cannot promise anything, but I know that @bkchr is working on it.
For an even better picture of Polkadot's scalability story, I recommend reading this blog post.
Trade-offs
It is always the case that we need to trade off compute for latency. Lower latency means more frequent blocks, and this adds overhead, reducing the amount of compute that can be performed in 6 seconds.
At the same time, the amount of compute that can be used per core depends on the network latency between your collators and on their authoring speed. Faster collators allow you to use the relay chain compute resources to the fullest.
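As a toy illustration of this trade-off (the per-block overhead figure below is a made-up assumption, not a measured number): with some fixed authoring and import cost per block, more blocks per 6s window means more of that window spent on overhead rather than on useful compute.

```rust
// Toy model of the compute-vs-latency trade-off. The 100ms per-block overhead
// is an assumed, illustrative figure, not a measured Polkadot SDK number.
const WINDOW_SECS: f64 = 6.0;
const OVERHEAD_PER_BLOCK_SECS: f64 = 0.1; // assumption, for illustration only

/// Wall-clock time left for actual execution in a 6s window, given that the
/// collator authors blocks sequentially and pays a fixed overhead per block.
fn usable_compute(block_time_secs: f64) -> f64 {
    let blocks_per_window = WINDOW_SECS / block_time_secs;
    WINDOW_SECS - blocks_per_window * OVERHEAD_PER_BLOCK_SECS
}

fn main() {
    for block_time in [6.0, 2.0, 1.0, 0.5] {
        println!(
            "{block_time}s blocks -> ~{:.1}s of usable compute per 6s window",
            usable_compute(block_time)
        );
    }
}
```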
At Parity we are working hard to streamline and optimise block production to enable maximum core resource usage for collators running reference hardware. Ideas, discussions, and even a PoC exist. Read more about it in this ticket: Elastic Scaling: Streamlined Block Production · Issue #5190 · paritytech/polkadot-sdk · GitHub
How does all of this sound? Are you going to use elastic scaling?
I would love to hear some feedback and answer questions in this thread.