Elastic Scaling - wen 500ms blocks

The Elastic Scaling launch is right around the Q2 corner. There are just a few final pieces of the puzzle that need to come together:

  • The Fellowship Runtime based on Polkadot SDK 2412-1 is being prepared. Once it is enacted, we are just one referendum away from enabling the RFC103 security feature.
  • Polkadot stable release 2503 is coming at the end of March. It contains the slot-based collator technology that supports elastic scaling, as well as dynamic async backing parameters for the validators.

Elastic Scaling is the last missing piece of Polkadot 2.0 and, at the same time, a game changer. In a nutshell, it significantly improves the vertical scalability of rollups, increasing throughput and lowering latency.

So, what can elastic scaling do for you?

It simply enables individual rollups to leverage the multi-core architecture of Polkadot. The rollup (parachain) can adjust the number of cores it uses on the fly via the Agile Coretime interface. L2s can increase their throughput and decrease their latency to match the load or the expected end-user experience.

The picture below shows how it works.

There is just one rule: the rollup still creates a chain of blocks, which get validated in parallel on the relay chain. The faster your rollup can create these blocks, the more cores you can use on the relay chain every 6s. Remember, each core provides 2s of execution and 5MB of availability (soon 10MB).
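To make this rule concrete, here is a back-of-the-envelope calculator (my own illustration, not part of the SDK) that derives block time, execution budget and raw DA bandwidth from the core count, using the figures above: a 6s relay chain block, and per core 2s of execution plus a 5MB PoV. Note the raw bandwidth figures come out slightly lower than the rounded numbers quoted in this post.

```rust
// Back-of-the-envelope numbers for elastic scaling, based on the figures
// above: a 6s relay chain block, and per core 2s of execution plus a
// 5 MB PoV (soon 10 MB). Illustrative only, not part of the SDK.
const RELAY_BLOCK_MS: u64 = 6_000;
const EXECUTION_PER_CORE_MS: u64 = 2_000;
const POV_PER_CORE_MB: f64 = 5.0;

/// Parachain block time when producing one block per core.
fn block_time_ms(cores: u64) -> u64 {
    RELAY_BLOCK_MS / cores
}

/// Total execution budget per 6s relay chain block.
fn execution_per_relay_block_ms(cores: u64) -> u64 {
    cores * EXECUTION_PER_CORE_MS
}

/// Raw DA bandwidth in MB/s (the post rounds these figures).
fn da_bandwidth_mb_s(cores: u64) -> f64 {
    cores as f64 * POV_PER_CORE_MB / (RELAY_BLOCK_MS as f64 / 1_000.0)
}

fn main() {
    for cores in [1u64, 3, 6, 12] {
        println!(
            "{cores:>2} cores: {} ms blocks, {} ms execution per 6s, {:.1} MB/s DA",
            block_time_ms(cores),
            execution_per_relay_block_ms(cores),
            da_bandwidth_mb_s(cores),
        );
    }
}
```

For example, 12 cores gives 6000/12 = 500ms blocks and 24s of total execution per relay chain block.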

When do you need it?

Depending on what you are building, you will care about getting more of the following three things:

  • compute (cpu weight)
  • bandwidth (proof size)
  • latency (block time)

I just want very low latency

When latency is all that you care about and you are happy with using under 25% of the compute a core provides, we've got you covered.

12 cores

This enables very fast transaction confirmations with 500ms blocks and up to 12 MB/s of DA bandwidth. Fancy!

I want high throughput (TPS) and lower latency

If you are building a CPU-intensive application, then your rollup needs to maximise the compute usage of its cores while also achieving lower latency.

3 cores

You get a good balance of latency and throughput. This gives the rollup up to 6s of execution, 3MB/s of DA bandwidth and a neat block time of just 2 seconds.

I want decent throughput but more bandwidth

It might happen that your application doesn’t really need much compute, let’s say under 50% of what a core provides, but at the same time it is a bandwidth guzzler.

6 cores

This would give you up to 6s of compute and 6MB/s of DA bandwidth, and you also get 1-second blocks for free.

Multiple blocks per PoV

Using 12 cores just to get 500ms blocks might not be the best idea in terms of long-term resource efficiency, but it works nicely until we make it more efficient.

Something better is coming: it will allow up to 4 parachain blocks (or even 8, for 250ms blocks) to be included in a single PoV. It will then be possible to reduce the number of required cores to 3 while maintaining the 500ms latency and maxing out the compute usage of each core. I do not have a timeline for this one and cannot promise anything, but I know that @bkchr is doing it :slight_smile:
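The core-count arithmetic behind that claim can be sketched in a few lines (my own illustration of the numbers quoted above):

```rust
// The arithmetic behind the claim above: 500 ms blocks require 12
// parachain blocks per 6s relay chain block; bundling up to 4 of them
// into a single PoV cuts the required cores from 12 down to 3.
fn cores_needed(relay_block_ms: u64, block_time_ms: u64, blocks_per_pov: u64) -> u64 {
    (relay_block_ms / block_time_ms) / blocks_per_pov
}

fn main() {
    println!("one block per PoV:   {} cores", cores_needed(6_000, 500, 1));
    println!("four blocks per PoV: {} cores", cores_needed(6_000, 500, 4));
}
```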

For an even better picture of the scalability story of Polkadot, I recommend reading this blog post.

Trade-offs

It is always the case that we need to trade off compute for latency. Lower latency means more frequent blocks, and this adds overhead, reducing the amount of compute that can be performed in 6 seconds.
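As a rough illustration of this trade-off (the 50ms fixed per-block overhead below is an assumed figure for the sketch, not a measured value): if every block carries some fixed authoring and import cost, shorter block times pay that cost more often, leaving a smaller fraction of the execution budget for actual transactions.

```rust
// Illustration of the latency/compute trade-off. The 50 ms fixed
// per-block overhead is an assumed figure for this sketch, not a
// measured value.
const OVERHEAD_PER_BLOCK_MS: f64 = 50.0;

/// Fraction of a block's execution budget lost to fixed per-block work.
fn overhead_fraction(block_budget_ms: f64) -> f64 {
    OVERHEAD_PER_BLOCK_MS / block_budget_ms
}

fn main() {
    // Shorter blocks pay the fixed cost more often: under this
    // assumption a 2s block loses 2.5% of its budget, a 500ms block 10%.
    println!("2s blocks:    {:.1}% overhead", 100.0 * overhead_fraction(2_000.0));
    println!("500ms blocks: {:.1}% overhead", 100.0 * overhead_fraction(500.0));
}
```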

At the same time, the amount of compute that can be used per core depends on the network latency between your collators and on their authoring speed. Faster collators allow you to use the relay chain's compute resources to the fullest.

At Parity we are working hard to streamline and optimise block production to enable maximum core resource usage for collators running reference hardware. Ideas, discussions and even a PoC exist. Read more about it in this ticket: Elastic Scaling: Streamlined Block Production · Issue #5190 · paritytech/polkadot-sdk · GitHub

How does all of this sound? Are you going to use elastic scaling?

I would love to hear some feedback and answer questions in this thread.

20 Likes

This part is super helpful, thank you!

5 Likes

Once the referendum “Set PoV size limit to 10 Mb” is enacted on Polkadot, each core will provide up to 1.8 MB/s of DA bandwidth.

Elastic Scaling Update - August

Hi everyone!

Following the successful enactment of referendum #569 and a week of flawless operation, we are thrilled to announce that RFC103 has been successfully deployed on Kusama. This is a significant achievement that we can all be proud of. :tada:

This milestone means parachains can now securely utilize multiple cores, boosting throughput and reducing block times.

Why this matters

Elastic Scaling is the final missing piece of Polkadot 2.0. With it, rollups (parachains) can dynamically scale up their compute power using the Agile Coretime interface, tapping into Polkadot’s multi-core architecture.

In practice, this means:

:high_voltage: Higher throughput - more transactions per second without sacrificing security or decentralization

:stopwatch: Lower latency - rollups can hit 500ms block times

:satellite_antenna: Greater bandwidth - up to 20 MB/s of data availability when scaling to 12 cores

:hammer_and_wrench: Flexibility - parachains adjust coretime on demand, scaling up or down to match workload and use case.

Whether you’re building gaming dApps that need low latency, DeFi protocols that demand high throughput, or social platforms that rely on bandwidth-heavy messaging, Elastic Scaling unlocks new possibilities.

Storm Tested, Future Ready

While the Q2 RFC103 deployment faced challenges, particularly during the Kusama dispute storm, we’ve emerged stronger. The postmortem of this event revealed critical issues in the dispute protocol, which we’ve since resolved. This has led to a more resilient, stable, and future-proof system.

What’s next

With RFC103 live on Kusama and validated under real conditions, we are moving closer to a Polkadot mainnet release, scheduled for early September, with an exact date to be announced once the referendum is submitted. This rollout will mark the beginning of a new era for developers and users across the ecosystem:

:rocket: Apps with faster, smoother UX

:puzzle_piece: Rollups that scale seamlessly with demand

:globe_showing_europe_africa: A Polkadot network ready to support global-scale applications

Elastic Scaling isn’t just an upgrade - it’s a game changer for how blockspace is used and delivered.

We’re eager to hear your thoughts. What are you most excited to build with Elastic Scaling? How will it impact your work?

10 Likes

wen faster blocktimes on kusama hub?

As far as I am aware, it will happen post-migration for both KAH and PAH.

4 Likes

:construction: Let’s clear up where Elastic Scaling and Polkadot 2.0 stand today.

To clarify: Elastic Scaling is live on the Polkadot Relay Chain. Production deployments require that collators and parachain runtimes are upgraded to Polkadot SDK version 2509, which is scheduled for release in early October.

But a 2509 release candidate is already available :backhand_index_pointing_right: https://github.com/paritytech/polkadot-sdk/releases/tag/polkadot-stable2509-rc1

For more information please see the guide to enable elastic scaling: https://paritytech.github.io/polkadot-sdk/master/polkadot_sdk_docs/guides/enable_elastic_scaling/index.html
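For a rough idea of what the guide covers, here is a sketch of the runtime-side knobs, paraphrased from the cumulus parachain template for a hypothetical 3-core (2s block) setup. The constant names come from the template, but treat the values as placeholders and follow the linked guide for the authoritative steps.

```rust
// Sketch only: example values for a hypothetical 3-core / 2s-block
// target. Your runtime's exact constants and capacity sizing should
// come from the elastic scaling guide linked above.

// Parachain block time.
pub const MILLISECS_PER_BLOCK: u64 = 2_000;

// Parachain blocks produced per relay chain block (one per core).
pub const BLOCK_PROCESSING_VELOCITY: u32 = 3;

// Parachain blocks that may await inclusion on the relay chain
// (example value; size this per the guide).
pub const UNINCLUDED_SEGMENT_CAPACITY: u32 = 6;

// Relay chain slot duration, in milliseconds.
pub const RELAY_CHAIN_SLOT_DURATION_MILLIS: u32 = 6_000;

// The consensus hook that enforces the velocity limit.
pub type ConsensusHook = cumulus_pallet_aura_ext::FixedVelocityConsensusHook<
    Runtime,
    RELAY_CHAIN_SLOT_DURATION_MILLIS,
    BLOCK_PROCESSING_VELOCITY,
    UNINCLUDED_SEGMENT_CAPACITY,
>;
```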

5 Likes

Do we know when collators and parachain runtimes will upgrade to the Polkadot SDK version 2509?

We are working on that, but the release will probably come after the Polkadot Hub Migration.

2 Likes

We are already working to upgrade AH to ES on testnets (Westend, Paseo), then Kusama and Polkadot after the migration.

1 Like

Elastic Scaling is Live on Polkadot

Elastic Scaling is now complete and available for parachain teams on Polkadot.

Elastic Scaling introduces vertical scalability for rollups, allowing parachain teams to handle growing demand with greater efficiency. For the first time, they can deliver a user experience that is much closer to Web2, characterized by speed, responsiveness, and smoothness, with shorter in-block confirmation delays.

This milestone marks the completion of the entire launch checklist, encompassing testnet deployments, runtime upgrades, and referenda across Kusama and Polkadot.

What is Elastic Scaling?

Elastic Scaling allows parachains to dynamically utilize multiple cores simultaneously, unlocking the potential for significant throughput increases. Rollups can now scale vertically to meet demand instead of being limited to a single core. Polkadot’s upgraded architecture is designed for workloads that rely on:

  • Low-latency execution and high throughput

  • Consistent performance under heavy demand

  • Real-time user interaction and responsiveness

A Simple Analogy: Coffee and Finality

Think about buying a coffee in a café.

  • You tap your wallet, and the payment is initiated.

  • The merchant doesn’t wait for the funds to settle in their account before serving you the coffee.

  • They trust that the payment will be finalized, and life will move forward smoothly.

  • The customer gets their coffee without delay.

With Elastic Scaling, parachains can significantly reduce block time, improving the speed at which transactions are confirmed with each block. In the café analogy, it’s like serving customers faster without altering the order verification process, ensuring a smoother, more responsive experience while the system still guarantees eventual certainty.

Completing the Polkadot Upgrade

Elastic Scaling is the final piece of Polkadot’s three-pillar scalability upgrade, alongside:

  • Async Backing: reduced parachain block time to 6 seconds and increased block size, unlocking an 8–10x throughput boost.

  • Agile Coretime: replaced auctions with an on-demand marketplace for blockspace, making it easier and cheaper for parachains to access cores.

Together, these upgrades make Polkadot rollups up to 3× more performant on average, with up to 20MB/s bandwidth availability, delivering a user experience very close to Web2.

Important to Note

Elastic Scaling is not a mandatory upgrade; it depends on your application architecture and the service you want to provide your customers. Some parachains may not need multiple cores, while others may see it as a game-changer for their growth and user experience. At the same time, developers are invited to come and work with the new stack and build on Polkadot.

What should parachain teams do next?

  • Review the updated Elastic Scaling Guide to understand how to enable it.

  • Make sure to test thoroughly before going live.

  • Evaluate whether your chain benefits from multi-core scaling based on your workload and user demand.

  • Share feedback on this forum post from your deployments so we can continue refining the ecosystem.

Elastic Scaling completes the trilogy of Polkadot upgrades, delivering the scalability properties required to merge Web2 speed with Web3 truth. This is a significant step forward in Polkadot’s evolution, bringing us closer to a truly scalable, application-rich network.

We’re excited to see how parachain teams will utilize Elastic Scaling.

12 Likes

Wen 500ms blocks? not yet it seems :pleading_face: … I’ll share some experiences we’ve had, please use us as guinea pigs to figure out the hidden art of 12 core 500ms block production beyond lab demos :wink:

With Kusama offering cheap, abundant cores, and with our first use case not yet live in production, we decided to give Elastic Scaling’s top configuration a try on the live network Kreivo: nothing like testing in production. First we upgraded our infrastructure to have good direct-on-metal collators, then upgraded the runtime with the relevant configurations (see also). Things started out promising, but we soon began seeing forks that degrade the experience completely (something we had already experienced on the testnet with fewer cores). No matter what we try, things never improve; if anything, suggestions usually make things worse :pensive_face:

Forks have been the main issue. Perhaps the best results were with a single collator connecting to the relay chain via RPC: there were bursts of fast blocks followed by a long block. The video in the tweet shows the latest tweaks, with the relay parent offset set to 2, since 1 didn’t do anything and it was the only fork-related tweak mentioned in the guide. We also changed the collators to use a normal relay chain node, suspecting that the RPC connection could be an issue, but it just got worse. The video shows a balance transfer that gets re-included over and over until it fails because the signature has already expired.

What should we try next? By the time people read this, a new session may already have started, on-boarding 4 new collators. Will the added collators improve or worsen the forking issues? Follow Kreivo’s block production on dev.papi.how to find out :thinking:

1 Like

Hi @olanod ! Thank you for providing this feedback. I looked at the data you shared and it looks really bad.

I don’t think this is a good idea, at least until we increase the maximum relay parent age to more than 3 blocks. With an offset of two, your blocks have a very high chance of not getting backed on-chain, because the relay parent becomes too old (it is already 2 blocks old at authoring time).

As a next step, please create an issue on GitHub describing your setup, and share the logs you get from running the collators with: -lparachain=debug,parachain::collator-protocol=trace,aura=debug,aura::cumulus=trace,basic-authorship=debug

I am looking forward to investigating these issues :slight_smile:

5 Likes

We have some logs with those options enabled. Here is a portion of them that I think can help in understanding the issues we’re facing.

During this turn, the collator didn’t produce any blocks.

...
aura::cumulus: [Parachain] Using cached data for relay parent. relay_parent=0xc1d1…9bb8
aura::cumulus: [Parachain] Going to claim core relay_parent=0xc1d1e06e7884b8954eab9cc7db6cfcd78029f7965cbaed5ba399e074dc839bb8 core_selector=CoreSelector(1) claim_queue_offset=ClaimQueueOffset(1)
aura::cumulus: [Parachain] Using cached data for relay parent. relay_parent=0xc1d1…9bb8
aura::cumulus: [Parachain] Not building block. unincluded_segment_len=15 relay_parent=0xc1d1e06e7884b8954eab9cc7db6cfcd78029f7965cbaed5ba399e074dc839bb8 relay_parent_num=30955140 included_hash=0xf16d258742eb1d7434ceee84a7b932c7d6556e6ce2e5553c4303e44023fbcc72 included_num=4030439 parent=0x3e27f266b1a3aca11ab85777ed8c22632743fcc203685c626c068f6d475e1c2f slot=Slot(293845727)
aura::cumulus: [Parachain] Expected to produce for 12 cores but only have 1 slots. Attempting to produce multiple blocks per slot. block_production_interval=500ms
aura::cumulus: [Parachain] New block production opportunity. slot_duration=SlotDuration(6000) aura_slot=Slot(293845729)
aura::cumulus: [Parachain] Relay chain best block changed, fetching new data from relay chain. relay_parent=0x2b19…dbdf
parachain::collator-protocol::stats: [Parachain] Collation included on relay chain latency=1 relay_block=0x2b19c1041be629b825a5abf4687cb27abac5fa49f3ee3ff7d88e1a50f977dbdf relay_parent=0xcc4b65fb8997e8bcb5213c5dd66278e17a76b0d8abde4a147d2c04b10f9d3347 para_id=Id(2281) head=0x7e4879de9343a49e7b9fef3b471473a494ad60624b133ee6e1305c949be993d6
parachain::collator-protocol::stats: [Parachain] Included collation not found in tracker head=0x385c215d3f18ea48deea452644bba3a636c2940ddfa954c2d53ad1e839f975fb
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x9fe9e0ef459032660451705cf4a532333c0b5d266451e24c10bed818510ba0a9
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x3033ebb38bad1234cc2a3231099a11ea908d03d9a58979e9aedea25ca98d310f
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x5430fefbe0b8315ca2335b5eed929e3b385e3ec183ecb44677938f51e6a962ea
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x90443c2be1ec478d8e4e6d6a7d49a61b495b22fc687b091107ba6076427aba12
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0xdb717c7170ba0ab3822e542bf3d7a47ddf844f1e7e0241be2de488688a2e415b
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x3a3c351ed905a63ff9cfd6d35700ec8329ebf0bba18324b6e5d8bcbb43869b56
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x01da90d0f2629a293c8ba5f66c3507b6fa3864e6e660f3a74ccc5a405b44f983
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0xcaa5530dbc7184e61df9fc15423539dcc4831c8c92d718867af91252915a6e05
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0xf550b70f5c8e04dff248fbbaccec1ef3444794914946a392e25c3732808ff9e2
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x5f744689301b9d04ba912fe6883f4aba3d6f66947b0598a5b8cd94e7ec754ade
parachain::collator-protocol::stats: [Parachain] Backed collation not found in tracker head=0x702443772e9f4335f24d6cadefefe75918df224287e7d53c48fe157acb41db98
aura::cumulus: [Parachain] Using cached data for relay parent. relay_parent=0xb153…7ecb
aura::cumulus: [Parachain] Relay parent descendants. relay_parent_hash=0xb153…7ecb relay_parent_num=30955141 num_descendants=1
aura::cumulus: [Parachain] Parachain slot adjusted to relay chain. timestamp=Timestamp(1763074368000) slot=Slot(293845728)
aura::cumulus: [Parachain] Using cached data for relay parent. relay_parent=0xb153…7ecb
aura::cumulus: [Parachain] Going to claim core relay_parent=0xb153b60ea70b8fbc4cf1c193223255edffe1cf95d571eb0e049044c6abf97ecb core_selector=CoreSelector(0) claim_queue_offset=ClaimQueueOffset(1)
aura::cumulus: [Parachain] Using cached data for relay parent. relay_parent=0xb153…7ecb
aura::cumulus: [Parachain] Not building block. unincluded_segment_len=14 relay_parent=0xb153b60ea70b8fbc4cf1c193223255edffe1cf95d571eb0e049044c6abf97ecb relay_parent_num=30955141 included_hash=0x8db3c4ed5eef9d8fb9ebb94a3ac1750cd18c0ce24eed014a866da18d2d2e5282 included_num=4030440 parent=0x3e27f266b1a3aca11ab85777ed8c22632743fcc203685c626c068f6d475e1c2f slot=Slot(293845728)
aura::cumulus: [Parachain] Expected to produce for 12 cores but only have 1 slots. Attempting to produce multiple blocks per slot. block_production_interval=500ms
minimal-polkadot-node: [Parachain] Received finalized block via RPC: #30955140 (0xe521…29a5 -> 0xc1d1…9bb8)
aura::cumulus: [Parachain] New block production opportunity. slot_duration=SlotDuration(6000) aura_slot=Slot(293845729)
aura::cumulus: [Parachain] Using cached data for relay parent. relay_parent=0x2b19…dbdf
aura::cumulus: [Parachain] Using cached data for relay parent. relay_parent=0xb153…7ecb
aura::cumulus: [Parachain] Relay parent descendants. relay_parent_hash=0xb153…7ecb relay_parent_num=30955141 num_descendants=1
aura::cumulus: [Parachain] Parachain slot adjusted to relay chain. timestamp=Timestamp(1763074368000) slot=Slot(293845728)
sync: [Parachain] 💔 Error importing block 0x4a722cbdf118bbd66f9b526dc93f156555432fce03d849851b44bc517d8b5dc0: block has an unknown parent
parachain-system: [Parachain] Validating header #30955141 (0xb153b60ea70b8fbc4cf1c193223255edffe1cf95d571eb0e049044c6abf97ecb)
parachain-system: [Parachain] Validated header #30955141 (0xb153b60ea70b8fbc4cf1c193223255edffe1cf95d571eb0e049044c6abf97ecb)
parachain-system: [Parachain] Validating header #30955142 (0x2b19c1041be629b825a5abf4687cb27abac5fa49f3ee3ff7d88e1a50f977dbdf)
parachain-system: [Parachain] Validated header #30955142 (0x2b19c1041be629b825a5abf4687cb27abac5fa49f3ee3ff7d88e1a50f977dbdf)
...

I have noticed that both my Kusama validators (located in LATAM) have been missing many votes (sometimes upwards of 30%), specifically with the Kreivo parachain, and the timeline seems to correspond to when they upgraded to using 12 cores.

I am attaching some screenshots of several validators missing votes; at times this has impacted the performance of my validators, lowering it from its usual A+.

@sandreim do you expect this to be related to latency? Or do you have any clues so I can further investigate on my side? Thanks

1 Like

I already asked for some logs from the collators. Otherwise, it is expected to get fewer votes if the parachain blocks your validators are backing are not posted on-chain. One reason this happens is that some ancestor block was not backed, and then its descendants cannot be backed either.

1 Like