This post assumes familiarity with parachains. Refer to the glossary and this article for a detailed description of parachain consensus.
Suppose a parachain reaches the throughput limit of a single slot. What options does it have?
There are upcoming scaling improvements, such as async backing, which should allow more time for PVF execution. That does not change the problem fundamentally: at some point, even that higher ceiling will be reached. There are other scaling ideas, such as squaring Polkadot, which essentially increases the number of available slots.
One may say that the parachain should deploy a second parachain instance. That is, the parachain would acquire another parachain slot, deploy almost the same code there, and subjugate it governance-wise. That might work; in the end, it's similar to the now-classical scaling solution: sharding.
Sharding, however, has downsides. Communication between the shards is limited and suffers from increased latency. The complexity of intershard communication can either be hidden behind elaborate mechanisms or dumped on the developers' shoulders.
Here I want to propose an alternative: parablock splitting. The crux of it is to decouple the notion of one parachain, one core. Right now, each parachain with a parachain slot can submit only one parachain block candidate for every relay chain block. With parablock splitting, a parachain would be allowed to submit two candidates for a single parablock.
Here's how it would work:
A parablock is logically split into two parts: e.g., the first 50 transactions end up in the first part and the other 50 in the second. The parablock then has two state roots of interest: the intermediate state root, obtained after executing the first 50 transactions, and the post-state root, obtained after executing all of them.
During authoring, the collator packs the first 50 transactions into the first candidate. The PoV for that candidate contains the witnesses required for executing that batch of transactions. Essentially, the candidate proves that the state transition from the pre-state to the intermediate state is correct.
The very same thing happens with the last 50 transactions: the collator packs them together with the witness required to prove the correctness of the state transition from the intermediate state root to the post-state root.
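To make the chaining of state roots concrete, here is a toy sketch in Rust. It models the "state root" as a plain hash folded over applied transactions; all names (`Candidate`, `apply`, `split_block`) are hypothetical illustrations, not actual Polkadot or cumulus APIs.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type StateRoot = u64;
type Tx = u32;

/// One of the candidates a split parablock is divided into.
#[derive(Debug)]
struct Candidate {
    pre_state: StateRoot,  // state root this candidate builds on
    post_state: StateRoot, // state root after executing its batch
    txs: Vec<Tx>,          // transactions packed into this candidate's PoV
}

/// Toy state transition: fold each transaction into the running root.
fn apply(root: StateRoot, txs: &[Tx]) -> StateRoot {
    txs.iter().fold(root, |acc, tx| {
        let mut h = DefaultHasher::new();
        (acc, tx).hash(&mut h);
        h.finish()
    })
}

/// Split a parablock's transactions into `n` candidates chained by state roots.
fn split_block(pre_state: StateRoot, txs: &[Tx], n: usize) -> Vec<Candidate> {
    let chunk = (txs.len() + n - 1) / n;
    let mut candidates = Vec::new();
    let mut state = pre_state;
    for batch in txs.chunks(chunk) {
        let post = apply(state, batch);
        candidates.push(Candidate {
            pre_state: state,
            post_state: post,
            txs: batch.to_vec(),
        });
        state = post;
    }
    candidates
}

fn main() {
    let txs: Vec<Tx> = (0..100).collect();
    let parts = split_block(0, &txs, 2);
    // The first candidate proves pre -> intermediate,
    // the second proves intermediate -> post.
    assert_eq!(parts.len(), 2);
    assert_eq!(parts[0].post_state, parts[1].pre_state);
    // Executing all transactions in one go yields the same post-state root,
    // since the second fold just continues where the first left off.
    assert_eq!(apply(0, &txs), parts[1].post_state);
}
```

The invariant checked at the end is the whole point: the second candidate's pre-state equals the first candidate's post-state, so verifying both candidates is equivalent to verifying the unsplit parablock.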
There is nothing special about the number two; a parablock could potentially be split into more parts.
I am not entirely sure it is possible to implement parablock splitting with the current structure of the parachains host and cumulus (although lmk if you have an idea), and even if it is, it would be clunky. For example, it's not clear what to do with respect to messaging: XCMP establishes channels between pairs of para IDs, and parablock splitting would either mess that up or require workarounds (e.g., allowing messages to be sent only in the first candidate).
A better way is to treat a core independently from a parachain slot and allow one parachain to claim two cores within one relay block. Those candidates would be verified in parallel. If any of the candidates fails (e.g., by not achieving availability, or if the PVF fails), the whole chain of candidates is rolled back. All candidates are executed with the same PVF, the PVF of the parachain. The output of the last candidate's PVF is written into the new head data. The messaging context is the same for all candidates, i.e., their PVFs can send and receive messages (ordered by order of execution).
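The all-or-nothing semantics above can be sketched as a small relay-chain-side check. This is an illustration only, assuming a simplified model where each candidate reports its parent head, its new head, and its PVF/availability outcome; the names (`CandidateOutcome`, `process_candidate_chain`) are invented for this sketch and are not the actual parachains-host API.

```rust
/// Opaque parachain head data, as passed between candidates.
#[derive(Clone, Debug, PartialEq)]
struct HeadData(Vec<u8>);

/// Simplified outcome of backing/including one candidate.
struct CandidateOutcome {
    parent_head: HeadData, // head data this candidate claims to build on
    new_head: HeadData,    // head data output by its PVF execution
    pvf_ok: bool,          // did PVF execution succeed?
    available: bool,       // did the candidate reach availability?
}

/// Returns the new head data if every candidate in the chain is valid and
/// links to its predecessor; otherwise the whole chain is rolled back (None).
fn process_candidate_chain(
    current_head: &HeadData,
    chain: &[CandidateOutcome],
) -> Option<HeadData> {
    let mut head = current_head.clone();
    for c in chain {
        if !c.pvf_ok || !c.available || c.parent_head != head {
            return None; // one failure invalidates the entire chain
        }
        head = c.new_head.clone();
    }
    Some(head) // the last candidate's output becomes the new head data
}

fn main() {
    let genesis = HeadData(vec![0]);
    let a = CandidateOutcome {
        parent_head: genesis.clone(),
        new_head: HeadData(vec![1]),
        pvf_ok: true,
        available: true,
    };
    // Second candidate builds on the first but fails availability:
    let b = CandidateOutcome {
        parent_head: HeadData(vec![1]),
        new_head: HeadData(vec![2]),
        pvf_ok: true,
        available: false,
    };
    // Both succeed only when every link in the chain is good.
    assert!(process_candidate_chain(&genesis, &[a]).is_some());
    let a2 = CandidateOutcome {
        parent_head: genesis.clone(),
        new_head: HeadData(vec![1]),
        pvf_ok: true,
        available: true,
    };
    assert_eq!(process_candidate_chain(&genesis, &[a2, b]), None);
}
```

Note how a failure anywhere in the chain discards even the candidates that individually passed, which is what keeps the split parablock atomic from the parachain's point of view.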
It's not entirely clear how to reconcile this with the current market of parachain slots and parathread bids. This is also related to the discussion of exotic para core scheduling. I can see that reserving a second slot might be too big a commitment; on the other hand, IIUC, parathread bidding might not be quick enough to accommodate changes in demand.