Unifying bulk and mid-term blockspace with strided regions

In October, I wrote a post (Blockspace Products and Mechanisms in Polkadot) proposing a few forward-looking mechanisms.

UPDATE: This is now an RFC at Blockspace Regions by rphmeier · Pull Request #3 · polkadot-fellows/RFCs · GitHub

Summing up a few key desiderata:

  • Bulk blockspace markets (parachain slot auctions)
  • Mid-term (1hr - 7wk) purchases
  • On-demand purchases (design writeup here: On-demand parachains)
  • Secondary blockspace markets (re-selling of blockspace, blockspace futures markets, etc.)
  • Sharing bulk blockspace purchases across several chains
  • Elastic scaling for parachains by purchasing parallel blockspace with some combination of the above (Parachain Scaling by Parablock splitting)

In particular, I see mid-term blockspace markets as a key mechanism for practical elastic scaling. Periods of peak demand are often short, so it’s the right mechanism for adapting to predictable-yet-short-lived load.

It’s not clear to me that bulk and mid-term blockspace actually require different mechanisms. Blockspace can enter the system in a bulk format, and markets living elsewhere can handle most of the work of assigning the bulk blockspace onto different chains.

Consider this post an early-stage proposal for new mechanisms to unite both bulk and mid-term blockspace while looking forward to elastic scaling and secondary blockspace markets.

Blockspace Region Subdivisions

Let’s imagine a mechanism that works like this:

  • The Relay Chain auctions off a bulk piece of blockspace, i.e. a dedicated execution core for 6 months. This creates a contiguous blockspace region.

  • Every region is a [start, end, stride, offset] data structure. For a region with full access to a core, the stride is 0. For a region that has intermittent access to a core, the stride is nonzero.

  • The owner of a region can split it into pieces down to some minimum granularity in size and maximum stride. Every split comes with a corresponding deposit for the on-chain bookkeeping overhead.

  • Regions can be split in two ways: contiguously or strided. A contiguous split only affects the start and end, while a strided split alters the stride and offset (see the sketch after this list).

  • Regions can be transferred by the owner.

  • Regions have a single ParaId assigned to them, which can be changed by the owner.

  • Regions can be recombined when they correspond to the same core and together form a well-formed region.

  • The Relay Chain’s scheduler looks at the next upcoming regions for each core to determine the upcoming ClaimQueue that collators and validators watch.
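To make this concrete, here is a minimal Rust sketch of the [start, end, stride, offset] structure and the two split operations. The names, the half-open [start, end) convention, and the stride-doubling rule are illustrative assumptions rather than a fixed interface:

type BlockNumber = u32;

#[derive(Clone, Debug)]
struct Region {
    start: BlockNumber,  // first relay-chain block of the region
    end: BlockNumber,    // exclusive end, assuming a half-open interval
    stride: BlockNumber, // 0 = full access to the core
    offset: BlockNumber, // position within the stride cycle
}

impl Region {
    // Contiguous split: same stride and offset, non-overlapping lifetimes.
    fn split_contiguous(self, at: BlockNumber) -> Option<(Region, Region)> {
        if at <= self.start || at >= self.end {
            return None; // the split point must fall strictly inside the region
        }
        let left = Region { end: at, ..self.clone() };
        let right = Region { start: at, ..self };
        Some((left, right))
    }

    // Strided split: same lifetime, doubled stride, complementary offsets,
    // so the two halves alternate turns on the core.
    fn split_strided(self) -> (Region, Region) {
        let base = self.stride.max(1); // a full-access region behaves as stride 1
        let a = Region { stride: base * 2, ..self.clone() };
        let b = Region { stride: base * 2, offset: self.offset + base, ..self };
        (a, b)
    }
}

Recombination is just the inverse: two regions produced by either split, on the same core, can be merged back into their well-formed parent.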

Re-selling blockspace

A parachain looking to re-sell blockspace can simply split off a region and sell it (in its own runtime, or on a dedicated parachain for this purpose). The sale can be made atomic and performed using XCM, allowing parachains to monetize unneeded blockspace.

Bulk Slot Sharing

Strided regions would allow multiple chains to share a bulk slot in even or uneven proportions.

Elastic Scaling

This depends on other mechanisms that allow chains to occupy multiple cores at the same time, but once that is complete, a parachain looking to scale in the near future would be able to purchase regions corresponding to one or more cores.

A dedicated market for deferred fragmentation

One clear issue with this approach is that regions can become heavily fragmented, and even though they can be recombined, this could carry high overheads.

What makes the most sense is to defer the splitting of regions to the latest possible point (i.e. delivery), by having a dedicated market on a parachain trade potential regions which then finally resolve shortly before they become relevant, issuing the appropriate parachain-assignment and region-splitting calls on the Relay Chain via XCM.

This is a notable R&D project that should rely only on the right interfaces being exposed at the Relay Chain level.

Strides and Availability Delays

The main issue with scheduling on execution cores is that cores are occupied from the time a parablock is backed to the time that parablock’s data is available. Availability is dependent on the entire validator set, which means that blocks can occupy an availability core for a fairly long time, although the size of the data is bounded.

If we interpret strides as "turns" on an execution core, then we may need mechanisms for either truncating or extending regions, as some turns may take longer than expected due to availability delays. Truncating means that regions may not actually be as long as initially expected in the face of availability delays, whereas extending means that the point in time when the region concludes is less predictable. This is worth discussing further.


I’d been struggling to properly grok scheduling and how the blockspace market could work. This and the earlier posts [1] [2] were very helpful.

They coalesced my understanding and I think I've had my light-bulb moment; the cloud/AWS analogy helped cement it.

I have a few thoughts / questions, apologies if this comes across as too scatter-brained:

  • I love the concept of mid-term auctions, but wonder then what the point is of parathreads? If a parathread is a short-term "spin up" ephemeral parachain and a user can purchase the same effect through a mid-term auction - what's the difference?
  • You talk about the existing bulk blockspace auctions being the “seeding” market for execution cores/regions and that those can then be “resold” at will by parachain owners to form the mid-term market. What about the execution cores that are to be allocated to parathreads, how will they interface with this?
  • The market concept reminds me of Ethereum gas tokens, where users "purchased" gas by exploiting Ethereum's gas refund mechanism. This led to a market in gas/blockspace which has echoes of this. However, I'm conscious that it was ultimately removed from Ethereum and considered harmful. Are there any market-force parallels here that we should be wary of? E.g. is it desirable for blockspace hoarders to come along and buy up swathes of future space, or would that be an affront? You could argue they become market makers, but it feels a bit icky.
  • This thread seems to suggest that mid-term auctions would actually be "bought", as opposed to the lease mechanism currently in place for bulk auctions. Would this be in DOT, or is it the region holder's prerogative as to what they sell them for?
  • If the execution cores for parathreads end up within this mid-term auction space, is there an opportunity to “burn” the DOT they generate and counter some of Polkadot’s inflationary rewards?
  • If the most successful parachains become the primary purchasers of blockspace initially to sustain their level of growth, do we risk a “winner takes all” dynamic where Polkadot gravitates towards serving a majority parachain which operates most of the execution cores at any given time and prices new projects out of the market?

Really looking forward to seeing this develop. It does feel like a break from the mould and creation of something new. Like you say… “the most effective blockspace producer in the world.”


what the point is of parathreads? If a parathread is a short-term "spin up" ephemeral parachain and a user can purchase the same effect through a mid-term auction - what's the difference?

Parathreads (or as I call them, on-demand parachains) are still useful for intermittent block authoring. Use-cases that make one block every 10 minutes, such as early-stage pre-PMF products, might not need more than this. I imagine that on-demand orders would have dedicated execution cores. It's useful as an on-ramp, but not as efficient for scaling throughput as products grow.

The market concept reminds me of Ethereum gas tokens, where users "purchased" gas by exploiting Ethereum's gas refund mechanism. This led to a market in gas/blockspace which has echoes of this. However, I'm conscious that it was ultimately removed from Ethereum and considered harmful

GasToken was definitely on my mind while writing about a dedicated blockspace futures market. I do worry about hoarders cornering the market, although hopefully the sheer volume of blockspace would prevent them from doing so. This needs some more examination.

This thread seems to suggest that mid-term auctions would actually be "bought", as opposed to the lease mechanism currently in place for bulk auctions. Would this be in DOT, or is it the region holder's prerogative as to what they sell them for?

With this proposal, it’d be the region holders’ choice how to split and re-sell bulk regions into smaller ones.

If the execution cores for parathreads end up within this mid-term auction space, is there an opportunity to “burn” the DOT they generate and counter some of Polkadot’s inflationary rewards?

I’d propose that on-demand cores be separated economically from the “bulk” cores, and although off-topic, I do think that some amount of burning is reasonable for on-demand order fees.

If the most successful parachains become the primary purchasers of blockspace initially to sustain their level of growth, do we risk a “winner takes all” dynamic where Polkadot gravitates towards serving a majority parachain which operates most of the execution cores at any given time and prices new projects out of the market?

I suspect not, at least if there are enough cores to service demand. If we assume that each core provides 1,000 tx/s of throughput, then with 100 cores a chain would have to consume 100,000 tx/s in a sustained way in order to economically consume all the blockspace of Polkadot. For reference, Twitter sees around 6,000 tweets per second and Google processes around 100,000 searches per second.

I do think it’s likely that in any case where demand > supply that new projects would be pushed outwards to the frontiers of cheaper blockspace, regardless of whether that demand is driven by a single or several main consumers. Scaling further will be important in that respect. I’d also expect that at least some of the main consumers of blockspace would be smart contract platforms, which also provide on-ramps for new projects to enter the system.


Thanks Rob, some really interesting responses and definitely helped my thinking, greatly appreciate you taking the time to respond so thoughtfully.

I now get the on-demand nature of parathreads vs continuous (but less frequent) nature of mid-term blockspace. That is a useful distinction worth maintaining for different use cases.

Agree on the hoarding point; I'm eager to see others' thoughts on the topic. Early on in the launch of scheduling, I doubt the potential value of blockspace will be apparent to many, so people could pick up swathes for next to nothing. I guess a key limiter here will be the terms available: if you can only buy a "future" 1 week or 1 month in advance, then the long-term impact is muted.

On the topic of it being the holder's choice as to how they sell their blockspace, I wonder if we could mandate that a percentage of the sale is burned, or similar. The reason I say this is that they are selling something they acquired from the network for "free" (well, staking opportunity cost, but close to free). They should give back to the ecosystem; otherwise, per my point above, parachain auctions will become a squatters' paradise, and wouldn't that be sad: people acquiring slots with no desire to ever actually launch a parachain, but rather for the resale value when chunked up into regions.

Understood on the question of scale, but I suspect if Polkadot reaches the scale we all hope, we will bump into this sooner than we expect. There is always Kusama short term though!

Thanks again and I’m looking forward to reading the communities thoughts.

Can’t wait to see this implemented :grinning:


Yeah, this is the key difference between this type of blockspace futures vs. GasToken. GasToken, until it was made obsolete, had an indefinite time horizon. If Polkadot produces blockspace with a 6 month or 1 year time horizon, that definitely changes any valuation calculation.

Still, I think that premature financialization of blockspace is probably not a good idea and this is more of an “endgame” than something to build out right away.

As far as how we get there, I'd propose to incrementally roll out this mechanism or something like it, where initially some blockspace would just be allocated by governance to a "mid-term blockspace" pallet that parachains can use to buy "boosts" based on a market price.

Here’s some more in-depth design about blockspace regions.

Solving these problems:

  • Parachains sharing access to cores without depending on the others continuing to make blocks
  • Parachains making up for “missed” blocks (via asynchronous backing)
  • Parachains being assigned to multiple cores at the same time (elastic scaling)

On-chain Buffering of Blocks Pending Inclusion

The main insight enabling this design is that there is no detriment to allowing more parachain candidates to be backed in a single relay-chain block than there are cores. This has two caveats. First, there should be limits on how many blocks individual parachains can have backed. Second, there should be limits on how many parachain blocks can be backed in a single relay-chain block. When blocks from a parachain are missed, the chain should have the opportunity to make up for the missed block later. But this has to be limited - if we didn't get any parachain blocks until the last possible moment, we wouldn't handle a single relay-chain block with hundreds of thousands of candidates very well. Going to some multiple like 1.5x of the number of cores is fine, though. Over the course of a session, the missed blocks will balance out.
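As a rough sketch of these limits (the names and constants are illustrative; real values would be runtime configuration):

// Whether another parachain candidate may be backed in the current
// relay-chain block. `num_cores * 3 / 2` is the 1.5x multiple mentioned
// above; `para_limit` caps how many blocks one parachain can have backed.
fn backing_allowed(
    backed_in_relay_block: u32, // candidates already backed in this relay-chain block
    backed_for_para: u32,       // candidates already backed for this parachain
    num_cores: u32,
    para_limit: u32,
) -> bool {
    backed_in_relay_block < num_cores * 3 / 2 && backed_for_para < para_limit
}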

Approval-checking requires that at most one block is included for every execution core. This implies that when handling “missed” blocks and accepting multiple blocks per core, we need additional cores to handle the overflow.

Regions

Each core will have some region creation mechanism. This mechanism is responsible for not creating more regions than the core can handle, i.e. never allocating regions such that the core would need to handle more than one parachain block per 6 seconds. Parachain slot auctions or the on-demand parachain orderbook are examples of region allocation mechanisms.

Every region has a frequency and a TTL, which can be used to calculate the capacity of the region in blocks. e.g. a region with a frequency of 2 is expected to be able to accept a block every 2 relay chain blocks. If that region had a TTL of 10 blocks, its capacity would be 5. Regions carry an explicit capacity, and the actual capacity of the region is the minimum of the possible capacity and the explicit capacity.

Each region is assigned to a specific core and cannot move across cores. Regions can be split in two ways: contiguous splits and frequency splits. A contiguous split produces regions with the same frequency of blocks but nonoverlapping lifetimes. A frequency split produces regions with the same lifespan but different frequencies, such that the total capacity of the split regions is less than or equal to the capacity of the original region.

Frequency splits are what I referred to as “strided” splits previously.

Regions would be something like this:

struct Region<T: Metadata> {
    owner: ParaId,          // the parachain that owns the region
    core: CoreIndex,        // regions are pinned to a single core
    start: BlockNumber,     // first relay-chain block the region is live
    ttl: BlockNumber,       // lifetime in blocks; includes the start block

    produced: u32,          // parachain blocks backed against this region so far
    capacity: u32,          // explicit cap on blocks this region may produce
    frequency: BlockNumber, // one block expected every `frequency` relay-chain blocks

    metadata: Option<T>,    // generic metadata for attaching more constraints onto regions
}

The expected number of blocks for a region is (now - start) / frequency. A region is in surplus if expected > produced. A block can only be backed when a region is in surplus, and then only according to the region ordering rules defined below. A region expires at start + ttl, regardless of how many blocks were produced. When a region expires, no more blocks can be backed. The threshold of a region is start + (produced * frequency).
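As a sketch, this accounting (including the capacity arithmetic from above) can be expressed as methods on the Region struct, assuming BlockNumber = u32 and keeping the Metadata bound, which is left undefined here as in the struct:

impl<T: Metadata> Region<T> {
    // Possible capacity from TTL and frequency (e.g. ttl 10, frequency 2 => 5);
    // the actual capacity is the minimum of this and the explicit `capacity`.
    fn possible_capacity(&self) -> u32 {
        self.ttl / self.frequency
    }

    // The number of blocks this region "should" have produced by `now`.
    fn expected(&self, now: BlockNumber) -> u32 {
        now.saturating_sub(self.start) / self.frequency
    }

    // A region is in surplus when it has produced fewer blocks than expected.
    fn in_surplus(&self, now: BlockNumber) -> bool {
        self.expected(now) > self.produced
    }

    // The sort key used by the ordering described below.
    fn threshold(&self) -> BlockNumber {
        self.start + self.produced * self.frequency
    }

    // Once expired, no more blocks can be backed, regardless of `produced`.
    fn expired(&self, now: BlockNumber) -> bool {
        now >= self.start + self.ttl
    }
}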

Every parachain can have up to some number of regions assigned to it, and these regions may exist on different cores.

Regions and Scheduling

There is an ordering function for regions we can use to determine which region the next parachain block goes to. This ordering function sorts, filters, and repeats all the regions assigned to a parachain. In practice, it should be calculated lazily.

The ordering changes only when regions expire, blocks are backed, or the relay-chain block number advances. A block for a parachain can be backed only when the first region in the ordering is in surplus. It is legal to back i blocks when there are i sequential regions in the ordering (the regions may be the same, since the ordering can repeat regions).

The ordering function works like this: it repeatedly orders the regions assigned to the parachain by threshold, with produced incrementing by 1 each time a region appears in the ordering; regions are included only if threshold <= now and produced < capacity.

(Note: to prevent parachains from producing no blocks until the end and then saturating relay-chain blocks with extra candidates, it may be desirable to set a reasonable maximum on how far behind capacity a region is effectively allowed to fall. This could be done, for instance, by reducing the explicit capacity or by increasing produced more rapidly when behind.)
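Putting that together, a minimal sketch of the ordering, using the helpers above. The real implementation would be lazy, and ties on threshold would need a deterministic tie-break (e.g. by region ID):

// Returns the index of the region the next parachain block is assigned to,
// or None if no region is eligible. Calling this repeatedly enumerates the
// ordering; incrementing `produced` is what lets a region appear repeatedly.
fn next_region<T: Metadata>(regions: &mut [Region<T>], now: BlockNumber) -> Option<usize> {
    let idx = regions
        .iter()
        .enumerate()
        .filter(|(_, r)| r.threshold() <= now && r.produced < r.capacity && !r.expired(now))
        .min_by_key(|&(_, r)| r.threshold())
        .map(|(i, _)| i)?;
    regions[idx].produced += 1;
    Some(idx)
}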

This ordering gives us a highly-desirable stability property: parachain blocks at a particular height in the parachain stay stably assigned to the same region and therefore the same core, even as the relay-chain block advances. New relay chain blocks put more regions into surplus, but those are still ordered after the regions which were already in surplus.

We don’t need to worry about ordering of regions per core, at least in the runtime, because:

  • Regions cannot move across cores
  • The initial regions assigned to a core never exceed the capacity of the core.
  • Regions never enter into “debt”, i.e. they can never produce more blocks than they are allowed to

This means that no parachain can crowd out another on the same core indefinitely, and can only crowd out other parachains to the extent that they are in surplus.

That said, regions only give stable parachain block frequencies in the long term, not in the short term. For that reason, we might also maintain a region mapping per core or create economic incentives for parachain block authors to prove that they have provided useful backed candidates.

Nodes will need a mapping from cores to upcoming regions, at least to get a sense of which parachains they should be working on as validators. This implies that we need runtime APIs for nodes to gather information about regions assigned to cores as well as regions assigned to parachains.

We should limit the number of regions that are assignable to any particular parachain, as more regions increase the ordering overhead. Ordering is used in these places:

  • The ProvideInherent implementation to provide backed candidates to the chain such that they will all be accepted.
  • The node-side code of validators for determining which cores are assigned to which blocks (i.e. at which depth)
  • The node-side code of collators for the same
  • The block import logic (this is the only place calculating ordering is expensive)

The region metadata is left generic to support things like required collators.

Regions and on-demand parachains

While writing this, I realized that on-demand parachains can be implemented with this same regions approach. The on-demand orderbook is simply a region allocation mechanism: it creates and assigns regions with a single block of capacity, a frequency ("stride") of 1, metadata that limits the required collator, and an expiry time of a few blocks.
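In terms of the Region struct above, an on-demand order might resolve to something like this (RequiredCollator, CollatorId, and the concrete values are hypothetical):

// Sketch: an on-demand order expressed as a region.
fn on_demand_region(
    owner: ParaId,
    core: CoreIndex,
    now: BlockNumber,
    collator: CollatorId,
) -> Region<RequiredCollator> {
    Region {
        owner,
        core,
        start: now,
        ttl: 5,       // expires a few blocks out: the effective retry window
        produced: 0,
        capacity: 1,  // a single block
        frequency: 1, // "stride 1"
        metadata: Some(RequiredCollator(collator)), // hypothetical collator restriction
    }
}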

This also cleanly supports the “retries” problem for on-demand - because regions don’t require parachain blocks to be backed in a specific relay-chain block, the size of the region is the effective amount of “retries” the parachain gets. Asynchronous backing makes this clean.

It also allows for stacking on-demand orders, even across different cores and collators, just by giving them different start blocks.


I am not sure where this catching up requirement is coming from. With asynchronous backing we should barely miss a block in the first place.

This should also not be a problem today. Let's assume the simple case of two parachains sharing a core: if parachain A misses providing a collation in time, that relay-chain block will not contain a collation and the parachain has lost its opportunity. The next block, it is parachain B's turn. There is no waiting as implemented right now. Well, at least not more than they signed up for: they are sharing a core, so they will always wait for one block, but that is independent of what the other parachain is doing.

Again, given that with asynchronous backing parachains will be able to produce blocks in advance, I don’t see why we should need to support retries*).

*) We do have retries in case availability times out, but this is not “depending on others to make blocks”, but rather “wait on validators to do their job”.

If parachain A misses providing a collation in time, that relay-chain block will not contain a collation and the parachain has lost its opportunity. The next block, it is parachain B's turn. There is no waiting as implemented right now.

The main issue with “taking turns” is defining when parachains can take turns. Doing it by relay-chain block number falls apart very quickly when you consider that availability can take an indeterminate amount of time, and could lead to starvation of some chains if the core is never free at their scheduled moments. Doing it by literally taking turns doesn’t work when you consider that one chain could just stall. This problem doesn’t feel possible to solve deterministically. Deterministic scheduling in the presence of random elements is always going to be unpredictable, and that introduces major complications when we try to expand to multiple cores per chain and figure out which group is responsible for backing/availability of parachain blocks at different depths.

It seems much cleaner to just frame things statistically: a parachain gets the right to submit, on average, one block every two relay-chain blocks, and will never be able to submit more parachain blocks than the region allows for. But you can only attain that "on average" property if you allow variance in both directions. There is variance where blocks are sometimes submitted or made available more slowly than expected, which we can make up for with variance in the other direction, by allowing blocks to be submitted and made available faster. (NB: I initially hoped that we could constrain availability to still be one block per core, but that doesn't seem to actually work.)

Note that this doesn't really matter in the case where a parachain has a long-term monopoly on a core, since it is always scheduled. But the lower the frequency a parachain is scheduled at, the more it matters when its opportunity is missed. If a parachain is scheduled at a frequency of 10 and it misses its opportunity, that's a minute without confirmation. Now, it's true that when we 'take turns' deterministically, and if we assume that availability lag is a random variable, all chains are inhibited from backing at a rate equal to the proportion of the time they are scheduled onto the core. But chains which are scheduled less frequently experience much worse effects from that variance. The proposal here is to smooth out the variance across every relay-chain block to limit the downside of missed opportunities.

This is orthogonal to asynchronous backing. Asynchronous backing is about always making sure there is something ready to post on-chain. Regions are about making sure that parachains get enough opportunities to post things on-chain.

This is painting an interesting future for sharing block space. Thanks for explaining!

One thing that I did not see explicitly addressed (but was probably implied somewhere):
How can a secondary blockspace buyer be sure that the purchased space will be schedulable?
I imagine a situation where someone slices these strides in a way that they would all resolve in a shorter period than the relay chain could provide the requested space. That would push the price of these strides down - unbeknownst to a potential buyer.

Another thing I wondered is how to realize the register of all these futures. Could we use some NFT approach? It sounds similar to what is required: sendable, redeemable, buyable and sellable…

How can a secondary blockspace buyer be sure that the purchased space will be schedulable?
I imagine a situation where someone slices these strides in a way that they would all resolve in a shorter period than the relay chain could provide the requested space.

I’m interpreting this to mean that someone buying a region on a secondary market might be buying something undeliverable, i.e. their regions would expire before they would be usable. That’s a legitimate concern and something that should be handled by any reasonable marketplace.

Striding itself couldn’t lead to this problem, I believe, but expiry times of regions could.


One factor that I didn't address in my previous post was the limit on the number of regions which can be assigned to any particular parachain.

The reason for limiting the number of regions is that we require an ordering, i.e. regions are required to be processed in order. If the on-chain logic does this proactively, we’d likely be eating an O(n^2) cost in the number of regions for a parachain whenever that parachain is processed.

Ideally we should be able to have an unbounded or at least fairly large quantity of regions per parachain - e.g. 24 or 32.

However, we should be able to bring it down to O(n) costs by storing the region information in a linked list that gets lazily updated with information in the inherent supplying backed candidates, which gets populated by the relay-chain block producer.

Still a rough idea but I’ll respond here with more details once I have a design figured out.

Actually, my thinking here has evolved somewhat. The ordering actually doesn’t need to be on-chain at all, which frees us up to provide huge amounts of regions to each parachain.

The on-chain behavior should be that when a parachain candidate is posted to the relay-chain, it is accompanied by the region that it will be part of, and then the on-chain logic will check that region and ensure that it is both assigned to the parachain and is in surplus. And it will check that the block is backed by the validators assigned to the core the region occupies. Each region should have a UUID to make this light.

Then, the Regions pallet only needs to store a simple mapping Regions: RegionId -> Region. The RegionId can be determined in a variety of ways, but the simplest is probably to derive it at creation time as Hash(parent_hash ++ region_index), where parent_hash is the parent block hash of the current relay-chain block and region_index refers to the number of regions which have been created in this block.
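Concretely, the derivation could look something like this sketch, assuming Substrate's blake2_256 as the hasher (the exact hash function is an open choice):

use sp_core::hashing::blake2_256;

type RegionId = [u8; 32];

// Hash(parent_hash ++ region_index): `parent_hash` is the parent block hash
// of the current relay-chain block, `region_index` the number of regions
// already created in this block.
fn make_region_id(parent_hash: &[u8; 32], region_index: u32) -> RegionId {
    let mut buf = Vec::with_capacity(36);
    buf.extend_from_slice(parent_hash);
    buf.extend_from_slice(&region_index.to_le_bytes());
    blake2_256(&buf)
}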

We have a choice of who determines which region each parachain block is assigned to. Option 1 is to have relay-chain validators (in particular, the relay-chain block author, which packs backable candidates into the block) figure this out. Option 2 is to alter the CandidateReceipt format to explicitly commit to the region ID.

Long-term, I lean towards Option 2, because it opens up more avenues for “region-aware collation protocols” (Cumulus Consensus Modules) and collators don’t have to guess what validators will do.

Short-term, some variation of Option 1 will be easier to implement, but likely won’t scale to large quantities of regions, especially if they change hands often.

Posted RFC for this idea at Blockspace Regions by rphmeier · Pull Request #3 · polkadot-fellows/RFCs · GitHub
