Dynamic Backing Groups: Preparing Cores for Agile Scheduling

This thread is a discussion / pre-RFC topic focusing on the Core primitive and where it causes friction when looking into the future of Coretime allocation.

Cores are an artifact of Polkadot’s architecture. These goals lead us to use cores:

Loosely:

  1. Rate-limiting workload as a result of new inputs to data availability and approval checking
  2. Decorrelating stake behind candidates
  3. Amortizing costs of eager validity proving
  4. Limiting each fork of the relay chain to a single fork of a state machine
  5. Providing predictable accesses to resources to parachains scheduled upon Polkadot

However, the current design of cores at the scheduling layer had only two use-cases in mind:

  • One parachain per core
  • Cores reserved for sporadic on-demand workloads

The new paradigm covers the entire coretime demand distribution, from tiny consumers to enormous ones. Attempts at improving this, such as Coretime Scheduling Regions by rphmeier · Pull Request #3 · polkadot-fellows/RFCs · GitHub and Idea: Core Groups · Issue #7441 · paritytech/polkadot · GitHub, have exposed enormous hidden complexity within previous assumptions. Here I break down what cores do for us, in order to propose a necessary set of changes for the future.

What is the current definition of a Core?

A core is a numbered container within the relay chain’s state. This container is either empty or full. When empty, a new candidate may be placed inside it by the relay chain block author. It stays full until that candidate’s data is available or a timeout is reached. The process repeats. Behind the scenes, validators use the core index to determine which candidates to validate in approval checking.
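
As a minimal sketch of that lifecycle (hypothetical names and timeout value, not the actual runtime types):

type BlockNumber = u32;
type CandidateHash = [u8; 32];

// Hypothetical sketch of a core's occupancy lifecycle; not the real runtime types.
enum CoreState {
    // The relay chain block author may place a newly backed candidate here.
    Free,
    // Stays occupied until the candidate's data is available or the timeout is hit.
    Occupied { candidate: CandidateHash, backed_in: BlockNumber },
}

// Assumed value, for illustration only.
const AVAILABILITY_TIMEOUT: BlockNumber = 10;

fn advance(core: &mut CoreState, now: BlockNumber, data_available: bool) {
    let timed_out = match core {
        CoreState::Free => return,
        CoreState::Occupied { backed_in, .. } => {
            now.saturating_sub(*backed_in) >= AVAILABILITY_TIMEOUT
        }
    };
    if data_available || timed_out {
        *core = CoreState::Free; // ready for the next candidate
    }
}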

We currently expect that validators are partitioned into backing groups, where each backing group is responsible for a single core, and the candidates backed by that group may only go into this core. This decision was made when we assumed 1 parachain per core, not many parachains per core or many cores per parachain, or regular updates to this scheduling.

Elaborating on desired properties

(1) is related to availability and approval-checking and the fact that candidates are eagerly checked before finality. Overburdening validators with work would lead to potentially unbounded finality delays, which is undesirable. Therefore, the total rate of candidates must be bounded.
Formulation: the total data and execution burden on the network must stay below a set bound, and the individual validators’ share of these burdens should be equal in expectation

(2) is about Polkadot holding nodes accountable for attesting to invalid candidates. If a node expects to be slashed, the marginal cost of submitting additional invalid candidates is zero. That is, it shouldn’t be the same 1 or 2 nodes introducing all candidates to the system. (Gambler’s ruin - we don’t want nodes taking e.g. 500 attempts of getting bad candidates approved for the price of 1).
Formulation: the tokens backing each individual state transition pending finality should be sufficiently decorrelated (we can actually remove this, assuming disablement, according to @burdges)

(3) Each state transition carries some constant overheads, for example in erasure encoding, fetching, decoding, availability attestations, and in particular during approval checking. Amortizing these costs by bundling data packets and validity checking responsibilities for many small state transitions is quite valuable.
Formulation: the constant costs of data availability and state transition verification should be possible to amortize in order to accommodate small state transitions
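
As a toy illustration of the amortization argument (all numbers purely hypothetical):

// Toy arithmetic only; the actual overheads are not these numbers.
let constant_overhead_ms = 20.0; // erasure coding, fetching, attestations, per candidate
let execution_ms = 5.0;          // work for one small state transition
let transitions = 8.0;

// Eight separate candidates pay the constant overhead eight times...
let unbundled = transitions * (constant_overhead_ms + execution_ms); // 200.0 ms
// ...while one bundled candidate pays it once.
let bundled = constant_overhead_ms + transitions * execution_ms;     // 60.0 ms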

(4) By limiting each fork of the relay chain to a single fork of each parachain, and utilizing the fork-choice rule of the network to reorganize away from bad candidates, we can process and execute candidates (sending messages & updating head) optimistically prior to finality. This can be done as soon as data availability is concluded, as at this point approval-checking and disputes are possible.
Formulation: any fork of the relay chain must execute at most one fork of each parachain, and should do so immediately after data availability

(5) Scaling resources must be complemented by effective allocation of those resources. Independent, granular markets for data and execution are the holy grail here.
Formulation: parachains must be able to reserve data and execution resources with a bounded time horizon and reliably consume them at an expected rate

Grading each of these properties on a scale from 1-5:

(1): 5/5: Cores are excellent for rate-limiting data and execution burdens
(2): 5/5: Cores are excellent for decorrelating stake behind state transitions, by limiting the candidates that specific validators may back.
(3): 3/5: Cores are reasonably good at amortizing data and availability with bundling. This isn’t a 5/5 as arbitrary bundles aren’t supported, only bundles for state machines pre-scheduled on the core.
(4): 5/5: Cores fulfill the goal of fast execution of candidates immediately after availability
(5): 2/5: Cores function poorly for scheduling data and compute resources. Parachains on cores experience friction with other parachains on the same core pending availability, and parachains seem to need to have some affinity to specific cores, at least while maintaining goal (2). These problems get even worse when a parachain attempts to consume resources from multiple cores simultaneously. Furthermore, cores assume a specific ratio of data & execution time rather than being informed by market supply & demand.

Based on this rubric, it’d seem that cores are, on balance, a good solution - however, goals (3) and (5) are highly significant to the value proposition of Polkadot, in providing coretime to both large consumers and small consumers. In reality, the current architecture makes sure that the system doesn’t use too many resources, but it doesn’t make good use of the system’s resources. It’s natural that our architecture has run into issues here, as it was built with the assumption that 1 parachain = 1 core (i.e. addressing only a thin slice of the blockspace demand distribution).

Solution Sketch

A potential solution: change the algorithm for determining “backing validators” and remove all core affinity for particular validators’ backed candidates. Push the responsibility of putting backed candidates into cores onto the relay chain block author itself.

Note that the key issues with solving (3) and (5) stem from addressing the entire coretime distribution:

  • The long tail of low-coretime parachains. This maps roughly onto the bootstrapping or tinkering audiences which need low volumes of coretime. They may require occasional large candidates or very regular small ones.
  • The fat tail of high-coretime parachains. This maps onto an audience of coretime consumers which are capable of utilizing, in practice, as much coretime as they can acquire. They require regular large candidates at a frequency higher than the relay-chain itself, e.g. 4 or 8 entire cores.
  • Coretime currently comprises two resources (data availability and execution time) expressed in an arbitrary ratio. The actual demand distributions and supply capabilities of the network may not match this ratio, which leads to coretime in its current instantiation being an imperfect good.

The frictions we experience in (3) and (5) map onto scheduling tools imagined for those user groups:

  • Bundling: When many small state transitions are backed by the same validators, they can be bundled and amortized by placing them all into the same core simultaneously. Data and execution burdens are combined.
    • Bundling achieves better results when bundles can use “spillover” resources from other free cores and are not tethered to any specific core.
    • The bundling problem: Backing many small state transitions is only viable when there is significant overlap in their backers, for the purpose of bundling and amortizing their costs through availability and approvals
  • Multi-core parachains: Parachains utilizing multiple cores benefit from minimal ambiguity over backers and core affinity for particular state transitions.
    • The current definition of backing groups and core affinity requirements lead to huge scheduling inefficiencies
    • The elastic scaling problem: When attempting to back a sequence of sequential candidates for a single parachain in parallel, core affinity for the parachain or an excessive amount of potential backers causes significant friction

Setting this down concretely implies that the definition of backing group and the implicit core affinity stemming from it are the root of the problem. We are bringing in too many backers for multi-core parachains, introducing coordination overhead. Core affinity for particular backers reduces resource utilization in general. Small state transitions benefit from sharing backers.

Sketch of the solution:

  • backers should be assigned to parachains, not cores. backers may be assigned to multiple parachains simultaneously.
  • backers’ assignments are shuffled regularly and deterministically to avoid liveness issues from offline or censoring backers (just as today groups rotate across cores)
  • any candidate (or bundle) can go in any core as long as they are backed by enough backers from the parachains’ group. The relay-chain block author makes a selection based on what they have received from gossip and is rewarded for packing cores optimally.
  • large parachains should have dedicated backers, and small parachains should share their backers with other small parachains. ideally this property emerges as a continuous one with some nice algorithm for assigning duties
  • some backing groups will be responsible for many small parachains. This is for the purpose of bundling (handling long-tail low-coretime parachains gracefully)
  • some backing groups may be responsible for a single parachain, and the amount of backers in that group would be the minimum amount required to handle the amount of cores needed for that parachain (handling fat-tail high-coretime parachains gracefully)

With an algorithm that assigns backers to parachains more intelligently and adaptively we get much closer to fully solving (3) and (5). It’s not perfect for (5) as Polkadot still sells fixed ratios of data and execution as coretime. However, the decoupling of backing groups from core indices is a useful step in the direction of granular markets here. i.e. we have no hard requirement that a “core” even corresponds to a particular amount of resources anymore.
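
For concreteness, here is a rough sketch of what the backing-validity check becomes once affinity moves from cores to parachains (hypothetical types and threshold, not actual runtime code): the block author places the candidate into any free core, and the runtime only checks that enough members of the parachain's assigned group signed it.

use std::collections::HashMap;

type ParaId = u32;
type ValidatorIndex = u32;

struct BackedCandidate {
    para: ParaId,
    backers: Vec<ValidatorIndex>,
}

// Sketch: validity depends on the group assigned to the parachain, not to any core index.
fn backing_is_valid(
    candidate: &BackedCandidate,
    group_for_para: &HashMap<ParaId, Vec<ValidatorIndex>>,
    backing_threshold: usize, // assumed, e.g. 2
) -> bool {
    let Some(group) = group_for_para.get(&candidate.para) else {
        return false; // unscheduled parachain
    };
    let signatures_from_group = candidate
        .backers
        .iter()
        .filter(|&v| group.contains(v))
        .count();
    signatures_from_group >= backing_threshold
}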



Is processes some new term for parachain? If so, it risks confusion: the rebranding itself causes confusion, but also regular devs use terms like process and task, and those terms are quite generic anyways. Architecture terms like core remain safer since regular devs never touch them directly. Also, we’ll eventually consider multi-chain abstractions, like a swarm of zk utxo parachains à la Daira Hopwood, so if you want to change terminology now then you incur some future-proofing burden. We’d ideally figure out more before doing big renames.

Anyways we should clarify the real issue here…

Asynchronous backing permits parachain candidates to build upon slightly older relay parents, so they can be processed at full relay chain speed, without blocking upon the relay chain itself, but each candidate still being in a different step in the backing-availability-approvals process.

We’ll take this further by permitting the relay chain to handle a short sequence of parachain candidates simultaneously, with multiple candidates in the same step in the backing-availability-approvals process. At the relay chain level, we view candidates as being transitions of the parachain’s state root, so this means a short sequence of transitions gets considered in parallel.

At the same time, we should group multiple parachain blocks together whenever possible because this reduces workload, ala Idea: Core Groups · Issue #7441 · paritytech/polkadot · GitHub or https://github.com/paritytech/polkadot/pull/6782. We thus face the question: Among backing cores aka backing groups, availability cores, and approval cores aka execution cores, which handle the same parablock groupings and which differ?

Alright…

We risk that a backed parachain candidate would never be made available aka included. We seemingly exacerbate this risk if we permit parallel candidates in backing, so maybe we’ll never do so, meaning a short sequence of parallel candidates must go through the same backer together. We’ll maybe reconsider this risk more carefully later, but right now I’ll assume backing cores/groups produce a sequence of parablocks bundled together.

As discussed in Idea: Core Groups · Issue #7441 · paritytech/polkadot · GitHub we’re now convinced relay chain block producers should unbundle backing core contents and rebundle them for approval cores, simply because different parablocks have different resource usage, and doing so can homogenize load. Ideally availability votes should correspond to approval cores, so this rebundling should happen immediately after backing, when the candidate receipts first appear on-chain.

We’ve still much to discuss on how backing cores aka backing groups do their work, which is maybe the main topic here.

Changed the text to say parachain; process is probably a more general term for non-chain things but let’s avoid confusion here.

We might allow sequential chains of candidates pending availability to have different backers per candidate, but it’s probably not necessary until parachains need 8 or more simultaneous cores. The risks aren’t so much increased as the consequences, but we could probably alleviate both by introducing alternative or fallback routes for availability. I’d prefer to leave this out of scope for now, as it seems like a solvable problem (1. select the minimum number of backers to serve X cores 2. have them coordinate work efficiently 3. mitigate consequences of single backer withholding availability)

we’re now convinced relay chain block producers should unbundle backing core contents and rebundle them for approval cores

I prefer to frame this differently, as it’s not necessarily that backing groups even are “cores” but rather that there needs to be some overarching backing process which is mapped onto availability cores by relay chain block producers, and this needs to work well for both multi-core chains and fractional-core chains. We might only need “availability cores” and “approval cores”, which I don’t see any current motivation to make distinct, as they serve roughly the same purpose (goal (1)).

We’ve still much to discuss on how backing cores aka backing groups do their work, which is maybe the main topic here

Yeah, with that, let’s get back to the main topic. What do you think of the solution sketch set down at the end? The requirements for a solution there don’t seem particularly difficult. The main complicating factor is that resource allocations to different parachains can change very often, much more so than our per-session calculation of groups.

It’s basically fine…

  • backers should be assigned to parachains, not cores.

Yes, this sounds fine. Relay chain block producers could arrange candidates into cores, with only one notion of cores for both availability and approvals.

backers may be assigned to multiple parachains simultaneously.

Afaik sounds fine. If we hit congestion then we’ll solve that later. It’s a good problem to have.

  • backers’ assignments are shuffled regularly and deterministically to avoid liveness issues from offline or censoring backers (just as today groups rotate across cores)

We still keep them in the same groups though? I’m unsure if our reasons for doing this really matter now.

  • any candidate (or bundle) can go in any core as long as they are backed by enough backers from the parachains’ group. The relay-chain block author makes a selection based on what they have received from gossip and is rewarded for packing cores optimally.

Yes, the block author who includes the backing statements.

  • the amount of backers assigned to a parachain should be in proportion to the amount of coretime allocated to it in the near future.

This drops our assumption that backers can do the whole bundled sequence. That’s doable, but we need to make some choices, including which problems we consider to be our problems vs parachain teams’ problems.

  • some backing groups may be smaller, and will be responsible for many small parachains. This is for the purpose of bundling (handling long-tail low-coretime parachains gracefully)

We’ve this idea that validators apply back pressure by doing less backing. After we land rewards then we’d likely enforce this by making approvals be worth more than backing, net of costs like bandwidth. We could then simply let backers advertise their capacity, so if you’ve a bigger node then your node has more spare cycles, and you earn more.

We’re likely a ways off from doing rewards, so maybe some of this could be punted till then.


Not clear if that’s important or necessarily what we get from it, except for maybe predictability in networking or some smoothing out of variance caused by coordinating censoring validators. The main properties we need here are near-term (~1 minute) predictability & rotation of duties. Reusing the existing logic for shuffling validators every session and then rotating the list every minute seems fine here.

In what sense? Backers are running with multiple CPU cores, so even a single backer or a small group of two could handle backing all the sequential candidates at once. “In proportion” is a bad wording: backers should be responsible for some amount of total near-future coretime, and their responsibilities are assigned through processes, not cores. Large parachains should have dedicated backers, as few as needed, and small parachains should share backers with other small parachains. We also need some “wildcard” assignment for on-demand scheduling.

That’s pretty elegant, as long as we keep availability rewards the dominant factor to avoid a race-to-supercomputer for validators. Though this seems related to the algorithm for selecting backers I’d prefer to leave it out of scope for now.

Yes it’s fine, proportional but bounded.

We need back pressure anyway so right now the parachain team suggests we apply back pressure all the way to backing right away.

Any validator has more approvals assignments than backing jobs, so we’d wondered if validators could reject some of their approval checker assignments. Yet, we’ve never figured out how this impacts security, so probably a bad idea.

Instead, we now suggest overloaded nodes simply delay their own backing work, which brings no soundness consequence. It’s also a reason to have smaller backing groups, so the system feels the back pressure sooner, which should improve performance but harm liveness.

In principle, we could even eliminate backing groups altogether: We define a random permutation V of the validator indices based upon the epoch randomness and relay parent block height mod 3 or whatever. We separately maintain a list C of parachains weighted and sorted by their likelihood of buying a slot. Your parachain_index is your location in C, which we map into V. Alfonso maybe knows approximations for this packing problem, but one dirt-simple one goes:

let d = parachain_index / num_validators; // which pass over the validator list
let i = parachain_index % num_validators; // position within that pass
// Alternate direction on each pass so load spreads evenly across V.
let backer_zero = if d % 2 == 0 { i } else { num_validators - i - 1 };
let backer = (backer_zero + backer_priority) % num_validators;

We access any backer we like in V by increasing backer_priority, but cost increases rapidly as backer_priority increases, so parachains should typically send to backer_zero or accept being back pressured if backer_zero feels overloaded right now.
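
Purely as an illustration of “cost increases rapidly” (the actual fee curve is not specified here), the surcharge could double per step of backer_priority, which is why collators would normally target backer_zero:

// Illustrative only: an exponential surcharge for skipping past backer_zero.
fn backer_fee(base_fee: u128, backer_priority: u32) -> u128 {
    base_fee.saturating_mul(1u128 << backer_priority.min(127))
}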

It’s free to maintain V of course, and even simple to use, but I’ve no idea how to even define the weighting in C, much less maintain C itself. We could just move parachains around in C whenever they produce a parablock, I guess.

I think the PoV size and execution time for the parachain blocks need to be known in advance for any algorithm that wants to assign backers to parachains efficiently. It could also work if the coretime the parachains buy comes with a constraint limiting maximum execution to a fraction of a single core. WDYT?

In case of disputes, we already apply back pressure by dropping bitfields and backed candidates, because we prioritize the dispute statements on chain over anything else. So we’d prioritize PVF execution to work on dispute checks first, then approval checking, and backing last.

How does this relate to bundling for approval checking? I expect the CandidateCommitments to include execution time for candidates and we could be even more efficient down the pipeline if we bundle things further as ExecutionGroups like in Idea: Core Groups · Issue #607 · paritytech/polkadot-sdk · GitHub

We do probably need to set minimums of data/compute per candidate to account for constant overheads of validation. Setting strictly specific sizes is probably unnecessary, though maybe some rounding up algorithm is desirable. This topic deserves a forum thread of its own, but this problem should be either solvable with incentives or with mandates. By incentives, I mean that chains using less than a full core for a candidate would be charged extra when they aren’t bundled with others (i.e. they didn’t make packing easy for backers). By mandates I mean that the protocol would be very strict with the particular candidate sizes which are allowed. I’d like to lean towards the incentive design space rather than the mandate design space where possible.

The general issue we’re running into is that chains may not coordinate on which sizes of candidates they produce, e.g. one chain might be producing candidates which are 60% of the core. You can’t pack two of these together, and it only packs cleanly with other candidates which are 40% of the core.

We likely want some rounding schedule, where there are fixed amounts of resource utilization that candidates get rounded up to. A linear rounding schedule might result in bands like this:

  1. Between 0-20% → 20%
  2. Between 20-40% → 40%
  3. Between 40-60% → 60%
  4. Between 60-80% → 80%
  5. Between 80-100% → 100%

But it’s probably better to set an exponentially decaying rounding schedule with some fixed bottom size like 1/32 or 1/16:

  1. (1/2 + ε) to 1/1 → 1
  2. (1/4 + ε) to 1/2 → 1/2
  3. (1/8 + ε) to 1/4 → 1/4
  4. (1/16 + ε) to 1/8 → 1/8

These are essentially “coretime frequency bands”, which are orthogonal to coretime consumption (amplitude, in the wave analogy).
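
A sketch of that exponential schedule, assuming a 1/16 floor and utilization expressed in parts of 57600 (one full core, per RFC-5):

// Sketch only: round a candidate's resource utilization up to the nearest band.
// Bands: 1/16, 1/8, 1/4, 1/2, 1 of a core; utilization is in parts of 57_600 (RFC-5).
const FULL_CORE: u32 = 57_600;

fn round_up_to_band(utilization: u32) -> u32 {
    let mut band = FULL_CORE / 16;
    while band < utilization.min(FULL_CORE) {
        band *= 2;
    }
    band
}

For example, a candidate using 30% of a core would be rounded up to half a core, while anything at or below 1/16 pays the 1/16 floor.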

Let’s start another topic to go over this?

Here’s a more concrete sketch of what this might look like:

// The (maximal) rate of coretime consumption of this chain
// This is a number treated as a rational over 57600 
// (from RFC-5), where 57600/57600 means one core. This value may be greater than 57600 to signify
// that the chain has more than one core assigned.
CoretimeAssignments: ParaId -> CoretimeRate, 
BackingGroupAssignments: ParaId -> BackingGroupIndex,

// Total coretime rate of all parachains assigned to the backing group.
BackingGroupUtilization: Vec<CoretimeRate>, // len: number of backing cores

// Possibly useful: a partition of group indices around a pivot point.
//
// This is a pivot point + a vector of all group indices (len: number of backing cores) with
// the property that all entries in this vector before the pivot point are full, and all entries in this vector
// after the pivot point are empty.
BackingGroupPartition: (u16, Vec<BackingGroupIndex>);

The rough idea is this:

  • Partition validators into backing groups (as we do now)
  • We track the total coretime assignment (a single u32) across all parachains assigned to a backing group. The target consumption rate of a backing group is 1 core’s worth of coretime, but this target is soft. Validators rotate across backing groups, same as today.
  • Each parachain has a mapping to its total coretime consumption rate (a single u32).
  • Each parachain is assigned to a backing group. Many parachains may be assigned to a single backing group
  • Increasing or decreasing the coretime consumption rate for a parachain is O(1) - just update the coretime mapping and the total rate of its assigned backing group. Note that backing groups at this point may be poorly balanced or overfull.
  • When a parachain does not have a bucket yet but has coretime added to it, a group is chosen arbitrarily (or we could use the BackingGroupPartition to choose the first non-full backing group)
  • Validators post rebalance_backing(Vec<(ParaId, BackingGroupIndex)>) transactions to the chain (using the new Substrate Tasks API). These transactions are only valid if the new rebalancing is strictly better than the previous one. This runs in O(n) time in the length of the vector (touching 2 groups per mentioned parachain - the old group and the new group), followed by an O(n) fn evaluate_improvement which requires that the new balancing is strictly better than the old one. A rough sketch of this step follows after this list.
  • We may also consume weight in on_idle to eagerly rebalance groups, though this requires a reverse index.
  • The relay chain runtime, when verifying backing signatures, just needs to check the BackingGroupAssignments mapping for the parachain. We will keep around some small amount of historical group assignments for parachains to make asynchronous backing work (i.e. we must know which group a parachain was assigned to at some recent relay-parent).
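
A rough sketch of how a rebalance transaction could touch storage, following the layout above (hypothetical names, with std collections standing in for runtime storage; not actual runtime code):

use std::collections::HashMap;

type ParaId = u32;
type BackingGroupIndex = u16;
type CoretimeRate = u32;

// Stand-ins for the storage items sketched above.
struct State {
    coretime_assignments: HashMap<ParaId, CoretimeRate>,
    backing_group_assignments: HashMap<ParaId, BackingGroupIndex>,
    backing_group_utilization: Vec<CoretimeRate>, // len: number of backing cores
}

fn apply_rebalance(state: &mut State, moves: &[(ParaId, BackingGroupIndex)]) {
    for (para, new_group) in moves {
        let rate = *state.coretime_assignments.get(para).unwrap_or(&0);
        // Touch exactly two groups per moved parachain: the old one and the new one.
        if let Some(old_group) = state.backing_group_assignments.insert(*para, *new_group) {
            state.backing_group_utilization[old_group as usize] -= rate;
        }
        state.backing_group_utilization[*new_group as usize] += rate;
    }
    // A real version would then require evaluate_improvement(..) to report a strict
    // improvement over the previous assignment before accepting the transaction.
}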

This approach is inspired by the Phragmén election code in sp-staking, and the staking-miner.
The only missing piece is the evaluation algorithm, which seems like it is easily achievable. The rough interface is here:

// Evaluate the improvement of a rebalancing transaction based on the changes in backing groups and the number of steps.
// 
// This could either take _all_ backing groups or just the ones touched.
// The input is `(group_index, state_before, state_after)`
fn evaluate_improvement(
  buckets: &[(BackingGroupIndex, Bucket, Bucket)],
  steps: u32,
) -> u32;

For the moment, the main thing we want to avoid is groups which are overfull. Therefore, a simple algorithm might evaluate the improvement as:

  • 0 if there are any new overfull groups
  • Some positive number when there is a reduction in overfull groups
  • or 0 if the steps taken are over some constant maximum (say, 10 or 20). This avoids wasting time. (Some stricter relationship may be needed to avoid degenerate situations where improvements exist but can’t be made within a short transaction.) A sketch of this rule follows below.
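
Assuming Bucket exposes the group's total assigned coretime rate and its soft capacity (one core's worth), that evaluation might look like this, purely as a sketch:

type BackingGroupIndex = u16;

// Assumed shape: the group's total assigned coretime rate and its soft capacity.
struct Bucket {
    assigned_rate: u32,
    capacity: u32,
}

fn evaluate_improvement(
    buckets: &[(BackingGroupIndex, Bucket, Bucket)], // (group, state_before, state_after)
    steps: u32,
) -> u32 {
    const MAX_STEPS: u32 = 20; // avoid wasting time on long transactions
    if steps > MAX_STEPS {
        return 0;
    }
    let overfull = |b: &Bucket| b.assigned_rate > b.capacity;
    // Any group that becomes overfull as a result of the move voids the improvement.
    if buckets.iter().any(|(_, before, after)| overfull(after) && !overfull(before)) {
        return 0;
    }
    let overfull_before = buckets.iter().filter(|(_, b, _)| overfull(b)).count() as u32;
    let overfull_after = buckets.iter().filter(|(_, _, a)| overfull(a)).count() as u32;
    overfull_before - overfull_after // positive only when overfull groups were reduced
}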

Though I am curious to hear if there are strong arguments for doing better balancing (the main motivation would be reducing backing reward variance, as far as I can see) and what proposals there are for better algorithms. It might be useful to “pack” earlier groups and leave later ones empty, for instance. We have to make a tradeoff between tolerating over-full buckets and minimizing churn in the allocations of chains to groups.

With assigning backers directly to parachains, do we still even need the concept of cores? I’ve always found core regions very confusing and think this is simpler. Secondarily, and I realize this isn’t the most important concern for protocol design, but the metaphor of a distributed computer feels outmoded and I think a more market-based allocation model would better reflect Polkadot’s similarity to more recently designed systems.

Here’s how I’m thinking about this at a very high level (I may be missing some things, particularly around availability). If we examine the role of backers, they’re essentially proposers in other off-chain protocols and have two functions:

  • Staking on the validity of state transitions
  • Ensuring data becomes available

We’d like to decouple these in order to have better resource management.

The former involves a relationship between backers and parachains. This is where resources among individual approval checkers are managed, since backers make claims about execution time and total slashable stake per committee (according to each of our schemes for slashing approval checkers).

This can even be an open set, e.g. collators could choose to stake on their own claims without being Polkadot validators or make out-of-protocol arrangements with backers. In practice, I think cost would make this unlikely vs. auctions and on-demand, and I’m not sure we’d gain anything from it, but all that matters for safety is that the stake is sufficient.

Ensuring data becomes available involves a relationship between backers and relay chain block producers. Since availability defines inclusion, and we foresee message passing and availability recovery as bottlenecks in scaling out the validator set, this is where overall system resources are managed.

I don’t think it makes sense to call the latter function “cores”. They’re more something like “blob-units” (not an elegant term) and can be bundled independently of the backer-parachain relationship, although I’m not sure how this intersects with using a fast path for recovery directly from backers or differentiating full witness data from post-state.

No, we don’t. This is a step in the iteration of Polkadot’s design towards selling data availability and compute resources directly.

True, though this assumes a level of market depth and maturity which currently doesn’t exist. Some level of enshrined scheduling + things like tips (for relay chain block authors to make decisions when slow availability limits throughput), would begin to develop the market for offchain coordination. I believe in the long run it is likely that enshrined backing validator assignments could go away entirely.

Though balancing availability-distribution responsibilities is also probably important in practice - on the extreme end, if 1 validator backed all the candidates, it’d also be responsible for erasure-coding and distributing the data to all other validators.

Yeah, “core” may not be the best term for that functionality, but I’m trying to be incremental in terms of language adjustments and would prefer to leave this beyond scope for now.


After some discussion with @eskimor -

The main drawback of probabilistic scheduling is that candidates can expire before they are put into a relay chain block, leading to wasted work. Many of the comments in RFC-3 were about this.

Probabilistic scheduling works better the longer candidates can stick around. There is an asynchronous backing protocol parameter called allowed_ancestry_len which dictates the maximum age of a candidate’s relay-parent. Improving this is mostly about better garbage collection in the implementation, but in theory it could be set to several minutes long.
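
Concretely, the freshness constraint amounts to roughly this check (hypothetical helper, not the actual implementation):

// Hypothetical sketch: a candidate stays usable while its relay-parent is within
// allowed_ancestry_len blocks of the current relay chain head.
fn relay_parent_in_scope(
    relay_parent_number: u32,
    current_head_number: u32,
    allowed_ancestry_len: u32,
) -> bool {
    current_head_number.saturating_sub(relay_parent_number) <= allowed_ancestry_len
}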


We need something to vote on in availability bitfields, and in approvals assignments and votes. This something is still accessed by a small numerical index, because that’s required by bitfields and VRFs. Also, this something now becomes larger than a single parachain block. It’s true this something only does part of what the old core did, but the term seems innocuous.

Job or task are more accurate terms of course, but they’re already heavily overloaded in software terminology. We audit parachains so you could call them audits or dossiers maybe.


It just came to mind that there also exists another boundary: group rotation boundaries. If a group receives more advertisements than it can handle before the relay parents go out of scope, then this would also lead to wasted effort (on the collator side). But group rotation is already a minute. If we stick to regions being relatively small (minutes), this is probably not a big deal and might not even matter in practice.


Isn’t this really more a parachain feature than some global setting? If a parachain P makes few blocks, then a backer could sit on a parablock from P for longer until it finds time, and then the relay chain could sit on the candidate for longer until it fits nicely. If we’ve few backers allowed for a parachain, then we could’ve them wait until rotation, and the relay chain could wait for quite a while.

Above I kinda discussed some approaches to minimize the probabilistic element, but only given some estimation of how likely each parachain is to make a block.

Yeah, this is the goal. I also like your approach from a few posts prior that groups could in theory be assigned to “every” parachain, but they might get rewarded less for backing stuff outside of their initial mandate. This would smooth over things like inactive backing groups in the case that candidates sit for too long without being backed - they’d just get picked up by some alternative group.

What I mean is that in practice candidates do currently expire and that expiry time is based on their relay-parent (anchor) block’s age. We should work to make the expiry time for candidates long enough that it is effectively not an issue. This is an orthogonal project to enhance probabilistic scheduling.

We should limit connections somehow, so if you’re merely paying backers less then you should cap backer_priority, but maybe at something larger than a backing group, so 7 vs 3 or 10 vs 5 or whatever. I suggested allowing every pairing only by making the collator pay more, but prices become unreachable fast, so functionally this adds little, maybe reduces connections by 50%.

How do we negotiate connections now? A collator opens a connection to its backers, maybe proves it has an upcoming slot, sends the announcement consisting of the PoV block header, so then finally the backer requests the body if interested? Or do we somehow have the backer opening the connection?

I wondered if directed gossip is maybe possible, so validators gossip PoV headers towards validators with low enough or much lower backer_priority or something, but I’m unsure why the collator would have some connection to another relay chain node.

Anyways I’m unsure how we’d estimate how likely each parachain is to make a block, probably that’s too simplistic a perspective if they have different payment methods, reasons, etc for making the block.

And another boundary which might become problematic: sessions. We currently invalidate anything that was backed in a previous session on a session boundary. If we build up a significant amount of work (e.g. worst case minutes’ worth of candidates) this would be a serious waste and friction.


It’s not ideal but is very much out of scope here IMO (the session boundary restriction is totally orthogonal to what we’re discussing in this thread).