Path to fast-unstake in Polkadot (+ Kusama)

I want to use this post to shed some light on our suggested path for the fast-unstake feature.

Some links:

  1. Original PR: https://github.com/paritytech/substrate/pull/12129
  2. Section in Polkadot Staking Update: Staking Update: August-September 2022

To rephrase, this feature allows those who have staked by mistake, or who for whatever reason are no longer actually exposed (and are thus not earning any rewards anymore – exactly like a recent report here: Only inactive validators after nomination - #3 by kianenigma), to unstake faster.

This pallet already exists in all runtimes. It works on the basis of leftover block weight, consuming it opportunistically (on_idle, for the technical folks) to progress the unstake process. The pallet consumes at most a bounded amount of weight, measured in the number of eras that are checked per block (ErasToCheckPerBlock).
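To make the on_idle budgeting concrete, here is a minimal sketch of the idea in Python. This is illustrative only, not the actual Substrate implementation; the function name and weight parameters are hypothetical.

```python
# Hypothetical sketch of the on_idle budgeting described above.
# Names and weight units are illustrative, not the real pallet's.

ERAS_TO_CHECK_PER_BLOCK = 1  # the configurable cap discussed in this post

def eras_checkable_in_block(remaining_weight: int, weight_per_era_check: int) -> int:
    """How many eras fast-unstake may check in this block.

    The work is capped both by the configured ErasToCheckPerBlock and by
    the leftover weight the block actually has available.
    """
    affordable = remaining_weight // weight_per_era_check
    return min(ERAS_TO_CHECK_PER_BLOCK, affordable)

# A full block with no leftover weight does no fast-unstake work at all:
assert eras_checkable_in_block(0, 100) == 0
# An idle block still checks at most ErasToCheckPerBlock eras:
assert eras_checkable_in_block(10_000, 100) == 1
```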

This cap is currently set to 0 in Polkadot and Kusama, so the pallet is effectively not doing anything. On Westend, the pallet has been enabled for a while now and has been tested.

After a set of discussions with @ross, the primary maintainer of the staking dashboard (staking.polkadot.network), we estimate that this feature will be available in the UI by mid-January. Since changing the aforementioned parameter through governance itself takes time, I suggest we begin discussing enabling this pallet now.

The testing so far has been mostly positive, and we have found only a few minor issues.

The most noteworthy issue is that an unexposed nominator who also happens to have been slashed in the past cannot use this pallet. First, this is a very rare case anyhow, because to be slashed one needs to have been exposed at some point, which contradicts the main requirement of fast-unstake: it is only for unexposed nominators. So only a small group of nominators is affected. Moreover, any UI can easily detect this situation and not recommend the fast-unstake process to a nominator who has been slashed in the past.

The second issue we have found so far is that while the fast-unstake process does not consume too much weight under any circumstance, it might mistakenly report a very large amount of weight for itself. This could starve subsequent on_idle hooks in the runtime, but it has no serious consequence for the rest of the system. Moreover, no pallet other than fast-unstake in the Polkadot and Kusama runtimes uses on_idle.

Both of these issues are fixed in a recent PR.

While I initially intended to propose starting the process of enabling this pallet in both Polkadot and Kusama at the same time, given the two known issues above, I suggest we follow the usual path: enable it in Kusama first, and if neither of the known issues causes problems, move on to Polkadot.

Lastly, this brings us to the question: how many resources should be allocated to fast-unstake? Recall that this amount is measured in "number of eras checked per block". In reality, the cost is also a function of the number of active validators, and the overall complexity of checking is eras * validators storage reads. This means that in Kusama, where we have 1000 validators, the process takes considerably more weight. Based on my estimates, with ErasToCheckPerBlock = 1:

  • in Polkadot each block we take around 12% of the block weight.
  • in Kusama each block we take around 66% of the block weight.
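The eras * validators cost model above can be sketched as a back-of-the-envelope calculation. The Polkadot validator count used here is an illustrative assumption (roughly 300 at the time of writing); only the Kusama figure of 1000 comes from the post itself.

```python
# Back-of-the-envelope cost model for fast-unstake checking, as described
# above: checking costs roughly eras * validators storage reads.
# The Polkadot validator count below is an illustrative assumption.

def storage_reads(eras_to_check: int, active_validators: int) -> int:
    return eras_to_check * active_validators

polkadot_reads = storage_reads(1, 300)   # assumed ~300 active validators
kusama_reads = storage_reads(1, 1000)    # 1000 active validators (per the post)

# Kusama's per-block cost is a few times Polkadot's, which is consistent
# with the measured weight gap (roughly 12% vs 66% of a block).
assert kusama_reads > polkadot_reads
```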

Again, recall that this amount of weight is only consumed if and only if the block has enough unused weight once all other operations are over.

Most recent Kusama and Polkadot blocks are around 25% full, so both chains should work fine with the above config, and any "fast unstake request" will be fulfilled within 28 blocks (~3 minutes), even in the slowest possible config.


With all that said, the calls needed to enable the feature are as follows.

Polkadot: Polkadot/Substrate Portal
Kusama: Polkadot/Substrate Portal


The final outcome of this, enabling fast-unstake in Polkadot, has passed as referendum #111.
