Allow "Restaking" of relay-chain stake through custom slashing privileges

The hard part of bootstrapping security for any staking protocol is gathering enough on-chain stake to make the cost of buying out the protocol prohibitive. However, PoS systems have the property that once stake is bonded, it can be reused for multiple purposes without undermining the security of any of those purposes.

In Polkadot, we can implement restaking by allowing validators to specify a custom AccountId which is allowed to slash them. Parachains themselves can act as accounts on the network, which means that they can implement custom slashing logic that is enacted via XCM to the relay-chain, or in the future to the staking-chain.
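To make the idea concrete, here is a minimal sketch (not the actual pallet-staking API; all names are invented) of what granting custom slashing privileges to an account, such as a parachain's sovereign account, could look like:

```rust
use std::collections::HashMap;

type AccountId = u64; // stand-in for a real 32-byte account id
type Balance = u128;

#[derive(Default)]
struct Restaking {
    // validator -> (authorised slasher, stake at risk for that slasher)
    grants: HashMap<AccountId, (AccountId, Balance)>,
    stakes: HashMap<AccountId, Balance>,
}

impl Restaking {
    fn bond(&mut self, who: AccountId, amount: Balance) {
        *self.stakes.entry(who).or_default() += amount;
    }

    /// Validator opts in: `slasher` may later slash up to `exposure`.
    fn grant_slash_rights(&mut self, who: AccountId, slasher: AccountId, exposure: Balance) {
        self.grants.insert(who, (slasher, exposure));
    }

    /// Called by the authorised slasher (e.g. via an XCM from a parachain).
    fn slash(
        &mut self,
        origin: AccountId,
        who: AccountId,
        amount: Balance,
    ) -> Result<Balance, &'static str> {
        let (slasher, exposure) = *self.grants.get(&who).ok_or("no grant")?;
        if origin != slasher {
            return Err("origin not authorised to slash this validator");
        }
        let stake = self.stakes.get_mut(&who).ok_or("not bonded")?;
        // Cap the slash at both the granted exposure and the remaining stake.
        let slashed = amount.min(exposure).min(*stake);
        *stake -= slashed;
        Ok(slashed)
    }
}
```

The key design point is that the slash is capped by the exposure the validator explicitly granted, so a restaking protocol can never touch more stake than was opted in.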

This would open up the possibility of parachains adding ‘first-class’ functionality to Polkadot by enabling validators to opt-in to do more work with the same stake, like EigenLayr does for Ethereum.


This sounds like an interesting feature! Where do you see the slashing logic living - do you see this being part of the state transition function of the parachain? I wonder if this has the potential to create some attack vectors. Say the slashing logic exists within the state transition function of the parachain, then some malicious actor could gain enough funds on the parachain to force an upgrade through governance which changes the slashing logic and attacks the restakers on the relay chain. I guess it is the responsibility of the restakers to consider such risks before restaking.


Validators definitely have to be able to trust the restaking protocols and accept the inherent risks associated with them. If the state transition function of a parachain is compromised, it certainly could hold any validators using its restaking protocol accountable.

We’d also want UIs to inform nominators about all the restaking that validators are doing, because it exposes them to further risk. Ideally they’d also be able to get notifications of changes in their nominees’ restaking protocols.

Another approach is to allow the posting of “generalized slashing conditions” on-chain: Wasm blobs that stakers can opt into. The game is this: if anyone can provide data which makes the Wasm blob evaluate to ‘true’, or `Some(SlashProportion)`, or something like that, then the validator gets slashed. However, this imposes much higher complexity on the implementation of the restaking infrastructure, while averting the issues with parachain capture. I think this is technically a ‘better’ solution, but much more difficult to implement.
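A sketch of that game, with invented names (on-chain the condition would be an opaque Wasm blob rather than a Rust trait object, and `SlashProportion` loosely mirrors Substrate's `Perbill` idea):

```rust
/// Slash proportion in parts-per-million.
#[derive(Clone, Copy, Debug, PartialEq)]
struct SlashProportion(u32);

trait SlashingCondition {
    /// Evaluate arbitrary submitted evidence. `None` means "no offence".
    fn evaluate(&self, evidence: &[u8]) -> Option<SlashProportion>;
}

/// Toy condition: any evidence blob starting with 0xDD "proves" a
/// double-sign; a real condition would verify signatures and session keys.
struct DoubleSignCondition;

impl SlashingCondition for DoubleSignCondition {
    fn evaluate(&self, evidence: &[u8]) -> Option<SlashProportion> {
        if evidence.first() == Some(&0xDD) {
            Some(SlashProportion(1_000_000)) // 100%
        } else {
            None
        }
    }
}

/// Remaining stake after applying a proportional slash.
fn apply_slash(stake: u128, p: SlashProportion) -> u128 {
    stake - stake * p.0 as u128 / 1_000_000
}
```

Because the condition is a pure function of submitted evidence, anyone can act as the reporter, and a compromised parachain runtime cannot retroactively change the rules a validator opted into.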

On the flip side, we’d want parachains or smart contracts which handle restaking protocols to have a toolkit for rewarding the restaking validator and nominators of the validator - nominators are taking on additional risk, but should also have the possibility for additional rewards.


This makes sense. I’m guessing the Wasm blob pattern you are referring to is SPREE? That would certainly mitigate some of the risks. I guess a way to achieve similar functionality without the requirement of SPREE would be to deploy a common good parachain specialising in restaking functionality. The common good parachain would host the slashing logic and would accept XCMP messages from other parachains containing the slashing proofs you referred to. I wonder how practical “generalised slashing conditions” would be: would they be applicable to a wide range of protocols, or are tailored conditions typically required?

The Wasm blobs I’m referring to would be something new, not SPREE. SPREE is about enforcing invariants of parachain execution, whereas this is about adding a pluggable accountability layer for validators.

These would essentially be a type of smart contract that would execute on the relay-chain or on some kind of common-good/system parachain, as you suggest.


I wonder whether nominators of restaking validators should also opt-in to the restaking. I am not sure informing nominators about additional services is sufficient. IMO, they should also be able to restake a proportion of their original stake, just like the validators.

What first-class functionality are you envisioning? I imagine most parachains would theoretically love to leverage restaking for their collators, but that would impose security risks and should not be possible.

I can also see oracle services, e.g. providing a probabilistic conversion between relay and parachain tokens, being desirable. Without these, it also seems difficult to determine the restaking parachain rewards, as the collateral is relay tokens and not para tokens.

First-class might be data availability, oracles, anything that requires stake & economic security.

Adding an implementation note: I think a rudimentary implementation of this is pretty independent of the current staking/slashing system and can be built purely on top. But if we really want to nail the transparency you envisioned, we’d better rethink what it ought to be from first principles, and bake it into the existing staking system in a more holistic way.

Imagine: in the staking dashboard, you would see a list of all the possible places where a validator is Exposed, and multiple pallets/sub-systems can implement functionality that implies a validator is Exposed. Most often, the more Exposed one is, the more rewards they can earn.

The most basic act that makes a validator Exposed is relay-chain block authoring, and on top of that, all of the parachain consensus tasks.

I think even today, it is not super clear in the staking dashboard and PJS apps if/when a given validator is also Exposed in parachain-related tasks, on top of block authoring. I actually don’t blame them for this, because even in the Rust code this is not super clear. We need a clear interface for registering a validator as “being Exposed by x amount” and a runtime API/RPC for UIs to demonstrate this.
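One hypothetical shape for that registration interface (all names invented): each subsystem reports how much of a validator's stake it exposes, and a runtime API could aggregate these per validator for UIs to render.

```rust
type Balance = u128;

/// Implemented by each pallet/sub-system that puts a validator's stake
/// at risk (block authoring, parachain consensus, restaking grants, ...).
trait ExposureProvider {
    fn exposure_of(&self, validator: &str) -> Balance;
    fn label(&self) -> &'static str; // e.g. "block authoring"
}

/// Toy provider that exposes every validator by a fixed amount.
struct Fixed(&'static str, Balance);

impl ExposureProvider for Fixed {
    fn exposure_of(&self, _v: &str) -> Balance {
        self.1
    }
    fn label(&self) -> &'static str {
        self.0
    }
}

/// What a staking dashboard would render: a labelled breakdown plus total.
fn exposure_report(
    validator: &str,
    providers: &[&dyn ExposureProvider],
) -> (Vec<(&'static str, Balance)>, Balance) {
    let rows: Vec<_> = providers
        .iter()
        .map(|p| (p.label(), p.exposure_of(validator)))
        .collect();
    let total = rows.iter().map(|(_, b)| b).sum();
    (rows, total)
}
```

With such an interface, adding a new source of exposure is just another `ExposureProvider` registration, and the UI gets the breakdown for free.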


One thing that comes to mind w.r.t. nominators is that we may want to be able to put restrictions on the types of exposures that validators can add over time, so nominators aren’t exposed to additional risk as they go.

I remember back when writing the paper I had imagined that validators would also be able to “restake” in that they could use their stake as collateral against a multisig Bitcoin wallet for which they controlled a shard. Slashing would happen if they ever signed anything the chain did not want to be signed (or didn’t sign something it did).

The problem with the idea was that it lowered the opportunity cost of misbehaviour as a validator. We must generally assume that any slash is proportionate to (maybe a little more than) the possible benefit derivable to the staker by misbehaving. If you reuse stake, then you can run into the possibility that misbehaving in multiple ways (i.e. over multiple exposures) at the same time can deliver such economic benefit to the staker that it offsets the cost of losing all their stake.

How is this handled here?

As a practical consideration, it’s true that restaking does amplify the amount of benefit that a validator can reap both from behaving honestly and from behaving dishonestly. For example, if restaked validators are operating a bitcoin bridge, it should be the case that they can’t earn more from destroying the bridge (plus misbehaving on all their other protocols) than they would lose in slashing.

This is separate from the technical implementation of restaking (i.e. generalized slashing conditions), but a few suggestions (which could be combined) are:

  1. Require a minimum slashing threshold on all generalized slashing conditions. This could be 100%, 80%, or some other high value that places a maximal burden on validators for misbehavior.
  2. Require governance (possibly at the root level) to approve generalized slashing conditions, so that the system is only opened up to vetted risks.
  3. Set a maximal threshold on the total amount of reusable exposure for any particular staked DOT. i.e. validators can restake as many times as they want, but only with some small proportion of their exposure
  4. Ensure that protocols disregard disabled validators. This reduces the time window for exposures to be reused across misbehaviors on multiple protocols.
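Suggestion 3 can be sketched in a few lines. This is an illustrative model, not existing staking code: each restaking grant consumes part of a fixed "exposure budget", and new grants are rejected once the cap is hit.

```rust
type Balance = u128;

struct Validator {
    stake: Balance,
    restaked: Balance,      // sum of exposure already granted to protocols
    max_reuse_permill: u32, // e.g. 200_000 = 20% of stake may be reused
}

impl Validator {
    /// Total exposure this validator is allowed to grant across all
    /// restaking protocols.
    fn budget(&self) -> Balance {
        self.stake * self.max_reuse_permill as Balance / 1_000_000
    }

    /// Try to grant `exposure` to one more restaking protocol.
    fn try_restake(&mut self, exposure: Balance) -> Result<(), &'static str> {
        if self.restaked + exposure > self.budget() {
            return Err("exceeds maximal reusable exposure");
        }
        self.restaked += exposure;
        Ok(())
    }
}
```

Keeping the cap small bounds the extra benefit a validator can extract by misbehaving across all exposures simultaneously, which addresses the opportunity-cost concern raised above.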