What happened?
Around 2023-02-15 22:00 UTC Kusama experienced an uptick in disputes:
Which then resulted in a finality stall. Here is a graph of the finalized blocks:
As visible on the graph above, the finality stall lasted for around an hour. An hour later (around 2023-02-15 23:57 UTC) there was another finality stall:
And again at the same time disputes were raised:
Investigation
Considering the symptoms we suspected that the finality stall was caused by the disputes. There weren't too many disputes in total (7 to be precise), but the two events were too correlated to ignore.
First, we realised that each finality stall continued until the finality lag reached 500 blocks (note the blue line):
We have a safety net in the code for such cases. In a nutshell, it kicks in when the finality lag reaches 500 blocks. Concretely, for disputes it considers those older than 500 blocks ancient and ignores them. This fact led us to the hypothesis that the disputes were somehow blocking finality.
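For illustration, here is a minimal sketch of that safety net (the names are illustrative assumptions, not the actual polkadot code): disputes concerning blocks more than 500 blocks behind the current best are treated as ancient and dropped from consideration.

```rust
// Minimal sketch, assuming simplified types: ancient disputes are ignored.

const DISPUTE_WINDOW: u32 = 500; // finality-lag threshold in blocks (hypothetical name)

struct Dispute {
    block_number: u32, // block containing the disputed candidate
}

/// Keep only disputes that are within the safety-net window of the best block.
fn relevant_disputes(best_number: u32, disputes: Vec<Dispute>) -> Vec<Dispute> {
    disputes
        .into_iter()
        .filter(|d| best_number.saturating_sub(d.block_number) <= DISPUTE_WINDOW)
        .collect()
}

fn main() {
    let best = 10_000;
    let disputes = vec![
        Dispute { block_number: 9_900 }, // recent: still considered
        Dispute { block_number: 9_300 }, // ancient: ignored
    ];
    assert_eq!(relevant_disputes(best, disputes).len(), 1);
    println!("ok");
}
```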
The only place where a dispute might interfere with finality is the DetermineUndisputedChain call, which is handled by the dispute-coordinator (the subsystem responsible for disputes on the client side in the polkadot binary) and used to determine the longest undisputed chain. The problem had to be in this call.
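To make the role of that call concrete, here is a minimal sketch of the idea behind it (the names and types are simplified assumptions, not the real subsystem message): starting from a finalized base, walk the candidate chain and stop before the first block that includes an actively disputed candidate.

```rust
// Minimal sketch, assuming simplified types: find the longest undisputed chain.

use std::collections::HashSet;

type Hash = u64; // stand-in for a real block/candidate hash

fn determine_undisputed_chain(
    base: Hash,
    descendants: &[(Hash, Vec<Hash>)], // (block hash, candidates included in it)
    active_disputes: &HashSet<Hash>,   // candidates currently considered disputed
) -> Hash {
    let mut best = base;
    for (block, candidates) in descendants {
        if candidates.iter().any(|c| active_disputes.contains(c)) {
            break; // everything from here on is withheld from finalization
        }
        best = *block;
    }
    best
}

fn main() {
    let disputed: HashSet<Hash> = [42].into_iter().collect();
    let chain = vec![(1, vec![10]), (2, vec![42]), (3, vec![11])];
    // Finality can only vote up to block 1, because block 2 contains the disputed candidate 42.
    assert_eq!(determine_undisputed_chain(0, &chain, &disputed), 1);
    println!("ok");
}
```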
Finding the root cause
After reviewing the changelogs of the recent versions we noticed that v1.6.0 shipped a feature named "Validator disabling in Dispute Participation". It performs offchain (local for each node) disabling of validators which have raised unjustified disputes. For example, if a validator disputes a candidate and the dispute resolves as "Valid", we add the validator to a disabled list and ignore any disputes from it for six sessions.
Unfortunately the logic had a flaw. If an already disabled validator raises a dispute again, the dispute-coordinator will import the dispute and consider it active. However, since the initiator is disabled, the local node will not participate in the dispute, and neither will the rest of the validators. Bottom line: we are stuck with a disputed chain, but the dispute never resolves because no one votes for or against it.
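A minimal sketch of that flawed behaviour (simplified, with hypothetical names, not the actual dispute-coordinator code): the dispute is treated as active on import regardless of who raised it, while participation is, correctly, skipped when every vote so far comes from a disabled validator.

```rust
// Minimal sketch of the flaw, assuming simplified types.

use std::collections::HashSet;

type ValidatorIndex = u32;

struct ImportOutcome {
    active: bool,      // does chain selection treat the dispute as a threat?
    participate: bool, // will the local node fetch the candidate and vote?
}

fn import_dispute_flawed(
    voters: &[ValidatorIndex],
    disabled: &HashSet<ValidatorIndex>,
) -> ImportOutcome {
    let only_disabled = voters.iter().all(|v| disabled.contains(v));
    ImportOutcome {
        // Bug: the dispute becomes active unconditionally, even when only
        // disabled validators have voted.
        active: true,
        // Participation is refused for disputes raised solely by disabled
        // validators...
        participate: !only_disabled,
    }
}

fn main() {
    let disabled: HashSet<ValidatorIndex> = [7].into_iter().collect();
    let outcome = import_dispute_flawed(&[7], &disabled);
    // ...which leaves an active dispute that nobody will ever vote on:
    // chain selection keeps avoiding the fork and finality stalls.
    assert!(outcome.active && !outcome.participate);
    println!("stuck: active dispute with no participation");
}
```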
Summary of the problem
So what actually happened on February 15th? A validator raised a dispute, lost it and got disabled. The same validator then attempted to start another dispute. Nodes correctly imported the votes but did not participate, so the dispute never got Confirmed (1/3 participation). At the same time, however, the dispute incorrectly remained in the Active state: it had been imported, but the only votes seen were coming from a disabled validator. Despite never getting Confirmed, the disputes in their Active state were considered a potential risk by the chain selection logic (used by GRANDPA), which omitted any blocks including them.
The core of the problem is that those disputes were being marked as Active (a threat) while they were actually non-threatening. This led to the huge finality lag.
Another good question is why a dispute was raised in the first place. Unfortunately we can't be 100% sure without logs from the validator which raised it, but our investigation makes us believe the reason was a heavy candidate. The candidate under investigation (one of the disputed ones) was quite heavy: 7.5 MB uncompressed PoV and 1.7 s execution time (the timeout is 2 s) on an Apple M1 Max CPU, which is more powerful than the reference hardware for a validator.
The fix
We resolved the issue with #3358. When a dispute is triggered solely by Disabled validators, it is imported but not marked as Active. This way the dispute is not considered a threat and should not cause the fork to be treated as disputed and omitted by GRANDPA. The dispute only gets marked as Active once we see at least a single vote from a non-Disabled validator; in that case the dispute processing continues as for a regular dispute.
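Continuing the earlier sketch (same simplified, hypothetical names, not the actual fix in #3358), the repaired behaviour looks roughly like this: the votes are still imported, but the dispute only becomes Active once at least one vote from a non-Disabled validator is seen.

```rust
// Minimal sketch of the fixed behaviour, assuming simplified types.

use std::collections::HashSet;

type ValidatorIndex = u32;

#[derive(Debug, PartialEq)]
enum DisputeStatus {
    /// Imported, but not treated as a threat by chain selection.
    Inactive,
    /// Processed as a regular dispute; the containing fork is withheld.
    Active,
}

fn import_dispute_fixed(
    voters: &[ValidatorIndex],
    disabled: &HashSet<ValidatorIndex>,
) -> DisputeStatus {
    if voters.iter().all(|v| disabled.contains(v)) {
        // Only disabled validators voted so far: keep the votes around,
        // but do not let the dispute block finality.
        DisputeStatus::Inactive
    } else {
        // At least one non-disabled vote: proceed as for a regular dispute.
        DisputeStatus::Active
    }
}

fn main() {
    let disabled: HashSet<ValidatorIndex> = [7].into_iter().collect();
    assert_eq!(import_dispute_fixed(&[7], &disabled), DisputeStatus::Inactive);
    assert_eq!(import_dispute_fixed(&[7, 3], &disabled), DisputeStatus::Active);
    println!("ok");
}
```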
Lessons learnt
We take all changes very seriously, especially ones related to security (disputes in this case). And still we ended up in a situation where a relatively easy-to-catch issue sneaked into production.
After reflecting we came to the conclusion that we didn't spend enough effort on long-running tests for this feature. We had an automated test covering the exact scenario which happened on Kusama (link). However, we didn't let the test run long enough for the problem to be uncovered.
Lesson learnt: always add a finality check to each test. After asserting the expected behaviour (which we did), we also need to assert that the network is in a healthy state afterwards (which we missed).
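As a sketch of what such a check could look like (hypothetical helpers, not the zombienet-sdk API): after asserting the scenario-specific expectations, also assert that the network keeps finalizing, i.e. the gap between the best and the finalized block stays bounded for a while afterwards.

```rust
// Minimal sketch of a post-scenario finality health check, assuming
// hypothetical helpers instead of a real test harness.

struct ChainView {
    best: u32,
    finalized: u32,
}

// Stand-in for a node RPC query; a real test would poll a running network.
fn observe_chain(step: u32) -> ChainView {
    ChainView { best: 100 + step, finalized: 95 + step }
}

/// Fail if the finality lag ever exceeds `max_lag` over `steps` observations.
fn assert_finality_healthy(steps: u32, max_lag: u32) {
    for step in 0..steps {
        let view = observe_chain(step);
        let lag = view.best.saturating_sub(view.finalized);
        assert!(lag <= max_lag, "finality lag {lag} exceeded {max_lag} at step {step}");
    }
}

fn main() {
    // 1) assert the scenario-specific behaviour (elided here);
    // 2) then assert the network remained healthy afterwards.
    assert_finality_healthy(30, 10);
    println!("network kept finalizing");
}
```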
This problem was further amplified by the fact that we used zombienet-sdk. It is the next generation of our zombienet testing tool, but at the moment it supports only the native provider. This means we can't run the tests as part of our CI pipeline, only locally. Even if we had a good test, not running it as part of the CI is a huge red flag. On the other hand, having a long-running test execute for each PR would unnecessarily delay development. We need to find a good balance between coverage and developer experience.
Lesson learnt: all tests are part of the CI. Period! If you can't automate it, it doesn't exist. To avoid adding unnecessary delay to PR merging, we will prepare a test suite which will run on master on a daily basis and can be invoked manually for some PRs.
TL;DR
The incident was caused by an incorrectly handled dispute which stalled finalization until additional safety measures kicked in and unstalled the chain. The issue was found and addressed.
Acknowledgements
Shout out to:
- @alexggh for the thorough investigation during the incident.
- @andronik for the debugging and providing the quick fix.
- @eskimor for the valuable help with debugging and feedback for the fix.
- @Overkillus for the help with figuring out the details around the incident and writing the post mortem.