Polkadot Security Question

Hey All,

Maybe a more involved security question. As many of you are aware, Polkadot has forkless upgrades, which allow the system to innovate at an outstanding pace. From a technical standpoint, a forkless upgrade is more secure and reliable than a hard fork. That said, in my opinion, making something technically easier to do can introduce risks at the process/people level: it encourages folks to make updates more often, and perhaps with less vigilance (hence the increased risk).

My concerns are not the small-to-medium-sized events that could go wrong in any upgrade (downtime, misappropriation of funds, etc.), but rather the low-probability events with ecosystem-ending implications. In my experience working with developers, they get lost in the details and, as a result, miss the elephant standing right in front of them. I don’t mean this as an insult; being a developer requires incredible attention to detail, but the big picture is needed as well.

So I was curious whether Parity/Web3/Fellowship/other parties run automated code audits BEFORE and AFTER staging the code in governance. I want to emphasize AFTER for two reasons: 1) auditors generally do not re-audit the code after they have provided their findings and the developers’ “fix”, and 2) assuming item one is true, this leaves room for an insider threat to alter other code after the audit.

Again, I am not worried about small-to-medium-sized events that, while not ideal, could be overcome. I am curious whether automated code audits include the following types of “significant event” checks:

  • Use of “Force Transfer” at scale
  • Something that could trigger validator-wide slashes at scale, so the un-slashed DOT could perform a governance attack
  • Use of the mint function
  • Changes to the governance protocol itself that could then allow subsequent governance attacks
  • Adjusting inflation to an amount that would, in essence, be an over-mint attack
  • Etc.: other large attack vectors (as opposed to small ones)
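For illustration, the simplest form of the check I have in mind is a keyword lint over a proposed runtime diff that flags any added line touching a sensitive call. The call names below are ones that exist in FRAME pallets (e.g. `force_transfer` in Balances), but the watch-list and the script itself are hypothetical sketches, not an existing tool:

```python
# Hypothetical lint: flag added lines in a unified diff that mention a
# sensitive call. The watch-list is illustrative, not exhaustive.
SENSITIVE_CALLS = {
    "force_transfer",      # Balances: root-only move of funds from any account
    "mint",                # any pallet minting new issuance
    "force_new_era",       # Staking: era forcing, interacts with slashing
    "set_validator_count", # Staking: validator-set sizing
}

def flag_sensitive_lines(diff_text: str) -> list[str]:
    """Return added lines ('+' lines, excluding '+++' headers) that
    mention any call on the watch-list."""
    hits = []
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            if any(call in line for call in SENSITIVE_CALLS):
                hits.append(line)
    return hits
```

A lint like this is deliberately dumb: it cannot judge intent, but any hit forces a human with big-picture context to look at exactly the lines that touch a large attack vector, BEFORE and AFTER staging.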

I would imagine that the aforementioned items are rarely adjusted, so I was wondering if there are automated code checks that can be run by multiple parties BEFORE and AFTER staging. I would also assume this would help parachain teams, as most of this applies to them too.
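On the “multiple parties, AFTER staging” point: since runtime upgrades are ultimately a WASM blob identified on-chain by its hash, any party holding the audited artifact can independently confirm that what was actually staged in governance is byte-identical to what was audited. A minimal sketch of that comparison (assuming a blake2-256 code hash, and that both blobs are available to the verifier):

```python
import hashlib

def runtime_code_hash(wasm_bytes: bytes) -> str:
    """blake2-256 hash of a runtime WASM blob, hex-encoded with 0x prefix.
    (Assumption for this sketch: the on-chain identifier is blake2-256.)"""
    return "0x" + hashlib.blake2b(wasm_bytes, digest_size=32).hexdigest()

def audited_matches_staged(audited_wasm: bytes, staged_wasm: bytes) -> bool:
    """The post-audit check: True only if the staged code is byte-identical
    to the artifact the auditors actually reviewed."""
    return runtime_code_hash(audited_wasm) == runtime_code_hash(staged_wasm)
```

With deterministic builds, multiple independent parties can rebuild from the audited source and run this comparison themselves, which directly addresses the insider-threat window between audit and enactment.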

While all of these are low-probability events, I would imagine we want to push them as close to zero as possible, given that even low-probability events do occur with enough time and enough trials.

Let me be clear: I am NOT saying that forkless upgrades are bad or that governance is bad. I do believe the pros outweigh the cons. While I have significant faith in Polkadot, I would be lying if I said I wasn’t a little concerned when watching the Fellowship call and finding out that the day-count on the lock associated with governance had an error when initially pushed. I don’t blame the dev, as anyone could have made that mistake; what concerned me was that an adjustment to the governance protocol itself didn’t raise red flags and trigger multiple checks by multiple parties (I am assuming such a call required root [let me know if I am wrong]). The error was small, but what if there were an insider threat doing something more nefarious? Would it have been caught?

Not sure if @joepetrowski or @rphmeier or @bill_w3f would know.

FYI, I have experience working with CMMI Level 3 organizations, so I am familiar with change management processes.