I think it’s a great discussion, and it highlights a fundamental gap: we definitely need certainty about proposal content, both now and historically.
What I’m mulling over is: why aren’t we leveraging git more directly for the proposal lifecycle itself?
Platforms like Polkassembly/Subsquare are fantastic governance dashboards, syncing off-chain data with each other (to offer a single view) and hosting discussion after a proposal is submitted. But they don’t inherently solve the problem of a proposal’s source document(s) being mutable or disappearing; they focus primarily on tracking events post-submission. Furthermore, the essence of the proposals, the data itself, is locked in databases that are accessible only via the APIs the platforms provide.
So then, what if the “source of truth” for a proposal were a specific git commit hash in a common open-gov git (not GitHub) repo?
This hash represents the exact state of the proposal files (proposal.md, parameters, etc.) at the moment it’s finalized for voting and signed by the submitter. You then reference this hash in the on-chain metadata. The hash isn’t a floating identifier: it is tied to a common repository and independently verifiable from the presentation layers (no need for trust). Subsequent edits are immediately visible and easily auditable, without having to rely on the platforms actually activating the feature, or on their databases remaining functional, to view historical edits. And if these platforms are down, what then? Who is currently backing up and checking the proposal content and the content of the linked documents?
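To make the “no need for trust” point concrete, here is a minimal sketch of what verification could look like using nothing but plain git plumbing on a local clone; the repo layout and file name are hypothetical:

```python
import subprocess

def commit_exists(repo_path: str, commit_hash: str) -> bool:
    """Check that an on-chain hash points at a real commit in the local clone."""
    # `rev-parse --verify` fails if the object is unknown or not a commit.
    result = subprocess.run(
        ["git", "-C", repo_path, "rev-parse", "--verify", f"{commit_hash}^{{commit}}"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

def read_proposal_at(repo_path: str, commit_hash: str, file_path: str = "proposal.md") -> str:
    """Return the exact content of a proposal file as of a given commit.

    Anyone with a clone can run this; no dashboard, API, or database is
    involved, only git's content-addressed object store.
    """
    # `git show <commit>:<path>` resolves the file through the commit's tree,
    # so the output necessarily matches the referenced hash.
    result = subprocess.run(
        ["git", "-C", repo_path, "show", f"{commit_hash}:{file_path}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```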
Because this repo would be used frequently, the risk of data vanishing from it is low, and backing it up is convenient and extremely cheap if it’s just text data. A policy of “no external links” would then seal the deal completely. I also know very well that this would make data collection and syncing orders of magnitude easier than today, where we mostly resort to hitting multiple APIs in a loop, and then start from scratch again to catch all the deltas since the last backfill.
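For illustration, the whole backup-and-sync story could collapse to a mirror clone plus periodic fetches; the URL below is a placeholder, not a real endpoint:

```python
import subprocess
from pathlib import Path

REPO_URL = "https://git.example.org/opengov/proposals.git"  # placeholder URL
MIRROR = Path("proposals-mirror.git")

def sync_mirror() -> None:
    """Full backup on the first run, then only the deltas on every run after.

    Contrast with paging through several platform APIs and re-walking them
    from scratch to catch edits since the last backfill.
    """
    if MIRROR.exists():
        # git computes the delta itself; history is never re-downloaded.
        subprocess.run(["git", "-C", str(MIRROR), "fetch", "--prune"], check=True)
    else:
        # --mirror copies every ref and the full history in one command.
        subprocess.run(["git", "clone", "--mirror", REPO_URL, str(MIRROR)], check=True)
```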
This would shift the focus to managing proposal content with robust versioning and verifiable persistence before it even hits the chain or the Polkassembly/Subsquare dashboards. Of course, a UI to make submissions easy would be crucial so non-devs can take part too, without having to know what rebasing means, but that’s a solvable challenge if the backend is robust: git fits the bill imho.
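As a rough sketch of how thin that backend could be (the directory layout and function name are mine, not a spec), the entire “submit” action a UI needs to wrap reduces to a write, an add, and a commit:

```python
import subprocess
from pathlib import Path

def submit_proposal(repo_path: str, proposal_id: str, body_md: str, author: str) -> str:
    """Write and commit a proposal, returning the hash a UI would reference on-chain.

    `author` must be in git's "Name <email>" form.
    """
    proposal_dir = Path(repo_path) / "proposals" / proposal_id  # hypothetical layout
    proposal_dir.mkdir(parents=True, exist_ok=True)
    (proposal_dir / "proposal.md").write_text(body_md)

    subprocess.run(["git", "-C", repo_path, "add", "."], check=True)
    # Add "-S" here to have the submitter sign the commit.
    subprocess.run(
        ["git", "-C", repo_path, "commit", "-m", f"proposal: {proposal_id}",
         f"--author={author}"],
        check=True,
    )
    # HEAD's hash is the immutable reference that goes into the on-chain metadata.
    head = subprocess.run(
        ["git", "-C", repo_path, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return head.stdout.strip()
```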
Doesn’t a git-centric workflow, combined with a decentralized storage reference in the metadata (commit hash + `set_metadata`, as Basti suggests), offer a more robust and resilient end-to-end “less trust, more truth” foundation here?
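For concreteness, here is one possible way to anchor the commit id on-chain with the `substrate-interface` Python package, assuming `pallet_referenda`’s `set_metadata` pointing at a preimage that contains the commit id. The pallet calls exist as named; the encoding choice and all concrete values are assumptions, not an agreed design:

```python
import hashlib
from substrateinterface import SubstrateInterface, Keypair

substrate = SubstrateInterface(url="wss://rpc.polkadot.io")
keypair = Keypair.create_from_uri("//Alice")  # demo key, not for production

REF_INDEX = 123                   # hypothetical referendum index
COMMIT_ID = "deadbeef" * 5        # 40 hex chars, clearly a placeholder commit id

# 1. Note the commit id as a preimage so anyone can read it back from chain state.
payload = COMMIT_ID.encode()
note = substrate.compose_call(
    call_module="Preimage",
    call_function="note_preimage",
    call_params={"bytes": "0x" + payload.hex()},
)
substrate.submit_extrinsic(
    substrate.create_signed_extrinsic(call=note, keypair=keypair),
    wait_for_inclusion=True,
)

# 2. Point the referendum's metadata at that preimage (blake2-256 of the payload).
meta_hash = hashlib.blake2b(payload, digest_size=32).digest()
set_meta = substrate.compose_call(
    call_module="Referenda",
    call_function="set_metadata",
    call_params={"index": REF_INDEX, "maybe_hash": "0x" + meta_hash.hex()},
)
substrate.submit_extrinsic(
    substrate.create_signed_extrinsic(call=set_meta, keypair=keypair),
    wait_for_inclusion=True,
)
```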
From experience, “always bet on text” has been a good rule of thumb. It’s cheap to back up and doesn’t require any big infrastructure for archival and upkeep: at a few hundred bytes of compressed text per proposal, tens of thousands of proposals fit in about 10 MB. Depending on whom you ask, additional benefits could include: no random pictures meant to persuade rather than inform, easy parsing by LLMs, etc.
Google Docs, PDFs, and links to forms are certainly convenient for the proposer, not for the community. A git-based approach is not without a small inconvenience tax, but what price do we want to put on truth?