More active voters, less painful proposals! - Squidsway: a tool to make governance, proposal and voting data understandable by all. SEEKING FEEDBACK

I’m planning to submit this proposal for Squidsway, so posting here to get feedback:

Squidsway overview:

The ‘product’ is:

  1. an ongoing research project, with the reports and insights that it will generate, and
  2. the open-source tools which will be built to generate those insights.

I am looking to:
index and curate onchain data, combined with offchain data, in order to generate insights which support healthier democracy within OpenGov - see ‘Aims’.

  • Onchain data, eg: voting data and timings; delegations; wallet age/size/activity.
  • Curated onchain data, eg: tagging proposals and voting blocs, such as ‘treasury conservative’, ‘resubmitted proposal’, ‘DV cohort 5 delegate’.
  • Offchain data, eg: mostly proposal discussion, but also tweet volume, token price, sentiment analysis.


Aims:

Reduce Information Asymmetry:

  • To reduce the information asymmetry between unconnected voters / proposers and those who are well funded / connected.

  • Reduce the time cost of making informed votes.

  • Reduce the time cost of making successful proposals.

  • Improve the quality of proposals and reduce the number of proposals rejected for predictable reasons.

  • Reduce mistrust in the ecosystem by providing objective data in place of rumour and supposition.

Increase voting:

  • both by reducing the time cost involved in meaningfully voting

  • and identifying patterns, especially in non-voting, to generate evidence-based insights to inform other initiatives to increase voting (eg via UX, incentives, publicity)


Individual voters (and non-voters) get:

  • Easily organised information on the voting record and outlook of each DAO, allowing them to assess where and whether to delegate.

  • Organised data to compare a current proposal against comparable historical ones.

DAOs get:

  • Increased visibility to potential delegating voters / members

  • Objective data to counter FUD

  • Fewer proposals rejected because of proponents’ failure to understand the guidelines that each DAO generally recommends

  • and less time spent in OpenGov discussions explaining these to proponents

Proposers get:

  • Concrete data on what kinds of proposals pass or fail, based on amount, timing, type or other qualitative measures and, in particular, on technical guidelines (eg, “since 2024, X% of proposals requesting DOT failed, but Y% requesting USDC succeeded”)
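A statistic of that shape could fall out of a simple aggregation over tagged referenda. A toy sketch of the kind of query the tool might answer (the records here are invented for illustration, not real OpenGov figures):

```python
from collections import Counter

# Toy records: (requested asset, outcome) — invented, not real OpenGov data
refs = [("DOT", "failed"), ("DOT", "passed"), ("DOT", "failed"),
        ("USDC", "passed"), ("USDC", "passed"), ("USDC", "failed")]

def pass_rate(asset: str) -> float:
    """Share of proposals requesting `asset` that passed."""
    outcomes = Counter(o for a, o in refs if a == asset)
    total = sum(outcomes.values())
    return outcomes["passed"] / total if total else 0.0

print(f"DOT: {pass_rate('DOT'):.0%}, USDC: {pass_rate('USDC'):.0%}")
# → DOT: 33%, USDC: 67%
```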

OpenGov gets:

  • Concrete data on non-voting versus voting wallets, in order to better identify pain points in voting, and give us the option to target non-voters (with incentives, awareness, etc.) to encourage voting

  • Concrete data on Decentralised Voices, allowing W3F to improve the program without relying solely on subjective and incomplete means of assessing improvements.

  • More delegation by individual voters.

  • The success or failure of proposals being better determined by community sentiment, and less by proponents’ failure to adhere to technicalities.


Background:

I’ve been a developer in the Polkadot ecosystem since soon after launch. Before becoming a developer, though, I spent over a decade learning inside groups and organisations that continually hit pitfalls while trying to follow the aims of decentralisation. (The promise of being able to use game theory, and to encode logic, to mitigate such pitfalls is a major reason why I moved into web3 - or just ‘crypto’, as it was called then ;). In particular, I’ve seen how, even when all actors involved approach a collective with goodwill, information asymmetry creates power silos which self-reinforce and are hard to shift (even after it becomes clear to those inside the silos, as it inevitably does, that the rot creates worse outcomes for them too).
So my motivation in this project is to add another (non-treasury) dimension to the sterling work done by folks like OpenGov Watch and Anaelle collating and publishing treasury information, and help steer OpenGov, the Polkadot DAO, towards being a healthy informed democracy.

The negative patterns we’ve seen develop in political life worldwide over the last 10 years or so are neither accidental nor part of some deliberate plan. They are the playing out of formal and informal structures unfit for the internet age. This is the case for informational structures even more than for organisational and human ones. Though in some cases information is hoarded by elites or misinformation deliberately spread at scale, more often the needed information is simply not available in a sufficiently detailed and accessible form, leading voters to fail to identify bullshit, to feel unable to make worthwhile voting decisions, or even to feel that crucial information is hidden from them - leading to toxicity. All of these are problems which face many voters in OpenGov.

My aim in this project is to make OpenGov fluid and easier to engage with. Even if that’s not a convincing enough reason for you, dear reader, I hope you’ll be convinced by the benefits to the different OpenGov participants above :wink:

Methodology:

Very very agile.

The aim of the research is to tell us something we didn’t know, rather than setting out to prove or disprove a set of hypotheses.

This means that, at each stage/ in each proposal, the treasury will be funding something whose final shape is not known in advance.
This agile way of working is necessary because:

  • We need to go where the evidence takes us

  • It’s likely that many of the small technical steps that make up a milestone can only be identified once a previous step is complete, so identifying and costing out each of these small steps in advance would either waste labour or lead the research down an inflexible path.

The fact that the treasury is funding something unknown should be mitigated by the ongoing nature of the project, and the fact that each funding milestone is a small amount.

Deliverables:

Each sprint will generate data, which I will report, and which will contribute to growing the Squidsway tool, which will be open-source.

MVP phase:
I envision the first sprints producing only some superficial statistics.
These first few sprints will create an MVP to demonstrate progress, with more valuable data and insights coming later. The reason for not running more sprints within each milestone (ie larger, longer milestones, delivering more by the first milestone) is the current conservative, low-trust environment in OpenGov and the fact that this is my first funding proposal.

In the first sprints I will deploy the basic indexing infrastructure (likely SQD), collate parent elements (referenda, delegations, etc.) and add components to tag chain data (eg ‘requested_in_DOT’, ‘beneficiary_is_multisig’).
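Those tagging components could be as simple as named predicates applied to each indexed row. A hedged sketch of that idea (the two tag names come from the paragraph above; the row fields are my own assumptions):

```python
# Each tagger is a named predicate over an indexed referendum row (a dict here).
# Tag names are from the proposal text; the row field names are assumptions.
TAGGERS = {
    "requested_in_DOT": lambda ref: ref.get("requested_asset") == "DOT",
    "beneficiary_is_multisig": lambda ref: ref.get("beneficiary_type") == "multisig",
}

def tag(ref: dict) -> list[str]:
    """Return every tag whose predicate matches the referendum row."""
    return [name for name, pred in TAGGERS.items() if pred(ref)]

ref = {"requested_asset": "DOT", "beneficiary_type": "multisig"}
print(tag(ref))  # → ['requested_in_DOT', 'beneficiary_is_multisig']
```

Keeping taggers as small, independent predicates would also make it easy for community devs to contribute new tags later.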

Validation phase:
The real value of the tool lies in capturing offchain data (when combined with the chain indexing).
A simple example of offchain data is the DOT price.
Slightly more complex would be extracting time and quantitative data on referenda from Polkassembly/ Subsquare.
More complex than that would be to run an LLM over sources like Polkassembly to collect qualitative data, in particular to be able to classify referenda by subject (eg ‘marketing’, ‘ambassador program’, ‘software development’) and other qualities (eg ‘resubmitted ref’, ‘vote nay’, ‘drama’, ‘much detail’).
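Whichever LLM is used, its output would need to be constrained and validated before entering the index. A minimal sketch of that step (the label sets come from the examples above; the JSON reply shape and function name are my assumptions):

```python
import json

# Allowed labels, taken from the example classifications in the post
SUBJECTS = {"marketing", "ambassador program", "software development"}
QUALITIES = {"resubmitted ref", "vote nay", "drama", "much detail"}

def parse_classification(raw: str) -> dict:
    """Parse an LLM's JSON reply and reject labels outside the allow-lists."""
    data = json.loads(raw)
    if data["subject"] not in SUBJECTS:
        raise ValueError(f"unknown subject: {data['subject']!r}")
    bad = set(data.get("qualities", [])) - QUALITIES
    if bad:
        raise ValueError(f"unknown qualities: {sorted(bad)}")
    return data

reply = '{"subject": "marketing", "qualities": ["drama", "much detail"]}'
print(parse_classification(reply)["subject"])  # → marketing
```

Constraining the model to a fixed vocabulary like this keeps the qualitative layer auditable, which matters for the credible-neutrality aims above.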

The research output of the early part of the validation phase is likely to be concrete but boring statistics on uncontentious facts - labour-saving if that’s the info you were looking for, but not especially insightful in itself.

The research output of the later part of the validation phase is likely to be insights based on these concrete stats, which is where I hope the tool will start to demonstrate its future value. My aim is that by the end of this phase we will see both novel insights, and insights which would have been more labour-intensive to obtain by other research means.
One of the sprints within this phase will also be to create a module for efficiently indexing time-based data (eg DOT price, treasury size).
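For time-based series like the DOT price, the key primitive is “value as of timestamp T” - eg the price at the moment a referendum was submitted. A sketch of that lookup using a sorted series and binary search (the sample price points are invented):

```python
import bisect

# (unix timestamp, DOT price in USD) — invented sample points, kept sorted by time
SERIES = [(1_700_000_000, 5.10), (1_700_086_400, 5.40), (1_700_172_800, 4.95)]
TIMES = [t for t, _ in SERIES]

def value_as_of(ts: int) -> float:
    """Latest recorded value at or before `ts` (a simple as-of join)."""
    i = bisect.bisect_right(TIMES, ts) - 1
    if i < 0:
        raise ValueError("timestamp precedes the series")
    return SERIES[i][1]

print(value_as_of(1_700_100_000))  # → 5.4
```

The same as-of pattern would apply to any slowly sampled series (treasury size, total staked, etc.) joined against per-referendum timestamps.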

Future work:
As the tool gets more powerful and hopefully gains acceptance as a potentially valuable part of Polkadot governance, I would like to:

  • Continue adding offchain sources.

  • Routinely produce data viz from insights.

  • Take in suggestions for research directions.

  • Add documentation and helper functions to make it easier for devs to run bespoke governance indexing.

  • Create a UI for LLM-written queries so that non-devs can easily query the data.

Proposal Spacing:

I envision a sprint being between 20 and 80 hours work, and seeking funding (via a new proposal) roughly every two months for between 40 and 160 hours, continuing as long as the community finds it valuable.

Feedback, plz:

This forum post will be added to as I (hopefully) receive feedback.
I hope that some of the folks who read through every proposal can take a moment to give me some feedback here - I would like the proposal to be close to its final form before I submit it as a referendum.


Hello Mork,

Your proposal is exactly the kind of submission I was looking for when I submitted the RFP: Action Research for OpenGov to the W3F back in February 2024.

I particularly resonate with your statements on information asymmetry, which some will see as an opportunity to be leveraged for personal gain, while others will perceive as an obstacle to a more holistic ecosystem.

I think that a tool that can be customised to provide tailored but credibly neutral insights for all OpenGov participants is very much needed. But in my eyes, the most valuable part of your proposal is that your team also offers to do ongoing R&D for OpenGov based on these timely insights.

There are a lot of aspects of OpenGov that can and should be reinvented to better fit the current landscape. For example, delegation is always being pushed as the solution to a so-called “low turnout” in OpenGov. But I have written in the past about how there might be some issues in the way voting mechanisms are currently set up that put individual voters and delegators off participating in the first place.

One recommendation I would have about this Squidsway proposal is that its off-chain data should incorporate some form of SEO score of each project, as a measure of the team’s understanding of basic web development and marketing strategies.

Over the past 4 years, I have observed a pattern whereby project teams create noise on socials (Twitter, Discord, Reddit, Polkadot forum) just before they submit a proposal on OpenGov. But what happens to them once they are approved or rejected? Do they ditch their initiatives altogether or keep working to refine them? Do they confine themselves to a niche of known supporters or do they try to engage other market participants? Are they content with getting funding from the Polkadot network alone or do they investigate ways to sustain themselves beyond the ecosystem?

It seems to me that SEO tells the real story and history of all projects in a credibly neutral manner, which is highly desirable. SEO can’t be gamed by buying views, likes, and shares/retweets, so it is worth taking into consideration to avoid bringing junk data into the analysis.

All the best!


Thanks, @anaelleltd for drawing my attention to the Action Research RFP. Looks like it does tie in very closely, especially 3.4 and 3.7 in the example outcomes.

I’m currently inclined to go for the Treasury funding route because:

  • Feels like it has the potential to be more agile
  • Deliverables include a reusable tool as well as the research outcomes
  • which would be, in a sense, owned by the community if funded through OpenGov (my primary duty would be to the community, ie OpenGov, on an ongoing basis) and, in future, directable by the community - thus additionally supporting better social-layer decentralisation in itself (rather than only indirectly via action on insights gained)
  • An aim of Squidsway is to generate insights from more concrete (in particular quantitative) data, with the role of less concrete data (eg sentiment analysis; SEO scores) relegated (though still necessary) and subjective conclusions (eg participants’ motivations; recommendations for action) out of scope. This aim is partly due to striving for credible neutrality, given one of the aims is to reduce distrust and toxicity.

but I much appreciate the pointer and will definitely take that route if OpenGov doesn’t like the proposal. Maybe if there is feedback in that direction, I could somehow split the proposal between the two funding sources.

On the theme of credible neutrality, you mention incorporating SEO metrics. But I come at neutrality from a different direction which maybe wasn’t clear in my first draft.
To take SEO metrics as an example - these would add credibility to a judgement which may be contentious or disputed, and therefore something that participants might want to game.
But I intend to stay away from judgements on subjects which inherently involve the incentive and possibility to be gamed or challenged - exactly because, no matter how much one improves a fuzzy, subjective or black-box metric (and I’d include SEO in that), there will be many people who still do not trust the metric as an appropriate representation of the meaning imputed to it. An example of this would be ‘Is project X gaining mindshare?’. SEO scores may be the best measure by which to answer such a question but, if there is no more concrete method than this black-box one, it’s likely the question itself is one I would see as inappropriate for the Squidsway research to answer. (There is nothing to stop devs in the community from slotting a module into the Squidsway tool to answer that question in that way themselves, of course, and I would encourage this.)

Crucially, I understand that there will be many cases where those kinds of questions would be useful to the community, but my aim is explicitly to prioritise creating results which gain trust from all. This is not to say that I don’t want the research output or the tool to be used to support lines of argument which necessarily involve non-concrete judgements (for example ‘ROI on marketing spend was X’), but rather that those non-concrete parts of the argument should be made at a different layer of discussion, where the validity of the methodology can be examined and challenged. The Squidsway tool, and research output, can each then be used as a foundation for actors with a horse in some particular race to support more subjective arguments if they wish. (Which, again, I would encourage, and is part of the aim.)

Over the coming week, I’ll go over the draft again to try to encapsulate these aims in a clearer way.

@mork Just to clarify, the RFP Action Research for OpenGov mandated that teams go through OpenGov to obtain funding for their proposals. However, OpenGov rejected the very few submissions that were proposed in Q4 2024. And so, the W3F closed the RFP at the end of 2024.