OpenGov Case Study 2 - Flagged Proposals

This discussion aims to share our policy regarding flagged proposals on OG Tracker and to explore together the best possible next steps once a proposal has been flagged 🚩.

Red Flag System

The flagged badge indicates that a proposal or specific task has not been successfully delivered for various reasons, and it carries a negative connotation.
For a proposal to be marked as flagged, it must meet specific criteria, which we will outline below.

Currently, there are six flagged proposals on OG Tracker, demonstrating that the majority of proposers are fulfilling their promises.

Additionally, our reports regarding flagged proposals have gained substantial engagement, underscoring the importance of this topic within the community.

OG Tracker Current Approach

As previously mentioned, there are certain criteria that need to be met for a proposal to be flagged:

  1. Exceeded Deadline Duration: The deadline duration, including any extension period if applicable, has been surpassed.
  2. Lack of Information: No relevant information has been shared by the proposer, nor has any been found by the OG Tracker team.
  3. Unresponsive Team: The team is not responding to our calls or has ceased communications.
  4. Inadequate Explanations: The team responds to our calls, but their replies are unclear or irrelevant.
  5. Lack of Transparency: No transparency report is provided, or there is no visible proof supporting or confirming their claims.
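The criteria above can be sketched as a simple check. This is a purely illustrative model, not OG Tracker's actual data model or process: the field names are hypothetical, and we assume a proposal is flagged when the deadline has passed and at least one accountability criterion fails.

```python
from dataclasses import dataclass

@dataclass
class ProposalStatus:
    # Hypothetical fields mirroring the five criteria above.
    deadline_exceeded: bool   # 1. deadline (incl. any extension) surpassed
    info_available: bool      # 2. relevant information shared or found
    team_responsive: bool     # 3. team answers communication attempts
    replies_adequate: bool    # 4. replies are clear and relevant
    transparent: bool         # 5. transparency report / visible proof provided

def should_flag(p: ProposalStatus) -> bool:
    """Flag when the deadline has passed and any accountability
    criterion is unmet (an assumed reading of the policy above)."""
    accountability_failures = [
        not p.info_available,
        not p.team_responsive,
        not p.replies_adequate,
        not p.transparent,
    ]
    return p.deadline_exceeded and any(accountability_failures)
```

Under this reading, a responsive and transparent team whose deadline has passed would not be flagged, matching Note 2's possibility of moving from flagged back to delivered once the tasks are fulfilled.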

Note 1: For flagged proposals, the OG Tracker team always provides a comment in the OGT Review section explaining why the proposal has been flagged.

Note 2: A proposal’s status can change from flagged to delivered if the proposer fulfils the promised tasks.

Note 3: When the proposal’s duration ends, it automatically enters the Final Assessment period, during which we make our last communication attempts to the involved parties and finalize our findings.
This period lasts approximately two weeks, giving the proposer enough time to respond to our queries.

Reflecting on the results of our current approach over the past three months, we can confidently state that it is functioning effectively while maintaining a balanced approach.
However, we are continuously seeking ways to improve our product and overall operations.


OG Tracker has the data!

We invite the entire Polkadot community to explore together what the optimal next step would be to minimize the probability of flagged proposals, or even eliminate them entirely.

  1. What measures could prevent proposers from under-delivering?

  2. What further actions should be taken against those who fail to deliver on their promises, and how should these actions be implemented?

Existing practices that try to counter the problem:

Bounties and Collectives
Great solution with numerous benefits.
However, at this stage there are plenty of operational concerns and ongoing debates challenging their effectiveness.

Proof Of Work with Successful Track Record
A successful PoW demonstrates reliability and trustworthiness while enhancing reputation within the community.

Fully Doxxed Teams
Being fully doxxed might improve transparency to some extent but does not guarantee the desired outcome.
We have seen multiple examples where completely anonymous teams or individuals produced great work, while fully doxxed ones simply disappeared with the funds.

Current positive results from OGT operations:

Future Attempts
A documented history of poor performance prevents proposers from securing additional funding in the future.

Damaged Reputation
A bad reputation leads to diminished credibility and a loss of confidence among the vast majority of the community.
Once it is damaged, it becomes extremely hard to reverse, resulting in direct and impactful consequences.

(Please feel free to add any missing extras in the comment section.)

Our goal is to refine these approaches and develop new strategies to further strengthen credibility and trust within our ecosystem.

Your valuable feedback is essential and highly appreciated as OG Tracker is a collaborative tool that always strives for unified efforts towards improving accountability and the overall OpenGov experience!

Contact us on X or at contact@ogtracker.io

Thank you.


Reputation systems are an unsolved problem and fail for a few reasons:

  1. Where they lack sufficient exogenous reputation factors.
    Some of your proposals mitigate this by adding human labour and external data into the loop.

  2. Where there are too many, or too few, rounds (that’s rounds in a game-theory sense)
    2a) Where there are too many rounds, reputation systems can fail by creating a positive feedback loop - a reputation, once damaged, is hard to repair, creating an incentive to ‘spend’ what remains of it for short-term benefit (and long term complete loss of the reputation)
    2b) Where there are too few rounds, having high reputation does not have sufficient value to incentivise honest behaviour against a player’s short term interest.

  3. They are rarely sybil-resistant
    And personally, I believe the compromises necessary for sybil-resistance (such as KYC) are intolerable.

  4. They are necessarily subjective (because, if they are fully objective, they must be endogenous and therefore game-able)
    Subjectivity is the lesser of the evils, but it means there will be a bias towards insiders, whales, and those with the Machiavellian tendencies to play the ‘game’ better than others (on the flip side, those with fewer social skills, or less time to devote to the ‘game’, will be disadvantaged).

2b) is a particular worry, as grifters, and teams/individuals on the edge of leaving the ecosystem (for example due to bankruptcy), will see the reputation system as a few-round (or single-round) game, and therefore have little disincentive to burn reputation for short-term gain. Whereas those in it for the long term, precisely the players we would wish to favour, will be inhibited, by treating it as a many-round game.

Most of these can be mitigated with some tweaking, but the fundamental problem with systematising reputation is that an optimal reputation system is a holy grail in the first place, and never more than a couple of steps from failure even if you do get there.

So I’m arguing not against the use of reputation, but against systematising it by aiming for an optimal solution.

Rather, I think the best solution is, firstly, to share and make available reputation-relevant information, but never to favour one source, metric or method; and, secondly, unfortunately, to always keep on our toes, watching out for the latest ways to circumvent what we are watching out for.