The OpenGov.Watch Monitor V2

tldr: I built the dune.com for OpenGov

Two years ago we released an auto-updating Google Spreadsheet that shows all your OpenGov referenda on one screen. We called it: The OpenGov.Watch Monitor.

Over time this tool has grown to show different views on the Polkadot Treasury in a super-compact format, and it has been feeding into the quarterly Polkadot Treasury Report series.

OpenGov.Watch is coming to an end, but there will continue to be a need to keep watching the Treasury. So as a parting gift I have built a website that supports 90% of the queries and visualizations we did for OpenGov.Watch and the Treasury Reports. Put this in your bookmarks:

monitor.opengov.watch


It's a page for the data nerds!

Features

A super-compact view of all refs, treasury spends, child bounties, and fellowship salaries

Complex filters

Allow you to ask questions like “What were the outstanding claims on the Treasury on 2025-12-31?”

Saved views

Allow you to save filters, sort order, visible columns

Dashboards

Dashboards contain components that you can configure individually

SQL Query Composer

The query composer lets you select, join, filter, group, and sort, and supports aggregate functions, complex expression columns, and, where applicable, render options.
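To illustrate the kind of question the filters and query composer can answer, here is a sketch of the "outstanding claims on 2025-12-31" query against a hypothetical treasury-spends table. The table name, columns, and sample rows are all assumptions for illustration, not the Monitor's actual schema.

```python
import sqlite3

# Hypothetical schema for illustration only; the real Monitor's tables differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE treasury_spends (
    id INTEGER PRIMARY KEY,
    awarded_at TEXT,   -- ISO date the spend was approved
    claimed_at TEXT,   -- ISO date it was paid out; NULL if still unclaimed
    amount_dot REAL
);
INSERT INTO treasury_spends VALUES
    (1, '2025-10-01', '2025-11-15', 1000.0),
    (2, '2025-11-20', NULL,          500.0),
    (3, '2026-01-05', NULL,          250.0);
""")

# "What were the outstanding claims on the Treasury on 2025-12-31?"
# -> spends awarded on or before the cutoff that were not yet claimed by it.
cutoff = "2025-12-31"
row = conn.execute("""
    SELECT COUNT(*), COALESCE(SUM(amount_dot), 0)
    FROM treasury_spends
    WHERE awarded_at <= ?
      AND (claimed_at IS NULL OR claimed_at > ?)
""", (cutoff, cutoff)).fetchone()
print(row)  # (1, 500.0): only spend 2 was awarded before the cutoff and unclaimed
```

The point-in-time twist is the second condition: a spend claimed after the cutoff still counts as outstanding on that date.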

Under the Hood

Structure

The monitor consists of 3 parts:

  • backend scraper
  • API
  • frontend

The repo uses pnpm.

The backend is historic, almost two years old, and was still coded with my bare hands. Written in Python. The API and frontend are written in TS. The frontend is React/Vite/Tailwind, with tables built on TanStack. It is themed in the Second Age style and also has a theme switcher for light and dark mode.

The thing is containerized. It supports differentiating between production, staging, and local environments. The config currently assumes my URLs, so you would need to change those.
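The environment split could be wired up with something as simple as an environment variable selecting a config block. The variable name and URLs below are assumptions, not what the repo actually uses:

```python
import os

# Hypothetical env-based config selection; names and URLs are made up.
CONFIGS = {
    "local":      {"api_url": "http://localhost:3000"},
    "staging":    {"api_url": "https://staging.example.org"},
    "production": {"api_url": "https://monitor.example.org"},
}

# Default to "local" if the (hypothetical) MONITOR_ENV variable is unset.
env = os.environ.get("MONITOR_ENV", "local")
config = CONFIGS[env]
print(config["api_url"])
```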

Data Pipeline

The backend runs against Subsquare. Yongfeng has been super helpful throughout the last years and I am very grateful to him.

In the past the backend sank data into the Google Spreadsheet; now it sinks it into a SQLite database. The data is very light, about 4 MB right now, so we don't need anything fancier. For the first run, you do a backfill that scrapes the full history. After that, a cron job runs every hour to fetch the latest updates.
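The backfill-then-incremental pattern can be sketched as below. `fetch_referenda` stands in for the real Subsquare client; its name, signature, and the column set are assumptions, not the Monitor's actual code.

```python
import sqlite3

# Stand-in for the Subsquare client; real code would page through the API.
def fetch_referenda(since_index=0):
    return [
        {"index": since_index, "title": "Example ref", "state": "Deciding"},
    ]

def upsert(conn, rows):
    # Upsert so that re-fetched rows pick up state changes instead of erroring.
    conn.executemany(
        """INSERT INTO referenda (idx, title, state) VALUES (:index, :title, :state)
           ON CONFLICT(idx) DO UPDATE SET title=excluded.title, state=excluded.state""",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE referenda (idx INTEGER PRIMARY KEY, title TEXT, state TEXT)")

# First run: backfill the full history from index 0.
upsert(conn, fetch_referenda(since_index=0))

# Hourly cron: resume from the highest index already stored; re-fetching the
# last known index catches referenda whose state changed since the last run.
(last,) = conn.execute("SELECT COALESCE(MAX(idx), 0) FROM referenda").fetchone()
upsert(conn, fetch_referenda(since_index=last))
print(conn.execute("SELECT COUNT(*) FROM referenda").fetchone())  # (1,)
```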

There is a custom migration system to allow prod updates after schema/view changes. The database uses WAL and has reasonable checkpointing logic, but occasional backups after significant dashboard work are still suggested.
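A minimal versioned migration runner over SQLite with WAL enabled might look like this. The migration bodies are placeholders; the Monitor's actual migration system and schema differ.

```python
import sqlite3

# Placeholder migrations keyed by target schema version.
MIGRATIONS = {
    1: "CREATE TABLE spends (id INTEGER PRIMARY KEY, amount REAL)",
    2: "ALTER TABLE spends ADD COLUMN category TEXT",
}

def migrate(conn):
    conn.execute("PRAGMA journal_mode=WAL")            # readers stay unblocked during writes
    (version,) = conn.execute("PRAGMA user_version").fetchone()
    for target in sorted(v for v in MIGRATIONS if v > version):
        conn.executescript(MIGRATIONS[target])
        conn.execute(f"PRAGMA user_version={target}")  # record the new schema version
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # idempotent: a second run finds nothing left to apply
print(conn.execute("PRAGMA user_version").fetchone())  # (2,)
```

Tracking the version in `PRAGMA user_version` keeps the bookkeeping inside the database file itself, so a prod deploy just runs `migrate` on startup.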

There is a backend management interface.

Categories/subcategories and bounty data are manually managed on the backend. There is also support for uploading arbitrary custom tables on the backend, which are then available for viewing or joining on the dashboards. For all manually managed tables that are essential for the Treasury Report, defaults are also available on the backend.

The backend tries to be optimistic about faulty/unknown data: as long as a record is well-formed, it is inserted into the database and available for querying from the frontend. If certain validations fail, the backend produces errors that are visible in the frontend for authenticated users.
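The optimistic-ingestion idea can be sketched as: well-formed rows go in even when some fields are unknown, while rows failing hard validation land in an error table for authenticated users to inspect. All field and table names here are illustrative, not the Monitor's schema.

```python
import sqlite3

def ingest(conn, rows):
    for row in rows:
        if not isinstance(row.get("index"), int):   # hard validation failure
            conn.execute(
                "INSERT INTO ingest_errors (payload, reason) VALUES (?, ?)",
                (repr(row), "missing or non-integer index"),
            )
            continue
        # Unknown state? Insert anyway and let the frontend render it as-is.
        conn.execute(
            "INSERT OR REPLACE INTO refs (idx, state) VALUES (?, ?)",
            (row["index"], row.get("state", "unknown")),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE refs (idx INTEGER PRIMARY KEY, state TEXT);
CREATE TABLE ingest_errors (payload TEXT, reason TEXT);
""")
ingest(conn, [{"index": 1, "state": "Weird"}, {"state": "no index"}])
print(conn.execute("SELECT COUNT(*) FROM refs").fetchone())           # (1,)
print(conn.execute("SELECT COUNT(*) FROM ingest_errors").fetchone())  # (1,)
```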

Other notes

It has fairly basic user management, and managing dashboards or constructing queries is only allowed for authenticated users. I suppose you could construct pretty nasty queries, which is the main reason this is gated behind authentication. It also comes with some basic rate limiting per IP.
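Per-IP rate limiting of the basic kind mentioned above is typically a sliding window over recent request timestamps. The Monitor's API is TypeScript; this Python sketch just illustrates the idea, and the limits are made up.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: at most `limit` requests per `window` seconds, per IP."""

    def __init__(self, limit=60, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] > self.window:  # drop hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False                        # over the limit: reject, don't record
        q.append(now)
        return True

rl = RateLimiter(limit=2, window=60.0)
print(rl.allow("1.2.3.4", now=0.0))   # True
print(rl.allow("1.2.3.4", now=1.0))   # True
print(rl.allow("1.2.3.4", now=2.0))   # False (third hit inside the window)
print(rl.allow("1.2.3.4", now=62.0))  # True (earlier hits have expired)
```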

On the use of AI

API and frontend were created for this project and are 100% vibe coded. I have never even looked at the code; I just made sure the architecture is sufficiently reasonable. It comes with a bunch of documentation and unit/regression tests to make sure it holds together, and for the finalization of the project I spent a few million tokens just to make sure they are in sync. CLAUDE.md and docs/CLAUDE.md are human-created; the rest is derived. I also let Claude do 4 complete runs of security audits near the end. In the last two runs, no high- or medium-severity issues popped up.

The repo comes with a few practical Claude commands that I have been constantly using during dev, especially for deployment and backups. And it comes with a react-best-practices skill that you can use when working on or reviewing the frontend.

The repo

  • main branch is the current staging version
  • production branch is the prod version

Feel free to reach out here with any questions. If you want to deploy it for yourself, I might also help. The Claude flow was working pretty well at the end, so I suggest throwing any issues you encounter, as a repro, into Claude Code; it will very likely be able to figure it out.

PRs will be accepted for now. Please reach out via TG → alice_und_bob


Thank you so much for your passion and commitment to transparency. Also, props for this neat tool!