A Case Study in Misaligned Incentives: The Polkadot UX Bounty

As many of you know, I’ve been critical of the UX bounty since its inception. Back then, my main concern was that, despite good intentions, the main promoters of the bounty were making a rather shallow analysis of the situation. Looking at the backlog they were putting together, it already seemed impossible to me that the underlying issues some of us had been pointing out for years, the ones I see as the root cause of Polkadot’s bad UX, could be solved through a “UX bounty”.

I remember discussing this with many people at the time. My two main concerns were:

Unfortunately, as time went by, things turned out even worse than I anticipated. In this blog post I want to explain why I believe that’s the case.

Before going into the criticism, I do want to acknowledge that a few good things have come out of this bounty, such as:

However, as you’ll see, these examples are the exception in what I consider a very problematic bounty. Let’s look at the issues one by one.


Misaligned incentives

According to the public SSOT spreadsheet for the UX bounty, the total spend to date is $339,626 USD. Out of that:

  • Roughly 80% of all funds have gone directly to curator-controlled addresses (Braile ~41%, DH3 ~22.2%, Niftesty ~7.4%, Flez ~4.2%, Nino ~3.2%, etc.).

  • About 3% went to Husni, for work on an initiative led by one of the curators (Niftesty).

  • The only clearly significant spends that appear to be independent from curators themselves are:

    • Velocity Labs (Turtle), around 6.8% of total spend.
    • Jelly Studio audits, around 4.77% (assuming, in good faith, that Jelly Studio is not controlled by any curator; I don’t have evidence either way).
  • Everything else is comparatively small.

On top of that, around 44.4% of all expenses are justified as “operational” (curator fees, “bounty ops”, management, coordination, calls, meetings, etc.).
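As a quick sanity check, here is a minimal sketch (Python) that converts the approximate percentages quoted above back into rough USD amounts. It uses only the figures already stated in this post, so treat the output as indicative rather than an audit of the SSOT spreadsheet itself:

```python
# Rough sanity check using only figures quoted in this post; all shares are
# approximate, so the derived dollar amounts are indicative at best.

TOTAL_SPEND_USD = 339_626  # total spend to date, per the public SSOT spreadsheet

# Approximate share of total spend per recipient, as quoted above.
# The curator list is incomplete ("etc."), which is why the listed curators
# sum to ~78% rather than the "roughly 80%" mentioned in the text.
approx_shares = {
    "Braile (curator)": 0.41,
    "DH3 (curator)": 0.222,
    "Niftesty (curator)": 0.074,
    "Flez (curator)": 0.042,
    "Nino (curator)": 0.032,
    "Velocity Labs (Turtle)": 0.068,
    "Jelly Studio audits": 0.0477,
    "Husni": 0.03,
}

for recipient, share in approx_shares.items():
    print(f"{recipient:<25} ~{share:6.1%}  ~${share * TOTAL_SPEND_USD:,.0f}")

curator_share = sum(v for k, v in approx_shares.items() if "(curator)" in k)
print(f"\nListed curator addresses:  ~{curator_share:.1%}  ~${curator_share * TOTAL_SPEND_USD:,.0f}")
print(f"'Operational' expenses:    ~44.4%  ~${0.444 * TOTAL_SPEND_USD:,.0f}")
```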

Taken together, this paints a very worrying picture:

  • The same small group of people both control the bounty and receive the majority of its funds.
  • Nearly half of the budget is being burned on “ops” just to keep the bounty machinery running, regardless of whether it produces any real value.
  • The fraction of funds going to initiatives clearly outside the curator circle is surprisingly low for something that is supposed to be a public bounty.

In other words, the incentives are set up so that:

  1. Curators are strongly rewarded for keeping themselves and their close collaborators busy (more audits, more coordination, more “bounty ops”), not necessarily for generating durable and meaningful UX improvements for the ecosystem.
  2. There is very little structural pressure to say no to low-impact work, or to stop funding things that aren’t working.
  3. Because most of the spending circulates within the same group, there is a natural tendency toward a kind of “mutual validation loop”: everyone signs off on everyone else’s work because it’s in their direct interest to believe (and present) that the bounty is doing a great job.

This is how you end up with what in the past I described as the bounty “capturing value from existing inefficiencies rather than helping to resolve them”. Instead of tackling the structural UX problems that affect many products, the system mostly incentivises:

  • Writing more reports.
  • Running more product-specific audits.
  • Logging more “operational” hours.

All of that looks busy and measurable on paper, but it certainly doesn’t bring us closer to the actual goal: making Polkadot fundamentally easier to use.

The core issue here is not that the curators are bad people or acting in bad faith. It’s that the design of the bounty effectively turns them into both judge and beneficiary of their own work.

When >80% of the money flows to the people running the program, and almost half of it is labelled as operations, it becomes extremely hard to distinguish between actual public-good UX work and a self-reinforcing circle of activity whose main outcome is to justify its own continuation.

The absurdity of the UX audits

Another major sink for bounty funds has been the so-called UX audits. According to the bounty’s own numbers, these audits represent roughly 33% of the total spend. That is an enormous share of the budget for something whose impact on the ecosystem is, at best, unclear.

These audits are entirely product-specific. Each one focuses on a single dApp, service, or website. In the best-case scenario, that might help one product make some UI tweaks or think differently about a particular flow. But it does nothing for other teams facing the same structural problems. There are no shared patterns, no reusable libraries, no reference implementations that others can adopt. Once the report is delivered, whatever value there is stays locked inside that one project.

For example, they commissioned a UX audit for PDP while the project was still very much a work in progress. The audit (carried out by one of the curators themselves, of course) cost 30,000 USD. Despite that price tag, it failed to identify even a single concrete case where the UX could be meaningfully improved by rethinking the underlying chain interactions. It stayed at the surface level, instead of tackling exactly the kind of deep, structural UX issues this bounty should be addressing.

When I asked for evidence of the actual utility or impact of these audits, what I got back were “satisfaction surveys” and appeals to authority: things like “the teams liked the audits”, “we received positive feedback”, “people rated them 4+ out of 5”. That is not the kind of evidence you’d expect for something consuming a third of a public bounty’s budget.

What I was expecting to see were concrete, measurable examples, for instance:

  • “Audit X led to change Y in product Z, which reduced the number of steps in this flow from 5 to 2.”
  • “We redesigned this chain interaction based on audit feedback, and now users no longer have to deal with pre-images / multiple transactions / confusing metadata.”

We still have nothing like that. No clear chain from “audit delivered” → “changes implemented” → “measurable improvement in UX”.

This leads to an obvious question: if these product-specific audits are as valuable as claimed, why aren’t teams paying for them themselves?
That is exactly what product-specific consulting is for. If a project feels it benefits from a bespoke UX review, there is nothing stopping them from:

  • Proposing their own treasury request, or
  • Paying from their own budget / revenue, or
  • Finding whatever UX consultant they prefer on the open market.

Instead, the UX bounty has positioned itself as a centralised Subsidiser of UX Consulting, where:

  • The treasury pays,
  • A small set of bounty-aligned consultants earn fees, and
  • The outputs remain mostly non-reusable, benefiting at most a handful of products.

This is a terrible alignment of incentives. The more audits they run, the more money flows to the same circle of providers, regardless of whether:

  • The recommendations are implemented,
  • The UX actually improves in a measurable way, or
  • The ecosystem gains any reusable insights.

For a public bounty, funded by the Polkadot treasury, this is the worst of both worlds: product-specific work subsidised as if it were a public good, without the accountability or shared value that true public goods require.

To top it off, these audits consistently fail to identify opportunities to:

  • Abstract away protocol/implementation details from end users.
  • Reduce the cognitive load of using dApps by hiding these low-level concepts.

The Polkadot-UI library fiasco

Another worrying example of poor judgment from the UX bounty is the Polkadot-UI library initiative.

From the very beginning, different people with deep experience in frontend libraries and tooling warned that the approach being taken was fundamentally flawed. Despite this, the bounty:

  • Ignored that expert feedback and pushed ahead anyway.
  • Entrusted a complex, ecosystem-level tooling task to a curator who had never built a library before.

Even as it became clear that:

  • No one had managed to build a single production-ready dApp using the library, and
  • Not even its own authors could showcase a functional, non-trivial app built on top of it,

…the library was still heavily promoted:

  • It was promoted via multiple X accounts, including the official Polkadot account.
  • It was strongly recommended for hackathon participants, effectively steering newcomers toward it as if it were the “modern” way to build Polkadot dApps.

The reality speaks for itself: as of today the library has fewer than 10 downloads per week.

This wouldn’t be such a big deal if it were just an experiment. The problem is that, branded and marketed as “Polkadot UI”, this library convinced a number of developers that Polkadot’s tooling itself is immature, when in fact the problem is probably not the modern tools themselves, but the specific library the bounty chose to back and promote so aggressively: a leaky, broken wrapper around those very tools.


The uncomfortable question: why is this bounty still alive?

I could also spend time explaining why other initiatives under this umbrella, like the “Data Analytics Program” or the “Community Feedback Program”, have been incredibly ineffective at best, or even actively detrimental at worst. But this post is already long enough, and I’d rather focus on something even more worrying:

Why is the W3F not closing this bounty?

Several people, myself included, have the impression that this bounty has enjoyed unusually strong protection and involvement from the W3F.

For example, during their first OpenGov proposal, the referendum was failing for most of the decision period. It only turned around once our friends at opengov.watch (a project funded directly by the W3F) began lobbying very actively in favor of it. A similar pattern appeared with their later refill proposal. These are not isolated events; taken together with other signals, they create a strong perception that this bounty is not being treated like a normal, independent initiative.

Other indicators reinforce that impression:

  • A consistent unwillingness to seriously scrutinize the bounty’s practices, despite multiple red flags (including the fact that it doesn’t even operate like a real bounty, but more like a mini-program run by and for its curators).
  • A significant share of “operational” spending attributed to coordination calls and “synchronization work with W3F”, which blurs the line between independent execution and foundation-aligned operations.

When the same small circle of people:

  1. Designs the bounty,
  2. Executes the work,
  3. Receives the majority of the funds, and
  4. Appears to enjoy W3F backing/protection,

…it becomes very hard to trust that there is any meaningful, independent oversight of whether this is a good use of treasury funds.

Now that the W3F has a new mandate to look after OpenGov spending, I think it’s time to acknowledge past mistakes and correct course. Closing this bounty would be a powerful signal that things really are changing.

If the W3F wants to show that it is serious about stewardship and about the responsible use of treasury funds, this is exactly the kind of program that should be reevaluated.


EDIT:

I decided to create an OpenGov proposal (Polkassembly Link, Subsquare Link) to address this.

Also, I forgot to ask: if you are going to post a response,

  1. Please stick to the content of this blog post and to the facts: avoid switching topics, personal attacks, conspiracy theories, etc.
  2. Anon accounts, please refrain from creating noise in this post. In fact, if what you have to say doesn’t justify the existence of an anon account, then please use your own personal account. :folded_hands:

EDIT II:

I invite everyone to flag as “off topic” any comments (especially if they come from anon accounts) that try to steer the conversation toward other topics. This includes the comment from @dandan. What we are discussing here is what to do with the UX bounty at this particular point in time. Nothing else, nothing more. If someone wants to start a conversation about a somewhat related topic, please open a new post; don’t add noise to this one.



However, this bounty serves as a prime example of broader OpenGov transparency issues with W3F oversight. My comment aims to address the root cause highlighted in your case study, without derailing the thread.

Do it in a different post. Stop being a troll.