A Case Study in Misaligned Incentives: The Polkadot UX Bounty

As many of you know, I’ve been critical of the UX bounty since its inception. Back then, my main concern was that, despite good intentions, the main promoters of the bounty were making a rather shallow analysis of the situation. Looking at the backlog they were putting together, it already seemed impossible to me that the underlying issues some of us had been pointing out for years, the ones I see as the root cause of Polkadot’s bad UX, could be solved through a “UX bounty”.

I remember discussing this with many people at the time. My two main concerns were:

Unfortunately, as time went by, things turned out even worse than I anticipated. In this blog post I want to explain why I believe that’s the case.

Before going into the criticism, I do want to acknowledge that a few good things have come out of this bounty, such as:

However, as you’ll see, these examples are the exception in what I consider a very problematic bounty. Let’s look at the issues one by one.


Misaligned incentives

According to the public SSOT spreadsheet for the UX bounty, the total spend to date is $339,626 USD. Out of that:

  • Roughly 80% of all funds have gone directly to curator-controlled addresses (Braile ~41%, DH3 ~22.2%, Niftesty ~7.4%, Flez ~4.2%, Nino ~3.2%, etc.).

  • About 3% went to Husni, for work on an initiative led by one of the curators (Niftesty).

  • The only clearly significant spends that appear to be independent of the curators themselves are:

    • Velocity Labs (Turtle), around 6.8% of total spend.
    • Jelly Studio audits, around 4.77% (assuming, in good faith, that Jelly Studio is not controlled by any curator. I don’t have evidence either way).
  • Everything else is comparatively small.

On top of that, around 44.4% of all expenses are justified as “operational” (curator fees, “bounty ops”, management, coordination, calls, meetings, etc.).

Taken together, this paints a very worrying picture:

  • The same small group of people both control the bounty and receive the majority of its funds.
  • Nearly half of the budget is being burned on “ops” just to keep the bounty machinery running, regardless of whether it produces any real value.
  • The fraction of funds going to initiatives clearly outside the curator circle is surprisingly low for something that is supposed to be a public bounty.

In other words, the incentives are set up so that:

  1. Curators are strongly rewarded for keeping themselves and their close collaborators busy (more audits, more coordination, more “bounty ops”), not necessarily for generating durable and meaningful UX improvements for the ecosystem.
  2. There is very little structural pressure to say no to low-impact work, or to stop funding things that aren’t working.
  3. Because most of the spending circulates within the same group, there is a natural tendency toward a kind of “mutual validation loop”: everyone signs off on everyone else’s work because it’s in their direct interest to believe (and present) that the bounty is doing a great job.

This is how you end up with what in the past I described as the bounty “capturing value from existing inefficiencies rather than helping to resolve them”. Instead of tackling the structural UX problems that affect many products, the system mostly incentivises:

  • Writing more reports.
  • Running more product-specific audits.
  • Logging more “operational” hours.

All of that looks busy and measurable on paper, but it certainly doesn’t bring us closer to the actual goal: making Polkadot fundamentally easier to use.

The core issue here is not that the curators are bad people or acting in bad faith. It’s that the design of the bounty effectively turns them into both judge and beneficiary of their own work.

When >80% of the money flows to the people running the program, and almost half of it is labelled as operations, it becomes extremely hard to distinguish between actual public-good UX work and a self-reinforcing circle of activity whose main outcome is to justify its own continuation.

The absurdity of the UX audits

Another major sink for bounty funds has been the so-called UX audits. According to the bounty’s own numbers, these audits represent roughly 33% of the total spend. That is an enormous share of the budget for something whose impact on the ecosystem is, at best, unclear.

These audits are entirely product-specific. Each one focuses on a single dApp, service, or website. In the best-case scenario, that might help one product make some UI tweaks or think differently about a particular flow. But it does nothing for other teams facing the same structural problems. There are no shared patterns, no reusable libraries, no reference implementations that others can adopt. Once the report is delivered, whatever value there is stays locked inside that one project.

For example, they commissioned a UX audit for PDP while the project was still very much a work in progress. The audit (carried out by one of the curators themselves, of course) cost 30,000 USD. Despite that price tag, it failed to identify even a single concrete case where the UX could be meaningfully improved by rethinking the underlying chain interactions. It stayed at the surface level, instead of tackling exactly the kind of deep, structural UX issues this bounty should be addressing.

When I asked for evidence of the actual utility or impact of these audits, what I got back were “satisfaction surveys” and appeals to authority. Things like: “the teams liked the audits”, “we received positive feedback”, “people rated them 4+ out of 5”. That is not the kind of evidence you’d expect for something consuming a third of a public bounty.

What I was expecting to see were concrete, measurable examples, for instance:

  • “Audit X led to change Y in product Z, which reduced the number of steps in this flow from 5 to 2.”
  • “We redesigned this chain interaction based on audit feedback, and now users no longer have to deal with pre-images / multiple transactions / confusing metadata.”

We still have nothing like that. No clear chain from “audit delivered” → “changes implemented” → “measurable improvement in UX”.

This leads to an obvious question: if these product-specific audits are as valuable as claimed, why aren’t teams paying for them themselves?
That is exactly what product-specific consulting is for. If a project feels it benefits from a bespoke UX review, there is nothing stopping them from:

  • Proposing their own treasury request, or
  • Paying from their own budget / revenue, or
  • Finding whatever UX consultant they prefer on the open market.

Instead, the UX bounty has positioned itself as a centralised Subsidiser of UX Consulting, where:

  • The treasury pays,
  • A small set of bounty-aligned consultants earn fees, and
  • The outputs remain mostly non-reusable, benefiting at most a handful of products.

This is a terrible alignment of incentives. The more audits they run, the more money flows to the same circle of providers, regardless of whether:

  • The recommendations are implemented,
  • The UX actually improves in a measurable way, or
  • The ecosystem gains any reusable insights.

For a public bounty, funded by the Polkadot treasury, this is the worst of both worlds: product-specific work subsidised as if it were a public good, without the accountability or shared value that true public goods require.

To top it off, these audits consistently fail to identify opportunities to:

  • Abstract away protocol/implementation details from end users.
  • Reduce the cognitive load of using dApps by hiding these low-level concepts.

The Polkadot-UI library fiasco

Another worrying example of poor judgment from the UX bounty is the Polkadot-UI library initiative.

From the very beginning, different people with deep experience in frontend libraries and tooling warned that the approach being taken was fundamentally flawed. Despite this, the bounty:

  • Ignored knowledgeable feedback and pushed ahead anyway.
  • Entrusted a complex, ecosystem-level tooling task to a curator who had never built a library before.

Even as it became clear that:

  • No one had managed to build a single production-ready dApp using the library
  • Not even its own authors could showcase a functional and non-trivial working app built on top of it

…the library was still heavily promoted:

  • It was heavily promoted via multiple X accounts, including the official Polkadot account.
  • It was strongly recommended for hackathon participants, effectively steering newcomers toward it as if it were the “modern” way to build Polkadot dApps.

The reality speaks for itself: as of today the library has fewer than 10 downloads per week.

This wouldn’t be such a big deal if it were just an experiment. The problem is that, branded and marketed as “Polkadot UI”, this library convinced a number of developers that Polkadot’s tooling itself is immature, when in fact the problem is probably not the modern tools themselves, but the specific library the bounty chose to back and promote so aggressively, which is simply a leaky, broken wrapper around those tools.


The uncomfortable question: why is this bounty still alive?

I could also spend time explaining why other initiatives under this umbrella, like the “Data Analytics Program” or the “Community Feedback Program”, have been incredibly ineffective at best, or even actively detrimental at worst. But this post is already long enough, and I’d rather focus on something even more worrying:

Why is the W3F not closing this bounty?

Several people, myself included, have the impression that this bounty has enjoyed unusually strong protection and involvement from the W3F.

For example, during their first OpenGov proposal, the referendum was failing for most of the decision period. It only turned around once our friends at opengov.watch (a project funded directly by the W3F) began lobbying very actively in favor of it. A similar pattern appeared with their later refill proposal. These are not isolated events; taken together with other signals, they create a strong perception that this bounty is not being treated like a normal, independent initiative.

Other indicators reinforce that impression:

  • A consistent unwillingness to seriously scrutinize the bounty’s practices, despite multiple red flags (including the fact that it doesn’t even operate like a real bounty, but more like a mini-program run by and for its curators).
  • A significant share of “operational” spending attributed to coordination calls and “synchronization work with W3F”, which blurs the line between independent execution and foundation-aligned operations.

When the same small circle of people:

  1. Designs the bounty,
  2. Executes the work,
  3. Receives the majority of the funds, and
  4. Appears to enjoy W3F backing/protection,

…it becomes very hard to trust that there is any meaningful, independent oversight of whether this is a good use of treasury funds.

Now that the W3F has a new mandate to look after OpenGov spending, I think it’s time to acknowledge past mistakes and correct course. Closing this bounty would be a powerful signal that things really are changing.

If the W3F wants to show that it is serious about stewardship and about the responsible use of treasury funds, this is exactly the kind of program that should be reevaluated.


EDIT:

I decided to create an OpenGov proposal (Polkassembly Link, Subsquare Link) to address this.

Also, I forgot to ask, if you are going to post a response:

  1. Please stick to the content of this blogpost and to the facts: avoid switching topics, personal attacks, conspiracy theories, etc.
  2. Anon accounts, please refrain from creating noise in this post. In fact, if what you have to say doesn’t justify the existence of an anon account, then please use your own personal account. :folded_hands:

EDIT II:

I invite everyone to flag as “off topic” any comments (especially if they come from anon accounts) that try to steer the conversation to other topics. This includes the comment from @dandan. What we are discussing here is what to do with the UX bounty at this particular point in time. Nothing else, nothing more. If someone wants to start a conversation about a somewhat related topic, please open a new post; don’t put noise into this one.

7 Likes

However, this bounty serves as a prime example of broader OpenGov transparency issues with W3F oversight. My comment aims to address the root cause highlighted in your case study, without derailing the thread.

Do it in a different post, stop being a troll

I 100% agree regarding the misaligned incentives and the need to close this bounty. I don’t think anyone did anything wrong; it’s just the curse of bureaucracy in effect, where being seen as doing something is more important than the result of that thing.

UX audits are the perfect example of this. They were done in the name of ‘doing something of value,’ but no thought was put into whether this really offered value to anyone outside those getting a free audit—or if recipients were only accepting them to avoid ruffling feathers.

The Polkadot-UI library was always destined to fail. The lack of experience was an issue, but the broken dev funnel was probably the biggest problem. It really does not matter how much it got promoted inside of Polkadot; it was not needed here. Those who would have used it have the knowledge to build their own tools, taking into account their own needs.

In the end, this is another failed experiment that solved nothing because it had no end goal, just a vague idea of the problem Polkadot is currently facing. I hope we can all learn something from this and move forward.

If I had a wish list for future DevEx/UI funding programs:

  • Pay based on market value: Funding should be based on the open market—can this person command this price elsewhere?

  • Real goals with metrics: We need clear targets to measure success.

  • Proper stewardship: The goal should be to spend only when necessary.

  • Aligned Incentives (The “Rising Tide” Model): To help the program become almost self-funding, we should lock the DOT grant at the price it is given, but only unlock the funds upon hitting metrics. This acts as a massive incentive: if the work improves the ecosystem, the token price rises, and the grant becomes more valuable. A true “rising tide lifts all boats” approach.

  • Performance-based Curators: Curator pay should be based on results, finally fixing the misaligned incentives.

Thanks @josep for taking up this initiative. I hope this perspective adds value to the discussion.

1 Like

My personal take on this is that some bounties are/were designed to source and pay external projects (like MB), whereas the UX bounty is fundamentally in-house consulting. For in-house consulting, it actually makes sense that operational costs are high; for MB, it structurally did not, since in-house consultants do the work themselves while MB mostly outsourced it.
I have no relationship to the UX bounty. However, I wanted to highlight the structural difference between these kinds of bounties.

2 Likes

Thanks for sharing your perspective. A few clarifications from my side:

  1. On bringing up the Marketing Bounty

I don’t really understand why the Marketing Bounty is being brought into this discussion. I never referenced it in my post nor used it as a comparison point in the arguments.

The critique I’m making is specifically about:

  • How this UX bounty is structured,
  • How funds are being spent, and
  • How incentives are aligned (or misaligned) for this particular bounty.

Whether other bounties were good, bad, or something in between doesn’t change the facts around this one.

  2. On what a bounty is (and who should get the funds)

There surely are different “shapes” of bounties. However, regardless of the shape, a bounty should not be a structure where the curators themselves end up receiving the overwhelming majority of the funds. That breaks the basic expectation that:

  • Curators are there to steward funds and surface good work, not to act primarily as a permanently funded team.
  • Individual initiatives and outputs could, in principle, stand on their own if they had to go through normal OpenGov scrutiny.

One of my central points is exactly this: if you took most of the things that are currently funded under this bounty (audits, library work, general “ops”, etc.) and submitted them as standalone proposals, subject to the same level of attention and questioning that other proposers receive, I think we all know that many of them would be very unlikely to pass.

The bounty, as it exists now, largely functions as a way to bundle and shield these initiatives from that normal scrutiny while still consuming treasury funds.

  3. On the idea of “in-house consultants”

I honestly don’t know what “in-house consultants” is supposed to mean in the context of Polkadot:

  • “In-house”… in which house?
  • This is a decentralized network where each product has its own “house” (its own team, governance, roadmap, and responsibilities).
  • Who decides which products are eligible for this “in-house” treatment, and based on what criteria?

If, tomorrow, a completely nefarious organization spins up a parachain, are they now eligible for subsidised “in-house UX consulting” too? If not, why not? Who draws that line?

To me, that concept is fundamentally at odds with how OpenGov and the treasury are supposed to work. A treasury-funded bounty should be aimed at transversal, public-good concerns (cross-cutting tools, patterns, standards, etc.), not at providing product-specific consulting services to a select set of teams.

If teams want private, product-specific UX consulting, that’s totally fine. They should just pay for it themselves, via their own treasury proposals or their own budgets.

I submitted several UX issues through the UX Bounty website in early October but haven’t received updates on whether or how they will be addressed. It would be great if community UX feedback could be more fully leveraged.

2 Likes

On their last top up REEEEEEEEEE’s feedback was

The response received

As far as I can tell or know (I could be wrong), there are no generic components that were ever created. They said they would deliver what we wanted and were expecting in Q3 :man_shrugging: – Maybe this is the Polkadot UI stuff and I just have no idea what’s going on?

This (UI) is definitely an issue that needs to be solved. Fortunately some other teams are now working on exactly that per recent refs.

2 Likes

Thanks for bringing this up and for actually taking the time to submit issues.

In my original post I wrote:

What you’re describing is exactly one of the parts I didn’t elaborate on to keep the post from turning into a book, so let me expand a bit now that you’ve raised it.

The “Community Feedback Program” is, in my view, another example of how this bounty operates in practice:

  • The focus is on collecting and logging issues and being able to say “look, we have a community feedback pipeline and incentives”.
  • Much less attention is given to actually driving those issues to resolution with the teams who own the products.

In other words, the incentives are set up to reward:

  • Having a nice form,
  • Creating a lot of buzz on social media,
  • Having a public notion page,
  • Being able to show numbers and screenshots in meetings,

…but not to ensure that the underlying problems are fixed.

From the outside, that plays very well in presentations (“we compensate users for finding issues”, “we have X feedback entries”), especially to people “upstairs” who aren’t close to the day-to-day (cough, W3F, cough). But if the reports then just sit there with no resolution, then well… we just lost an opportunity to actually fix them.

So I’m genuinely sorry that you took the time to report issues and got silence back. That’s exactly the kind of mismatch I was trying to point out.

Practically speaking, if you want those UX issues to be addressed, you’re usually better off going straight to the teams that actually own the products:

  • If something is related to the polkadot-api libraries I (and my team) maintain, or to polkadot-js (which I try to help with in my spare time), I’m more than happy to at least take a look or help route it.
  • Likewise, issues with specific dApps or sites are much more likely to get attention if they’re reported directly to those projects’ repos/channels.

The people running this bounty can at best act as a broker or middle-man, and so far they haven’t shown that they can reliably close the loop between “community report” and “issue resolved”. That’s a structural problem with how the program is set up, not with you or your feedback.

Thanks again for sharing your experience here. It’s useful concrete evidence of the gap between the narrative and the actual outcomes.

2 Likes

First of all, thank you for taking the time to analyze the current functioning of the bounty. I really appreciate the effort and attention you’ve put into reviewing the process.

I agree with you that UX audits should not be funded by the ecosystem treasury, but instead handled individually by each project. UX is an iterative process that evolves as a project grows and should remain a core part of each team’s ongoing strategy, especially as new features are introduced.

Regarding the issues reported, most of them are related to inconsistencies on the Polkadot.com website, including:

  • Pages lacking a parent category due to inconsistent URL slugs, which removes the navigation bar and makes browsing the site harder

  • Some URLs point to suboptimal pages; updating them to better destinations would enhance navigation

  • Documentation pages with inefficient structure, forcing users to make extra clicks to find information

  • Categories with overlapping content that could be merged to simplify navigation

  • Outdated references to inactive projects, which could confuse users

I believe addressing these points would help improve usability and create a more consistent experience for users across the site.

I remain fully available to W3F to discuss the best way to share and channel these suggestions, should that be helpful.

2 Likes

Here it’s important to understand that the Polkadot.com website is/was handled by Distractive and is tied to W3F @faraday .

We all raised points to improve the Polkadot.com website, the UX bounty tried to move things, including myself with my Bridges’ analysis, which at least resulted in creating a Bridge page that didn’t exist before.

The issue is that the page was designed by Distractive, without following the UX Bounty guidelines… So that’s not the page we wanted.

It’s a partial win in the end: we got the page which is better than having nothing, but the data and the design are not the ones we wanted in the end.

So, at some point, you can have all the will you want to move things and make them better, you’re hitting a wall. It can’t be the UX bounty’s fault for everything.

There are also many political games that the community is not aware of.

1 Like

I would add another win: [UXB-3] - Protective Measures for transfer to CEXs - #2 by ThomasR

I worked on it for months to improve wallets’ UI/UX and prevent users from being rugged by design due to the over-complexity of XCM transfers from parachains to CEXs for DOT/KSM.

So many users lost assets because it was not obvious in wallets that they couldn’t XCM from parachains to CEXs directly (whereas most dApps cautiously prevented it).

How many assets lost? How many users that never returned to the ecosystem because of their loss? A lot.

I had recurrent cases to deal with doing Bifrost’s support, and I saw recurrent issues in wallets TG channels too.

And you know what, after wallets have changed their UI, adding warnings and adding controls, suddenly these cases dropped to zero… At least from my support perspective.

I didn’t have to deal with any new cases since.

That’s not something that advanced users noticed, but people doing support inside the eco will tell you it wasn’t a minor issue: it was a major reputation issue, and from users’ perspective the eco was seen as a scam network.

Cheers. :clinking_beer_mugs:

5 Likes

You can’t have it both ways:

  • Either you are upfront about which issues are in-scope and which aren’t, and you clearly redirect people to the right place,
  • Or you don’t set up a “community feedback program” that invites users to report issues you can’t meaningfully act on.

The problem here is not “hitting a wall”, it’s that the program over-promised and under-delivered:

  • The people running it can’t fix most of these issues themselves.
  • They don’t seem to have a robust process to get them fixed by the teams that actually own the products.
  • So in practice, they end up as an inefficient middle-man, collecting reports that mostly go nowhere.

And they knew this from the beginning. The primary output of the program is not “resolved UX issues”; it’s a narrative they can present: “we compensate users for feedback, we have a feedback pipeline, look at our forms and dashboards”. This is just another of their fugazzis. It’s all smoke and mirrors, a narrative with no substance behind it.

So no, the UX bounty is not “at fault for everything”. But if you set up a program that:

  1. Actively solicits UX reports,
  2. Can’t meaningfully process or resolve most of them, and
  3. Still uses the existence of that program as evidence of “impact”,

…then it’s fair to criticise the design and honesty of that program.

The polkadot.com website saga probably does deserve its own Netflix series, but we should try to stay focused on the bounty itself in this thread. :folded_hands:

1 Like