Coinstudio paid $3,400 per month as curator. Excuse me?

I continue to find it deeply concerning that Coinstudio is still receiving $3,400 per month as a curator. I have asked multiple times about their concrete responsibilities and where the time dedicated to this role can be verified, yet I have only received vague answers and deflections.

If we consider that the IBP pays other operators $85 per hour for administrative tasks (e.g., StakePlus), the implication is that Coinstudio is effectively billing 40 hours per month for this bounty ($3,400 / $85). This raises serious questions about what work is actually being performed and how it is being measured.
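For transparency, the implied-hours figure above is nothing more than the following division, using only the two numbers already quoted:

```python
# Implied billable hours per month, using only the figures quoted above.
monthly_compensation_usd = 3_400   # curator compensation for this bounty
admin_hourly_rate_usd = 85         # hourly rate paid for administrative tasks

implied_hours = monthly_compensation_usd / admin_hourly_rate_usd
print(implied_hours)  # 40.0 hours per month
```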

It is also reasonable to assume that Coinstudio dedicates part of their time to managing their own validators and engaging in commission-related activities, which further blurs the line between curator responsibilities and personal interests.

[Screenshots: commission-flipping history for the validator identities "Coinstudio", "Coinstudio / 01", and "Coinstudio / 02"]

What I find most troubling is that the curators overseeing this bounty—many of whom are affiliated with Parity—appear willing to tolerate or overlook these abuses. This sets a very poor precedent and undermines trust in both the process and the people responsible for safeguarding it.

Hi, have you tried reaching out to Tom as he suggested in his post above? He would be best qualified to answer your questions.

The only curator affiliated with Parity is me.

To bring the discussion back to verifiable facts, below are the actual statistics for the three validator accounts, based on staking.validate extrinsics recorded on Subscan.

Validator 1
https://polkadot.subscan.io/extrinsic?address=14d2kv44xf9nFnYdms32dYPKQsr5C9urbDzTz7iwU8iHb9az

  • Total staking.validate calls: 21

  • Time period: 2023-09-12 → 2025-03-18


Validator 2
https://polkadot.subscan.io/extrinsic?address=16cT2wjqq18WJdNwzeDvm57GgiQHhaQeWCrA5ZUPyKhyujtF

  • Total staking.validate calls: 100

  • Time period: 2024-03-02 → 2025-11-01

  • Calls in 2025 alone: 24

Interpretation (Validator 2):
The higher number of staking.validate calls for this validator is explained by periods of inactivity in 2024 and earlier. During those periods, both staking.nominate and staking.validate extrinsics were submitted more frequently in order to re-check eligibility and restore the validator’s stake to an active state.


Validator 3
https://polkadot.subscan.io/extrinsic?address=16ZrzTmq8yZ3Mq4LQZxQAaoNx4fStAp9neH8M62iNYks87bv

  • Total staking.validate calls: 23

  • Time period: 2025-03-16 → 2025-11-01


The numbers above clearly show how many times staking.validate extrinsics were issued by each validator and over what time period.
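For anyone who wants to double-check these figures rather than take them on trust, they can be reproduced from the Subscan pages linked above. Below is a minimal Python sketch that counts staking.validate calls per year from a CSV export of an address's extrinsic list; the two column names used are assumptions and will likely need adjusting to match the actual export headers.

```python
import csv
from collections import Counter

# Assumed column names -- adjust to the header row of your Subscan CSV export.
ACTION_COLUMN = "Call"            # column containing e.g. "staking(validate)"
TIME_COLUMN = "Block Timestamp"   # column containing the extrinsic time, e.g. "2025-03-18 ..."

def count_validate_calls(csv_path: str) -> Counter:
    """Count staking.validate extrinsics per year in a Subscan CSV export."""
    per_year = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            action = (row.get(ACTION_COLUMN) or "").lower()
            if "staking" in action and "validate" in action:
                year = (row.get(TIME_COLUMN) or "")[:4]
                per_year[year] += 1
    return per_year

if __name__ == "__main__":
    counts = count_validate_calls("validator_extrinsics.csv")
    print("staking.validate calls per year:", dict(counts))
    print("total:", sum(counts.values()))
```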


What is really concerning here is the constant aggressive tone and repeated attempts at defamation without first checking the facts or even attempting to speak with the people involved. This is not the culture we have tried to establish here over the years.

I would like to remind you of the thread where this was already discussed:

Hi @Megadot! I’m concerned that your recent messages are adding a lot of churn and are coming across as personal toward certain people.

Please keep in mind that W3F is already funding OpenGov.Watch to do this exact kind of oversight. This work can be done in a way that’s more effective and more constructive: by keeping the signal-to-noise ratio high, avoiding reputational harm, and validating claims before making them public. In practice, that means contacting the people you’re raising concerns about via private channels first, giving them an opportunity to respond, and escalating publicly only with clear evidence. The good news for you is that they are exactly the people who can help you do that; they are also funded until the end of the month, so take advantage of that while you can :wink:.

I strongly recommend collaborating with them and stepping back from public comms. They can help channel your efforts into a process that’s effective, fair, and focused on facts… so it doesn’t come across as personal or adversarial.

I’d love to introduce you to @jeeper: I think you’ll get along well. You both care about protecting treasury funds and improving oversight, and you seem to approach evaluations in a similarly structured way. You even share the same kind of tendencies when it comes to evaluating how functional or dysfunctional bounties are. For example, neither of you had any concerns about the UX bounty. That’s why I think the two of you will get along very well. It might be helpful to collaborate and align on a consistent framework for judging bounty performance.

Some folks have speculated you might be an alt of @jeeper because of the many similarities. I asked him directly yesterday and he confirmed that’s not true. He also said he’d prefer people participate openly rather than anonymously. So, please, help each other!

What worries me is the impact this is having on perceptions across the community. The current approach is fueling speculation about whether these anonymous accounts are being supported (directly or indirectly) by the W3F. Even if that’s not true, the optics are damaging and it undermines trust in governance. It also creates the impression that internal capacity is being used to create drama, and then justify a large team to manage said drama, rather than focusing the resources on reducing it and focusing on the important work.

I can understand why the speculation comes up (I even had that thought for a moment) but on reflection I don’t think it holds up. I’m confident W3F has higher-priority work than enabling avoidable drama.

That’s why I’m asking you (and any other anonymous accounts involved) to help de-escalate here :folded_hands:. @Megadot, please coordinate directly with @jeeper so concerns can be handled through the W3F-funded governance process: gather evidence, reach out privately to the people involved first, and only share publicly once facts are verified and there’s something actionable to report. This will reduce unnecessary drama and lower the risk of unfair accusations.

If you’re open to it, I’d also suggest letting @jeeper be the public point of contact for updates while you focus on the research.

To protect trust in the process, I think it’s important that anonymous accounts stop hiding themselves. Please coordinate with the W3F governance team so allegations are evaluated rigorously and any misunderstandings are resolved.

Thanks in advance for considering my suggestion! :folded_hands:

I would kindly ask that you stop whatever personal vendetta you appear to hold against me. The fact that you have, for some reason, granted yourself a moral high ground from which you blame, point fingers, and attack people across the forum says more about your own conduct than about the people you are attacking.

I also ask that you stop invoking my name in posts made by anonymous accounts, whether on this forum, Subsquare, or other platforms.

I will repeat wise words from the post above:

gather evidence, reach out privately to the people involved first, and only share publicly once facts are verified and there is something actionable to report.

What we are seeing instead creates the impression that internal capacity is being used to generate drama, and then justify a larger structure to manage that drama, rather than focusing resources on reducing it and on doing the important work.

I responded as I did because I’m under no obligation to respond to a random alt account. Nothing about your questions is uncomfortable.

It is an undeniable fact that, based on on-chain data, several validators (Coinstudio included) have been caught engaging in bad practices, including running validators under multiple identities and performing commission flipping, abusing the system to the detriment of nominators. This has been clearly documented in other forum threads and Medium posts.

In your case, regardless of attempts to downplay it or argue that it was done “less” than by others, in my opinion the responsibility is the same.

It is not ethical for a validator to advertise a 0% commission for most of the day, only to exploit a narrow time window—when the protocol records the commission that will apply to the next era—to raise it to 5%, 10%, or even 100%, and then promptly revert it back to 0%.

This pattern has been repeated consistently over long periods of time. As a result, validators have extracted hundreds or even thousands of DOT from nominators who reasonably believed they were delegating to low-commission validators. The harm is not only financial, but also reputational, as it undermines trust in the staking system as a whole.
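To make the pattern concrete, here is an illustrative sketch of how such flips can be flagged once a validator's commission-change history has been extracted (for example, from its staking.validate extrinsics). The (timestamp, commission) input format and the 6-hour window are assumptions chosen for illustration, not a reference tool:

```python
from datetime import datetime, timedelta

# Illustrative input: each entry is (timestamp of a staking.validate call,
# commission it set, as a fraction -- 0.10 means 10%).
def find_commission_flips(events: list[tuple[datetime, float]],
                          max_window: timedelta = timedelta(hours=6)) -> list[dict]:
    """Flag raise-then-revert sequences: the commission is raised and then set
    back to (or below) its previous level within max_window. The 6-hour window
    is an assumed threshold, not a protocol constant."""
    events = sorted(events)
    flips = []
    for i in range(1, len(events)):
        t_raised, c_raised = events[i]
        _, c_before = events[i - 1]
        if c_raised <= c_before:
            continue  # not a raise
        for t_reverted, c_after in events[i + 1:]:
            if t_reverted - t_raised > max_window:
                break
            if c_after <= c_before:
                flips.append({"raised_at": t_raised, "from": c_before,
                              "to": c_raised, "reverted_at": t_reverted})
                break
    return flips

if __name__ == "__main__":
    # Toy example: 0% advertised, briefly raised to 100%, then reverted.
    demo = [
        (datetime(2024, 5, 1, 0, 0), 0.00),
        (datetime(2024, 5, 2, 13, 50), 1.00),  # raised shortly before the era snapshot
        (datetime(2024, 5, 2, 14, 40), 0.00),  # reverted shortly after
    ]
    print(find_commission_flips(demo))
```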

****

Returning to the topic of this thread, I believe it is entirely legitimate for community members to ask for transparency regarding how funds are spent in certain bounties, especially given past excesses and abuses that ultimately led W3F to intervene and shut some of them down.

In your specific case, Coinstudio operates multiple validators (between 15 and 20 by my estimation) across Polkadot, Kusama, Paseo, and several system chains, which already requires a non-trivial amount of time and operational effort.

In addition, Coinstudio acts as curator for Bounty 31 and Bounty 25, receiving approximately $2,500 per month ($1,180 and $700 respectively, according to their spreadsheets).

Bounty 31 - DOT :backhand_index_pointing_down:
[screenshot b1]

Bounty 25 - KSM :backhand_index_pointing_down:
[screenshot b2]

For this reason, your role as curator in Bounty 50 – Infrastructure Builders Program, and the $3,400 per month compensation associated with it, raises serious questions. Given that this bounty pays $85 per hour for administrative tasks, this compensation implies roughly 40 hours of work per month dedicated exclusively to this bounty.

My question is therefore clear and direct:
Please provide a detailed breakdown of the administrative tasks you perform on a day-to-day basis for this bounty to justify the $3,400 monthly compensation, and, if possible, provide verifiable evidence to support it.

It is difficult to reconcile how you can simultaneously manage a large fleet of validators across multiple networks, act as curator for Bounties 31 and 25, and also put in the equivalent of a further 40 hours per month as curator for Bounty 50 – Infrastructure Builders Program.

Given that this is one of the bounties managing the largest budgets, I believe the community is fully justified in asking for clear answers and concrete evidence to properly assess the work being done—rather than vague or evasive responses, which is all that has been provided so far.

This follow-up (re: IBP curator pay / auditability and the broader validator-set discussion) is exactly the kind of “institutional surface” that tends to get waved away as social drama — until it hardens into a durable chokepoint.

In “message number 25” of this thread, the claim being argued is basically: (a) certain validator operators can exploit timing + UI observability gaps (e.g., commission flips / identity splitting), and (b) at the same time, large budget rails (bounties / curator roles / reporting) can become hard to audit in practice, creating trust erosion and a concentration vector.

This is why I keep pushing collaboration around RFC-0162:

  • RFC-0162’s framing is: treat institutional capture / chokepoint formation as a security failure mode, and force every future change to include a Market Structure Impact analysis (i.e., make these attack surfaces explicit, reviewable, and bounded).
  • It also explicitly does not prescribe treasury procurement policy, so the goal isn’t “use RFC-0162 to litigate one bounty.” The goal is to stop building systems where auditability and contestability are optional.

Concrete suggestion for anyone willing to engage productively:
Let’s propose an RFC-0162 follow-up amendment that adds observability/auditability invariants for protocol-adjacent “utility rails” (dashboards, reports, registries) that the ecosystem treats as canonical in practice—so the default is verifiable history, not “trust me” narratives.

If you’re sympathetic to that direction: please jump into the RFC-0162 review thread and propose one concrete invariant / metric / monitoring requirement we can actually implement.
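To make "one concrete invariant" slightly less abstract, here is the shape of thing I have in mind, sketched as a commission-stability check: a validator's advertised commission should not change more than a small number of times within any rolling window. Both thresholds below are placeholders, and whether this is the right metric at all is exactly what the RFC-0162 review thread should debate.

```python
from datetime import datetime, timedelta

# Placeholder thresholds for an illustrative invariant:
# "no more than MAX_CHANGES commission changes within any rolling WINDOW".
MAX_CHANGES = 2
WINDOW = timedelta(hours=24)

def violates_commission_stability(change_times: list[datetime]) -> bool:
    """Return True if more than MAX_CHANGES commission changes fall inside
    any rolling WINDOW-sized interval (sliding-window count)."""
    times = sorted(change_times)
    start = 0
    for end, t in enumerate(times):
        while t - times[start] > WINDOW:
            start += 1
        if end - start + 1 > MAX_CHANGES:
            return True
    return False
```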