Polkadot-API Project Update

This will be a long post, and I’d ask you to read it slowly and in full. Before we explain what’s happening today with Polkadot-API, it’s crucial to understand how we got here.


Historical context

The Polkadot-API initiative started a little over two years ago, when I decided to leave Parity and convinced @voliva, an exceptionally talented ex-colleague and friend, to leave his comfortable full-time job and join me in this risky adventure.

When I left Parity, I genuinely didn’t know if our first OpenGov proposal would pass. I had received mixed signals about whether we would get support from the “powers that be” at the time. We still took the risk, knowing that Parity’s leadership did not initially want the project to leave the company. In fact, I wasn’t originally going to leave alone: another colleague was planning to join, but Parity’s leadership managed to scare him away, and in the end they couldn’t keep him around anyway.

The reason I took that risk was simple: from inside Parity, I could already see dynamics that made me believe it would be impossible to execute on what PAPI needed to become.

Back then (and still today), I believed Polkadot’s biggest structural problem was that too much power and decision-making accumulated in two places: Web3 Foundation and Parity. To be clear: both organizations have many incredibly valuable people. That isn’t in doubt.

But when power concentrates, standards suffer, competition suffers, and so does ecosystem resilience. In my view, the Web3 Foundation’s role should be to fund and steward the research that yields proper standards, standards that enable many independent companies to build interoperable implementations.

What we’ve ended up with is something else. There is a reason Polkadot effectively has one host implementation running in production. It’s the same dynamic that leads Parity leadership to speak of system parachains as “our parachains.” The simplest way to describe it is centralization of power.

When I left Parity, I was optimistic. The narrative was “let’s decentralize.” Those were the days of the “Decentralized Futures” program. OpenGov was new and exciting. The Polkadot spec was still evolving (at least in theory). The Fellowship was becoming functional. Tomaka was progressing on an alternative full node. JAM was coming — this time backed by a more rigorous spec. A new, light-client-friendly JSON-RPC spec was emerging. And since the main maintainer of PJS had vanished, it felt like we’d be forced to replace legacy RPCs quickly.

For a moment, it seemed like the ecosystem could finally take off: standardization, decentralization, openness, a brighter future. Everything was to be done, and everything was possible.

I thought we were at an inflection point. I thought Parity would become leaner over time. I also wanted to prove to other teams stuck inside Parity that meaningful, coordinated, decentralized work was possible from the outside.

In hindsight, that optimism looks naive. But even so: the PAPI team can honestly say we worked relentlessly to push Polkadot in that direction, and we did it with meaningful support from individuals inside Parity as well.

What happened afterward is… not what we anticipated. I won’t unpack the entire evolution here, but it’s fair to say the broader trajectory went in the opposite direction. The why/how is a longer conversation, and for now I’ll keep my personal analysis to myself.

Still, I’m extremely proud of what our team delivered over these two years.

We tried to be an example of how to engage with OpenGov responsibly. We started with a small but ambitious two-month proposal, which (naturally) was met with skepticism. I remember reading the first feedback from CD and thinking: “Good, we’ll prove it.”

And we did. Out of nowhere, @Kheops (whom we didn’t even know at the time) built kheopswap. We released PAPI v1. Things started to take off.

By our second proposal, we added @carlosala, who joined at the perfect time so that @voliva and I could attend the Singapore PBA edition, where we both graduated with distinction (and to be clear, we did not charge the treasury for the time we spent learning).

Momentum continued. Then a wild @tien appeared and began building genuinely cool things on top of PAPI. On our next proposal we tried to grow again, but we soon realized a developer we hired was not delivering value proportional to the cost. It would have been easy to bury that mistake. We chose not to. We fired the dev, lost a friend in the process, and returned funds to the Treasury.

We have always tried to treat Treasury funds with the highest level of responsibility. Those funds represent inflation, which means we have an obligation to make sure the value we deliver exceeds the opportunity cost of not burning those tokens.

This leads directly to the next topic.


Our hourly rates

From the outside, our hourly rates may look high. We believe they are not only defensible but cost-effective relative to outcomes and risk.

Outcomes

Look at outcomes first, then cost.

If you compare what we have delivered to what other organizations would have spent to deliver similar results, it should be clear that our value-per-€ is strong.

A concrete example: Parity spent ~2 years attempting to deliver the CAPI initiative. How much did Parity spend producing what ultimately became a large pile of vaporware that practically no one could use? Almost certainly far more than the total amount the PAPI team has received from the treasury since inception.

If you doubt that, consider: for ~2 years Parity paid very high salaries to a team that at its peak had 6 developers. On top of that, there was an Apps team intended to dogfood the library. But since the library wasn’t usable, that team was perpetually blocked, producing feedback loops that went nowhere. How much money was spent for how long without validating the core hypothesis early? It was obviously substantial, and again, likely far above our full cost to date.

And before anyone says “that was before Pierre Aubert,” I’ll just note that dysfunction has not been limited to a single era. Examples that many of you will recognize:

  • Polkadot App was promised for Q3 2024…
  • PDP: it was moving, then canceled mid-way. How much was spent? And why is the code not public so others can continue?
  • Revive: complete fiasco. We got the worst of both worlds — not fully EVM compatible, difficult for Rust devs — and it’s not like those tradeoffs were unforeseeable.
  • Facade: let’s pretend it never happened.
  • PixelProof: I’ll refrain from commenting (for now).
  • RFC launcher: it was never used.

My position is simple: Parity, as an organization, has repeatedly demonstrated that larger budgets do not reliably translate into better outcomes.

So if we’re talking about value-per-dollar, I strongly believe PAPI outpaces Parity by orders of magnitude.

We can also compare outcomes against other OpenGov initiatives. One example: LimeChain’s proposal (which, unless I’m mistaken, had W3F endorsement) to build a PAPI dev-console requested $225K. We spent multiple calls offering them guidance — free consulting — and they ended up “heavily inspired” by what @tien had built with DOTConsole. They delivered something quite disappointing, and then returned with another proposal asking for an additional $172K.

We built our dev-console largely as “extra delivery,” often after hours. Only in the last OpenGov proposal did we add budget for maintaining and improving it. And as of today, our console is the only one that supports signing with custom signed extensions, or signing transactions directly with Ledger, Polkadot Vault, or WalletConnect. It’s also the only dev-console that can display all kinds of signed extensions present in an extrinsic. How much has the treasury spent on our developer console? €96K, and it delivers significantly more.

Beyond the core libraries/SDKs, here is a quick overview of tangible outputs we’ve shipped:

On top of this, we are consistently responsive to our users, many of whom are Parity developers. We’ve written technical posts sharing learnings and made upstream contributions.

So when “Polkadot leadership” says our rates are inflated, my response is: show me one Parity team that consistently delivers comparable value on a yearly budget lower than ours. You won’t find it.

Or asked differently: if Parity handled this initiative internally, does anyone truly believe they could deliver this much value for the same spend? I don’t.

Know-how / specialization

How many developers in the ecosystem can contribute across the full Polkadot stack — low-level TS libraries, PolkadotJS, smoldot, Polkadot-SDK runtimes, the node, Chopsticks — in real terms, not in theory? I’m not sure there is a single developer inside Parity who can contribute across the stack in the way we do. Interestingly, there are teams outside Parity (e.g., within Acala) that can. That’s part of why I believe the ecosystem has more hope outside Parity than inside it.

Job stability and commitment

Funding via OpenGov is not the same as an SLA-backed contract. You never know what the governance arena will look like when you submit the next proposal. Will a new whale start calling the shots? Will DVs still value the work? Will the W3F decide to centralize control further?

To operate responsibly, you must maintain enough runway to survive delays, sometimes months, between proposals. We’re living that reality right now (more below).

Despite opportunities to pivot elsewhere, we’ve been laser-focused on Polkadot. We’ve doubled down on the ecosystem, and even invested our own resources to hire external teams to deliver valuable pieces that Polkadot needed but Parity neglected for years. We’ve put significant funds of our own into these initiatives because we genuinely want Polkadot to succeed.

Extra time and availability

While some teams (I won’t name them) inflate hours in proposals, we often do the opposite. We also ensure we never all take vacation at once, because we believe at least one of us must stay responsive when bugs appear, runtime upgrades break things, etc.

So yes, we’re going to stand our ground. Unless someone provides clear arguments for why our hourly rates (unchanged over two years despite inflation) are unjustified, we will keep defending them. Comparing our rates to Parity employee rates is simply not a valid comparison; we are not the same kind of cost structure, risk profile, or delivery model.


Everything above is context so you can understand what is happening with our latest OpenGov proposal.

Current OpenGov proposal

On November 21st, 2025, alarms went off when we discovered that @tien’s proposal had been nayed in extremis by the W3F. It didn’t make sense. We made inquiries and got worried. It looked like the W3F was moving toward full control of OpenGov: closing (some) bounties, naying Tien’s proposal based on guidelines that had just been published, and more. In the end, Tien submitted a new proposal aligned with the W3F, and it passed, but the signal was clear.

We had funding through the end of 2025, so we had to begin our next proposal quickly. In early December, we reached out to the W3F Governance Team to begin negotiations.

The first thing we were told: a hard requirement was to have Parity sign-off, and we should ensure nothing in our proposal fell outside enabling Parity’s new efforts. That was strange. Parity is an important stakeholder, but not the only one. Our stakeholders also include wallets, indexers, parachains, dapp developers, tooling teams, and more. We initially assumed that “useful for Parity” would also be broadly useful. That assumption turned out to be wrong.

We also couldn’t understand why we were being subjected to such scrutiny. We’ve tried to be a model OpenGov participant. Our proposals have passed with overwhelming support, we communicate consistently, we overdeliver, and stakeholders are happy. Was this really necessary?

We were told: “We have guidelines, we can’t make exceptions. You must comply. And the committee wants to ensure your work moving forward is aligned with Parity’s new efforts.”

Meanwhile, we had been fighting uphill for two years to improve Polkadot so decentralized applications can actually be shipped in a robust way. We repeatedly asked Parity to implement RFC-9. We repeatedly asked Parity and W3F to help promote the new (light-client-friendly) JSON-RPC APIs and define a path to deprecate the legacy ones. Not only were we ignored, Parity went as far as publicly announcing they would keep maintaining PJS with no clear plan to migrate away from legacy RPCs, when the responsible move would have been to announce a clear sunset date.

We also raised the importance of addressing critical issues (example 1, example 2, among many others). These aren’t minor. They matter if we truly want decentralized applications to succeed.

So when Parity announced a focus on “decentralized products,” the obvious question was: shouldn’t they come to us to ask what’s missing and how to enable that future? Why are we the ones pitching? Why are we being asked to narrow scope until it only matches Parity’s internal initiatives?

Nevertheless, we played along. We had countless meetings with people who did not fully understand the importance of parts of our proposal. From day one, the goal was consistent: reduce scope aggressively. Remove anything not directly related to Parity’s “Host API initiative.” “Unnecessary” (in their view) items like the Staking SDK: remove. Maintenance of the Staking Optimizer: remove. Reduce everything to the minimum three developers working full-time can deliver in one year.

So we did. We fought hard just to include modest budget to consolidate the PAPI console, because the PJS console is dying and ours is still in beta. We spent significant time educating our Parity contact about why certain pieces mattered.

After more than a month of back-and-forth, just as we finished addressing all requested changes, that Parity contact was fired. We were stunned. Does the sign-off still count? We sent the final document to the W3F Governance team anyway, because it was already January 12 and we had been operating unfunded since January 1.

We then had a meeting where they asked for clarifications (which we provided), requested small changes (which we implemented), and told us the proposal would go to the committee that makes the final decisions. We reminded them we intended to go on-chain by the end of January, because two months of negotiation was already more than enough and we had been working the entirety of January unfunded. They told us they’d get back to us as soon as they had committee feedback.

So we waited.

Every few days we asked if there was news. Every time: “No, no news.” Meanwhile, the Staking Dashboard proposal went on-chain with committee green-light. That was hard to accept. Why prioritize that while we remained blocked without feedback?

Last-minute reaction and its consequences

At the end of January, after a full month of unfunded work, we restated our plan: we would submit on-chain Friday, January 30th, the deadline we had communicated from the start.

Then, the evening of Thursday, January 29th — hours before we planned to submit — we finally got committee feedback: they would not support our proposal because they considered our hourly rate too high. No justification, no explanation. They demanded a reduction to €50/hour (a 66% cut) and told us to continue negotiating until Parity provided final green-light.

We were shocked. We had been negotiating for two months, and every time we raised the hourly rate topic, we were told it was justified. The committee had recently approved Tien’s hourly rate of $125/hour, and Tien works fewer hours and at a less critical/complex part of the stack than we do. Even the team maintaining the Staking Dashboard had a higher hourly rate than what was now being demanded from us.

We replied with a polite email explaining:

  • our hourly rate had been discussed from the beginning and repeatedly affirmed,
  • we had worked all of January in good faith assuming compensation on that basis,
  • introducing a massive cut at the last minute was disingenuous and not a serious process.

We informed them we would proceed to submit the proposal anyway. If the committee changed its mind and allowed us to continue operating as proactive contributors (as we have successfully done for two years), great. If they nayed the proposal, we would not submit another in this form. We would remain open to clearly scoped work for teams that need our expertise.

The next day, while we were submitting on-chain, we received another email. Despite us stating we had reached a dead-end, they suggested they might consider €75/hour, still without any commitment.

We replied after submitting on-chain, again pointing out that this kind of last-minute re-anchoring is not conducive to good-faith negotiation. We asked them to evaluate the value we deliver and stop throwing arbitrary numbers without justification. We summarized the decision in two options:

  1. Enable the PAPI team to continue thriving and working proactively for Polkadot under a stable, reviewable process; or
  2. Watch us become a traditional consultancy and engage us only when you need specific, well-scoped work.

We also made clear that we understood from their recent communications that they’ve chosen to deprive us of the model that allowed us to operate as proactive contributors. We hope they reconsider what we see as a serious mistake, but at this point there is nothing further we can do to change that decision.


UPDATE: Just a few hours ago, even after we had asked them twice to stop sending random numbers, we received yet another email stating that their current view is that this scope is supportable at a cap of ~$450K total (€70/hr). Once again, no rational justification for that hourly rate was provided.

We will not accept that. We know our worth.

The future of Polkadot-API

If, despite our track record, our success under the decentralized OpenGov model (every one of our proposals passed with >99.99% support), and the overwhelming stakeholder support for our current proposal (including Parity developers across teams, wallets, indexers, tooling teams, and others), the W3F Governance Team doesn’t support it, then the new centralized and opaque committee, enabled by the W3F’s opacity, will force us to pivot.

By removing the incentives and stability that allowed us to ship proactively, they are effectively choosing a different operating model for PAPI. Unless there is a major change of heart, we will shift toward consultancy work. And because we have to pay bills, we will also offer services to other ecosystems.

What does that mean concretely?

  • We will not abandon the project.
  • But we will stop making upstream contributions.
  • Maintenance will happen mostly in our spare time.
  • We will not be able to remain as responsive as we have been.
  • We will not be able to provide the same team support, create/maintain new SDKs, or keep improving our dApps at the same pace.

Our focus will shift to the parts that add concrete value to clients.

If Parity, the W3F, or any other Polkadot stakeholder wants to approach us with clear deliverables and well-scoped work, we can discuss directly. We may or may not reach an agreement, but at least it would be a real contract model with real clarity.


To everyone who supported us and worked with us during these two years of funding from the Polkadot DAO: thank you. It has been an honor and a privilege.

Yours truly,
PAPI Team :heart:


Hope you guys get the recognition you deserve. I’ve used PAPI Console and Staking Optimizer, and the experience has been really great.


I really appreciate the effort to still submit the OpenGov proposal knowing it wouldn’t pass due to W3F’s objection. In the future, if anyone asks “who killed Polkadot?”, the answer will be clear, with immutable evidence on-chain. Our community members tried their best.


A €70/hr cap?

You should become a curator: they get paid $85 per hour for administrative tasks, with no real obligation to explain their work to the community. :joy:

Jokes aside, I wish you good luck, and hopefully W3F will eventually understand that Polkadot API is a necessary tool for the ecosystem.


I still remember Sub0 and Parity emphasizing a clear, product-first mindset. That’s why it’s deeply frustrating to see a lack of commitment to properly supporting a robust SDK, the very backbone of developer and user experience when entering a new ecosystem.

Instead, ongoing support continues to favor a JavaScript stack that, for many developers, has been a long-standing source of friction since day one.

Frankly, I’m disgusted.

Hopefully you can sort things out Josep.

  1. I’m not in a position to comment on what numbers make sense right now.
  2. These guys are actual 10x engineers. Developers working in this space will know how many obstacles there are to getting things addressed, and the PAPI team always somehow manages to do it in days, and better.
  3. I understand we’re in a rough patch right now and that hard measures need to be taken. But I’m confident it can be done in a way (with mutual understanding from all parties) so we still have the best cards in our hand when the good times return.
  4. I think we’ve done enough antagonising each other (bag-holders).
  5. More transparency is desperately needed right now, people tend to “fill in the gaps” with negativity, especially during these trying times.
  6. Kill all FUDders.

Thank you for the transparency and for sharing this update. I believe cases like this deserve visibility because they reveal structural issues in governance and funding processes.

There is one point I want to highlight, because it goes beyond this specific proposal and affects the ecosystem as a whole: work that is executed in good faith, requested (explicitly or implicitly), and delivered during prolonged negotiation or evaluation processes should be compensated, even if the final outcome of the proposal is negative.

Delays in decision-making, shifting requirements, and extended negotiation cycles are governance costs, and those costs should not be transferred entirely to builders and contributors.

Hopefully, discussions like this can help improve funding processes to become more predictable, transparent, and aligned with ecosystem realities.


Hi Josep,

I appreciate the post and your request of me.

I need to ask some hard questions. For both you and W3F.

  1. Since this has been 2 years of funding, do we know the number of users?
  2. Is there a lack of technical understanding on W3F’s side, or is this just about the cost of further development?
  3. Is there a clear path to funding just maintenance of the API, rather than further development of the SDK workflows library, until the need is appropriately determined?

Infrastructure, tooling, products: that is how the support flow works for maintaining and developing. A Parity YouTube video mentioned that 2-week sprints would be occurring within Parity. My assumption is that this started with the Smart Contract release. Are these teams putting your 2 years of work to the test right now, and is that the reason nothing outside of Parity is getting funded, @W3F?

I understand the frustration is real; I went through it myself for many years now, lol. I am trying to understand: is this failure, intent, or a lack of structured guidelines?

Additionally, I know the inflation rate will be changing come March. This may be a kick-the-can-down-the-road type of event; however, there are USD reserves, per my understanding… Clear justifications are required. I agree with @lilymendzdev on that!

Thank you for what you did, @josep. The quality and amount of work you delivered is immaculate, and it would be in any ecosystem. I hope I can work with you in the future.

Unfortunately, I spoke to a lot of people who are still in the ecosystem, and even more who have already left, and your experience is in line with that of everybody else I spoke to.

W3F promising something, then last minute breaking all the promises, killing the projects, leaving people with obligations hanging in the air, left with no choice or bad choice…

It happened to us. It happened to you, it happened to basically everybody.

The fact that a single entity that can’t be trusted on its word, and that is acting this erratically, is controlling Polkadot governance is concerning.


First off, I do think that PAPI is useful and well written and I voted for the proposal.

Unless someone provides clear arguments for why our hourly rates (unchanged over two years despite inflation) are unjustified, we will keep defending them

No one seems to be addressing the elephant in the room: the cost of software has supposedly dropped by 90%.

If the cost of software has dropped off a cliff then it makes sense that the hourly rate will also decrease.

For a well-maintained, well-written codebase like PAPI, it will be trivial for people to add features using AI. Perhaps this is why the web3 foundation voted against your proposal.

Thanks for the vote and for saying you find PAPI useful.

A couple of clarifications, because I think there’s a misunderstanding in your reply:

1) We never refused to negotiate rates

We have always been open to negotiating rates and/or adapting scope to fit constraints.

What actually happened is:

  • We spent ~2 months negotiating with multiple stakeholders assigned to the process.
  • Over many meetings/exchanges, we iterated on scope, budget, and conditions to match what we were asked for.
  • During those discussions we explicitly asked whether there was a hard maximum budget or hard hourly-rate cap, so we could size the scope correctly.
  • We were told there was sufficient budget to cover the team size we proposed (already reduced from our original plan), and that our rates were considered reasonable.
  • Then, late on Jan 29, after we had already been working for a full month without funding and hours before the stated “end of January” on-chain deadline, a request appeared to cut costs by ~66%, without any accompanying rationale or scope redefinition.

That’s the core issue: not “we won’t negotiate,” but “a drastic last-minute change with no reasons, no direct access to the decision-makers, and no workable path to adjust scope accordingly.”

2) “AI made software cheaper” doesn’t mean senior architecture work should be dramatically cheaper

AI can reduce the cost of routine throughput (boilerplate, straightforward features, mechanical refactors, basic glue code). But the limiting factor in projects like this usually isn’t “how fast can we type code.”

The scarce, expensive skill is architecture and boundary discipline:

  • clear ownership
  • stable interfaces
  • APIs that don’t leak
  • design choices that keep a system coherent as it grows

When boundaries are weak, AI tends to accelerate chaos (more changes, faster, with less shared understanding), not fix it. When boundaries are strong, AI can potentially increase leverage, but it still needs experienced engineers setting direction, reviewing, and protecting the design.

We’ve seen a concrete example of this dynamic over the last year in the PJS codebase: applying “AI-assisted velocity” (with actually very competent engineers IMO) on top of unclear/eroding boundaries doesn’t produce a healthier system… it tends to produce more churn, more inconsistency, and more cleanup work.

3) “For a well-maintained codebase like PAPI, it will be trivial to add features using AI”

Respectfully, this doesn’t match how complex systems evolve in practice.

Even on excellent codebases, AI is only “trivial” when:

  • the feature is shallow,
  • the design constraints are obvious,
  • and the person driving the work already understands the architecture well enough to guide tradeoffs.

Without that understanding, you can absolutely turn a clean codebase into a mess over time — not because the initial design was bad, but because changes were made without respecting the original boundaries and invariants.

Believe it or not, the second law of thermodynamics applies to software: “software entropy” is a well-studied phenomenon.

The smoldot maintenance work is a good illustration. Yes: AI can help with small fixes and isolated changes. But once you get into genuinely complex work, what matters is not “having AI,” it’s having someone with enough seniority and architectural understanding to direct the work correctly. In practice, Parity ended up asking us to reach out to @tomaka because they needed that level of guidance to get certain pieces over the line. In other words: they were not able to get that piece of work done with their own senior devs using different AI tools.

So the statement “well-maintained + AI = trivial incremental improvements” is complete nonsense for complex systems and complex codebases.

4) The irony: the “AI should make this cheaper” argument fits routine maintenance best

Ironically, the place where “this should be cheaper because AI exists” fits best is routine maintenance and incremental improvements on a functioning product, exactly the kind of work where much of the effort is throughput and small edits.

Which is why it’s a bit ironic to see $66k approved for 6 months ($132k/year) for ongoing work on the Staking Dashboard, while simultaneously arguing that specialized architectural/long-term stewardship work should be discounted heavily “because AI.”


I guess the elephant in the room is a magical one. The blog post you mentioned claiming a 90 percent drop in software cost comes from a personal blog where the author shares anecdotal experience, not data or industry studies, and it is hopefully just clickbait.

Even if AI coding tools can dramatically speed up some tasks, that mostly reduces accidental complexity, like repetitive coding, boilerplate, and tooling overhead, while the essential complexity of real E-type systems, such as understanding evolving requirements, designing correct architectures, integrating components safely, and ongoing maintenance, remains the main effort.

Framing speedups in implementation due to accidental complexity reduction as a 90 percent total cost drop is arbitrary fantasy. It conflates improvements in incidental work with the core hard problem of building and evolving real software.


I simply stated that the cost of software development has been reduced due to AI, and that the W3F may therefore expect proposal costs to come down relative to historical costs. I don’t think this is an outrageous opinion.


Sure, AI might cut developer hours or make you feel faster, but reality is way more nuanced than the marketing hype. There are very few serious, unbiased studies to help quantify the impact. For example, a relatively recent METR study on experienced open-source developers found that AI actually slowed them down 19% on complex real-world code, even though they felt more productive. Gains are far from guaranteed, and hourly rates depend a lot on the type of work and on the country: outsourcing can create gaps of up to 20x, and in some industries it has caused long-term disasters with quality, maintainability, and knowledge loss. A classic example outside software is Boeing’s 787 Dreamliner program, where outsourcing over 70% of production led to massive coordination breakdowns, years of delays, and huge cost overruns before work had to be brought back in-house to fix the problems. So yeah, AI can help with some tasks, but calling it a universal cost killer is pure fantasy.

How this all plays out in the market, fair or not, will depend on real needs, hype and the actual value provided. As Brooks put it in No Silver Bullet, there is no magic solution for the essential complexity of real software systems.


@christ, what makes your post outrageous is that you’re speculating on two different things at once:

  1. That the cost of software development has “dropped” due to AI even for highly specialized senior developers (which is our case). That’s, at the very least, a very debatable claim, and it’s certainly not something you can assume as a blanket truth.

  2. That this (already controversial) opinion of yours also happens to be the opinion of the secret committee members hiding behind the opacity that the W3F is providing them. Please keep in mind that they have never hinted that they feel this way, so we don’t know whether this is the reason.

What’s the point of speculating? I can also come up with a bunch of plausible theories, like the fact that they simply dislike me because I’ve been highly critical of many of the decisions they have taken in recent years.

I mean, what’s the point of speculating? For all we know, the committee could be a poorly configured AI agent. Who knows what’s hiding behind that curtain of opacity?

Is there anything that you know that we don’t? Did they reach out to you asking you to point out “the elephant in the room”? Are you a member of the committee?

Still, I want to point out (once again) that you are 100% missing the point: we have always been willing to negotiate. However, as we have pointed out MANY times, it’s not possible to negotiate with an opaque committee that, after ~2 months of back and forth, comes up with a drastic last-minute change with no reasons given, no direct access to the decision-makers, and no workable path to adjust scope accordingly.


this entire discussion feels disconnected from reality. the amount already spent and the level of adoption simply don’t line up with the enthusiasm in this thread.

after ~$1.25M already spent, the proposal now seeks another ~$1M in a tightening market. meanwhile, other teams are operating leaner or going unfunded, while this request equates to ~$80k a month for three people with limited demonstrated adoption.

the question that matters here is: which production teams or products rely on papi today as a core dependency, and how many would be materially delayed if this funding paused? from what i can see, the answer appears to be very few, if any.

separate from the funding numbers, the papi team’s repeated toxic behavior is a real problem. public name-calling, combative replies, and repeated fights with anyone who disagrees don’t stay contained. they undermine credibility across the ecosystem and put off real builders. adoption doesn’t grow in environments like that. continuing to fund this signals that results and professionalism are optional.

discipline is warranted here. i support w3f’s nay vote and pausing recurring spend until parity starts shipping products that may actually drive adoption.

sincerely,
a “useful idiot” <3


A bit off-topic, but I felt like commenting on the irony of starting with

this entire discussion feels disconnected from reality.

And ending with,

until parity starts shipping products that may actually drive adoption.

A lot of hopium there! That’s what I would actually call disconnected from reality. Polkadot is putting all its eggs in the same Parity basket and expecting a company that has never built products before to produce a unicorn in record time, but creating products the masses will want to use is very hard. I repeat: very hard. I sincerely hope I’m proved wrong.


i think you are conflating two separate issues.

saying parity needs to ship products that drive adoption is not putting all eggs in one basket. in a tightening treasury environment, the entity with the deepest protocol context and distribution leverage is currently best positioned to materially move the needle. that’s just how it works.

my point, which seems to be lost in your reply, is that allocating another ~$1m to papi after ~$1.25m produced no measurable shift in adoption is hard to justify.

with limited capital, allocation has to reflect probability of impact.

distractions aside, the core question remains the same. after significant treasury allocation, what measurable dependency exists today, and what materially breaks if funding pauses?


I did mention it was off-topic, but if you ask me about the Papi team, based on their record I would say they have a good (maybe better?) chance at creating usable stuff, and at a cost to the ecosystem much lower than what’s often spent inside companies like Parity.
And yes, the Papi team does have measurable impact (just like ink did). I can’t speak for other teams, but at least in our tiny ecosystem their tools help power real value-producing products (like Bloque) that will be greatly affected by their lack of funding.
