Why Polkadot-API?

Thanks for engaging in this conversation, @sinzii!

This is incorrect. PAPI also caches and reuses data so repeated operations are served from cache. The primary reason PAPI performs better in the benchmark is simpler: Dedot repeatedly downloads metadata for all historical blocks, whereas PAPI only fetches metadata when needed. When we hit a block that’s not connected to the tip, we first check whether we already have its metadata and avoid redundant downloads. Re-downloading metadata every time an operation targets an old block, as Dedot currently does, incurs substantial overhead, which your own benchmark highlights.
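
To illustrate the idea, here is a minimal sketch of per-runtime metadata caching (hypothetical names, not PAPI's actual code): a historical block only triggers a metadata download when its runtime hasn't been seen before.

```ts
// Hypothetical sketch of per-runtime metadata caching (illustrative only).
// A historical block only triggers a download when its runtime
// (identified here by specVersion) hasn't been cached yet.
const metadataCache = new Map<number, Uint8Array>()

async function getMetadataAt(
  blockHash: string,
  getSpecVersion: (hash: string) => Promise<number>,
  downloadMetadata: (hash: string) => Promise<Uint8Array>,
): Promise<Uint8Array> {
  const specVersion = await getSpecVersion(blockHash)
  const cached = metadataCache.get(specVersion)
  if (cached) return cached // same runtime => same metadata, no re-download

  const metadata = await downloadMetadata(blockHash)
  metadataCache.set(specVersion, metadata)
  return metadata
}
```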

If it helps, feel free to port the PAPI approach, or ask follow-ups about our strategy for avoiding unnecessary metadata fetches. Let’s fix the root cause rather than invent a nonexistent “memory for availability” trade-off.

Choosing to download additional data that isn’t needed, slowing things down and increasing memory pressure, doesn’t look like a good balance from a performance standpoint.

If that were strictly true, there wouldn’t be a public clearCache API.

To clarify: I’m not criticizing supporting it; I’m criticizing adopting it as Dedot’s own public signing interface. That cements long-known limitations: difficulty creating extrinsics for modern chains, heavier signers, and awkwardness around custom signed-extensions, among others. Supporting legacy interfaces for compatibility is fine; elevating them to your primary public surface is a missed opportunity for a “modern” library.

Exactly. Which is why modern libraries should converge on better, interoperable interfaces rather than re-entrench old ones.

Agreed! That’s what we’re doing. It would be great to see Dedot decouple from the PJS signer interface and join this work.

The new JSON-RPC API mega Q&A came out a few months before Dedot’s development started, alongside other posts and public discussions. Library authors have a responsibility to track and leverage upstream changes, precisely to avoid repeating past mistakes.

It’s more opinionated than it appears, which makes it non-interoperable and coupled to Dedot internals. Neither generic JSON-RPC nor the modern Polkadot JSON-RPC defines a standard “subscribe” primitive; subscriptions are conventions built on notifications. Returning a Promise from your subscribe function also leaks an implementation detail about how Dedot internally manages subscriptions.

By contrast, the JSON-RPC interface we proposed is deliberately minimal and library-agnostic. Smoldot’s public Chain interface is another interoperable example. You can translate between smoldot’s and PAPI’s easily (because they are both interoperable).
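
For reference, this is roughly the shape we’re talking about (a paraphrased sketch; the canonical types live in the provider package): a function that receives a callback for incoming messages and returns a connection with `send` and `disconnect`. Everything else — subscriptions, retries — is a convention layered on top of those raw messages, which is exactly what keeps it interoperable.

```ts
// Paraphrased sketch of the minimal, library-agnostic provider shape.
type JsonRpcConnection = {
  send: (message: string) => void
  disconnect: () => void
}
type JsonRpcProvider = (onMessage: (message: string) => void) => JsonRpcConnection
```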

In fact, we could use any PAPI provider with Dedot’s modern client by “impersonating” a smoldot chain. The reverse isn’t feasible with Dedot’s current provider API.
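
As a simplified sketch of what that impersonation could look like (assuming the provider shape above and smoldot’s `sendJsonRpc`/`nextJsonRpcResponse`/`remove` surface; disconnect/error handling omitted):

```ts
// Simplified sketch: expose a PAPI-style provider through a smoldot-like
// Chain surface by buffering incoming messages into a promise queue.
type JsonRpcProvider = (onMessage: (msg: string) => void) => {
  send: (msg: string) => void
  disconnect: () => void
}

const toSmoldotLikeChain = (provider: JsonRpcProvider) => {
  const buffered: string[] = []
  const waiting: Array<(msg: string) => void> = []

  const connection = provider((msg) => {
    const resolve = waiting.shift()
    if (resolve) resolve(msg)
    else buffered.push(msg)
  })

  return {
    sendJsonRpc: (rpc: string) => connection.send(rpc),
    nextJsonRpcResponse: (): Promise<string> =>
      buffered.length > 0
        ? Promise.resolve(buffered.shift()!)
        : new Promise<string>((resolve) => waiting.push(resolve)),
    remove: () => connection.disconnect(),
  }
}
```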

If you don’t like PAPI’s interface, proposing a simpler/better interoperable one (or adopting smoldot’s) would still be a win. We’re not fully satisfied with smoldot’s because disconnects require rejecting the latest Promise, it limits synchronous message bursts, and it’s harder to ensure consumers drain all yielded promises. Even so, we’d be willing to compromise on it.

This interface is missing several strengths the modern JSON-RPC provides: subscribing to all finalized blocks, tracking current best blocks, automatic recovery across reconnections, etc.
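
To make that concrete: over a raw provider, subscribing to finalized blocks is just the spec’s `chainHead_v1_follow` flow. A sketch (method and event names as I read them in the JSON-RPC spec; reconnection/recovery handling omitted — a real client re-establishes the subscription on `stop` events):

```ts
// Sketch: subscribe to finalized blocks via chainHead_v1_follow over a
// raw string-based provider. Recovery across reconnections omitted.
type JsonRpcProvider = (onMessage: (msg: string) => void) => {
  send: (msg: string) => void
  disconnect: () => void
}

const followFinalized = (
  provider: JsonRpcProvider,
  onFinalized: (blockHashes: string[]) => void,
) => {
  const connection = provider((raw) => {
    const msg = JSON.parse(raw)
    if (msg.method === "chainHead_v1_followEvent" && msg.params?.result?.event === "finalized")
      onFinalized(msg.params.result.finalizedBlockHashes)
  })
  connection.send(
    JSON.stringify({ jsonrpc: "2.0", id: 1, method: "chainHead_v1_follow", params: [true] }),
  )
  return connection
}
```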

And practically speaking, the modern client isn’t usable with Dedot today. As I wrote above:

This is exactly how a “modern” library inadvertently reinforces legacy JSON-RPC usage. Offering two clients, where one works and the other doesn’t, pushes users toward the legacy path. This is a solved problem: you can deliver a solid DX on the modern client.

It will. A recent example: at Polkadot People block height 2188447, the structure of the Identity.IdentityOf storage value changed. Dedot can’t detect these structural changes across runtime upgrades on the fly. Without a compatibility API, tools like https://diff.papi.how aren’t achievable. We’ve explained this in multiple places; denying it doesn’t make the limitation go away.
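
As a sketch of the kind of check a compatibility API enables (helper names invented for illustration, not PAPI’s actual API): before decoding `Identity.IdentityOf` at an arbitrary block, compare the entry’s type shape in that block’s metadata against the shape the generated descriptors expect, and surface the incompatibility instead of silently mis-decoding.

```ts
// Hypothetical sketch (names invented for illustration): detect that a
// storage entry's shape at a given block differs from the shape the
// generated descriptors expect, instead of silently mis-decoding it.
type TypeShape = unknown // structural description of the SCALE type

declare function shapeFromMetadata(
  metadata: Uint8Array,
  pallet: string,
  entry: string,
): TypeShape
declare function isCompatibleShape(expected: TypeShape, actual: TypeShape): boolean

async function decodeIdentityOfAt(
  blockHash: string,
  expectedShape: TypeShape,
  getMetadataAt: (hash: string) => Promise<Uint8Array>,
) {
  const metadata = await getMetadataAt(blockHash)
  const actualShape = shapeFromMetadata(metadata, "Identity", "IdentityOf")
  if (!isCompatibleShape(expectedShape, actualShape)) {
    // e.g. Polkadot People before/after block 2188447: same entry, new shape
    throw new Error("Identity.IdentityOf changed shape at this block's runtime")
  }
  // ...decode with the descriptors, knowing they match this runtime
}
```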

Regarding the JSON-RPC provider: we propose a super simple, minimalistic API, although for v2 we’ve realized it’s slightly better for performance if the payloads are passed already parsed. The two are essentially the same, though.
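
In other words, the v2 idea keeps the same shape but passes already-parsed objects across the boundary (sketch):

```ts
// Sketch: same minimal shape, but payloads cross the boundary as already
// parsed JSON objects instead of strings, avoiding redundant stringify/parse.
type JsonRpcProviderV2 = (onMessage: (message: Record<string, unknown>) => void) => {
  send: (message: Record<string, unknown>) => void
  disconnect: () => void
}
```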

We’re open to either, or to a third option, so long as it’s simple, performant, and decoupled from any single library’s internals.

That agreement exists. The new interface we proposed is being added into PJS.
