Can pallet-revive Help Improve Polkadot's UX?

TL;DR
Building DApps on Polkadot can result in suboptimal UX due to the lack of atomic multi-call flows where later actions depend on the state changes from earlier ones. Could pallet-revive (or more generally, smart contracts with access to runtime pallets) help solve this?


The Problem: Chained Transactions Kill UX

One of the biggest pain points when developing DApps on Polkadot is the inability to perform certain actions atomically. Often, we have to:

  1. Submit a transaction.
  2. Wait for its block-inclusion (and sometimes finalization).
  3. Extract some state from it (or read some events).
  4. Submit a second transaction.

This flow introduces friction and complexity that significantly harms user experience.


A Real Example: the RFP-launcher DApp (KSM RFP #2)

While building the RFP-launcher, we needed to do the following in one go:

  • Create a bounty.
  • Submit a referendum for that bounty.
  • Plus a few other related calls.

Ideally, all of this would happen in one transaction. But here’s what we ran into:

Option 1: Guessing the Bounty ID

Since bounty IDs are incrementally assigned, we could try to guess the next ID and use it in the same batch. Most of the time this works — but if someone else creates a bounty just before our transaction is included, everything breaks: we end up referencing the wrong bounty in the referendum.

Technically, we could detect this in the DApp and guide the user through a recovery flow. But that’s still a terrible experience.
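The race above can be made concrete with a small, std-only Rust sketch. This is a hypothetical model, not real pallet-bounties code: `Chain` and `create_bounty` are stand-ins that only capture the "incrementally assigned ID" behaviour described above.

```rust
// Hypothetical, simplified model of incremental bounty IDs -- not real
// pallet code. It only illustrates the race described above.
struct Chain {
    next_bounty_id: u32,
}

impl Chain {
    fn create_bounty(&mut self) -> u32 {
        let id = self.next_bounty_id;
        self.next_bounty_id += 1;
        id
    }
}

fn main() {
    let mut chain = Chain { next_bounty_id: 7 };

    // The DApp reads on-chain state and guesses the next ID...
    let guessed_id = chain.next_bounty_id;

    // ...but someone else's bounty lands first:
    let _interloper = chain.create_bounty();

    // Our batch now creates a bounty with a different ID than we guessed,
    // so the referendum in the same batch references the wrong bounty.
    let our_id = chain.create_bounty();
    assert_ne!(guessed_id, our_id);
    println!("guessed {guessed_id}, got {our_id}");
}
```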

Option 2: Sequential Transactions

Instead, we create the bounty in one transaction, wait for it to be included, extract the ID from the emitted event, and then send a second transaction with the rest of the operations. Safer, but introduces delay, complexity, and additional signing — not great UX either.


This Isn’t Just About Bounties…

This pattern pops up all over Polkadot: you want to conditionally chain actions where subsequent calls depend on freshly-updated state. Current options are either unsafe or clunky.


Can pallet-revive solve this?

If smart contracts deployed through pallet-revive could tap into other pallets’ functionality, we could deploy a contract that:

  • Creates the bounty.
  • Grabs the real ID immediately.
  • Submits the referendum referencing the correct bounty — all in one atomic call.

This would open up far more powerful transaction flows than are feasible today.
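To contrast with the guessing approach, here is a std-only Rust sketch of the atomic flow. Again, this is a hypothetical model (`Chain`, `create_bounty_and_referendum`, etc. are illustrative names, not real pallet-revive or ink! APIs); it only shows why running both steps inside one call removes the race.

```rust
// Hypothetical model (not real pallet-revive / ink! APIs): a contract that
// creates a bounty, reads back the real ID, and submits the referendum
// referencing it, all within one atomic call.
struct Chain {
    next_bounty_id: u32,
    referenda: Vec<u32>, // each referendum stores the bounty ID it targets
}

impl Chain {
    fn create_bounty(&mut self) -> u32 {
        let id = self.next_bounty_id;
        self.next_bounty_id += 1;
        id
    }
    fn submit_referendum(&mut self, bounty_id: u32) {
        self.referenda.push(bounty_id);
    }
}

/// The whole flow runs inside one call, so no other transaction can
/// allocate a bounty ID between the two steps.
fn create_bounty_and_referendum(chain: &mut Chain) -> u32 {
    let id = chain.create_bounty(); // the real ID, not a guess
    chain.submit_referendum(id);
    id
}

fn main() {
    let mut chain = Chain { next_bounty_id: 41, referenda: Vec::new() };
    let id = create_bounty_and_referendum(&mut chain);
    assert_eq!(chain.referenda, vec![id]);
    println!("referendum references bounty {id}");
}
```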


Is This Possible Today?

I might be wrong here, but my assumption is this isn’t currently viable due to how contracts and pallets differ:

  • FRAME pallets: use weight-based fee estimation (and run as Wasm).
  • Contracts (e.g., Ink! via revive): use gas-based metering (and run on PVM).

This separation likely prevents contracts from invoking pallet logic directly. I could be wrong, though… I really hope I am! But if Ink! (on pallet-revive) offered a safe interface to tap into other pallets, it could be a game-changer.


Is Anyone Exploring This?

Is there ongoing work in this direction? Could a smart contract system bridge this gap to allow conditional flows without race conditions or multi-transaction UX hurdles?

:folded_hands: Would love to hear if someone is exploring this or if there are design discussions happening around it.

cc: @Alex @alejandro @peterw


PS: Yes, I’m aware that in the bounty case we could theoretically refactor things — e.g., by making the bounty ID deterministic (say, a hash of the bounty info). But that’s a major refactor and doesn’t generalize well. The core issue remains: we need a clean way to express “do X, then Y based on X’s result” without breaking the UX.
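For what it’s worth, the "deterministic ID" refactor mentioned in the PS can be sketched in a few lines of std-only Rust. Everything here is illustrative: `BountyInfo` is a made-up type, and `DefaultHasher` merely stands in for whatever chain-grade hash function a real design would use.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Sketch of the "deterministic bounty ID" idea: derive the ID from the
// bounty's contents instead of a counter. DefaultHasher is a stand-in for
// a real chain-grade hash; the type and fields are hypothetical.
#[derive(Hash)]
struct BountyInfo<'a> {
    proposer: &'a str,
    value: u128,
    description: &'a str,
}

fn deterministic_id(info: &BountyInfo) -> u64 {
    let mut h = DefaultHasher::new();
    info.hash(&mut h);
    h.finish()
}

fn main() {
    let info = BountyInfo { proposer: "alice", value: 1_000, description: "RFP #2" };
    // Anyone can precompute the same ID off-chain before submitting,
    // so a batch can reference it without racing other users.
    let precomputed = deterministic_id(&info);
    let on_chain = deterministic_id(&info);
    assert_eq!(precomputed, on_chain);
}
```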

8 Likes

tl;dr: Yes, this will be possible.

That said, every runtime functionality needs to be made available manually to contracts by writing a pre-compile. A pre-compile is essentially a Solidity interface to a pallet or any other functionality in the runtime. Currently, we are planning to have them for assets, governance, XCM and staking. But adding new pre-compiles can be accomplished via a runtime upgrade.

Runtime code is unmetered. It is the pre-compile’s job to charge enough weight against the remaining gas of the contract’s transaction so that the operation is safe.
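That hand-off can be sketched with a toy, std-only Rust model. None of these names are real pallet-revive APIs; the point is just the pattern: charge the weight up front, and only then run the unmetered runtime logic.

```rust
// Toy model of the metering hand-off described above: runtime code is
// unmetered, so the precompile must charge the call's weight against the
// contract's remaining gas before running it. All names are illustrative.
struct GasMeter {
    remaining: u64,
}

#[derive(Debug, PartialEq)]
enum PrecompileError {
    OutOfGas,
}

impl GasMeter {
    /// Charge `weight` before running the (unmetered) runtime logic;
    /// fail the whole call if the contract cannot afford it.
    fn charge(&mut self, weight: u64) -> Result<(), PrecompileError> {
        if self.remaining < weight {
            return Err(PrecompileError::OutOfGas);
        }
        self.remaining -= weight;
        Ok(())
    }
}

fn precompile_create_bounty(meter: &mut GasMeter) -> Result<u32, PrecompileError> {
    const CREATE_BOUNTY_WEIGHT: u64 = 500; // would come from benchmarking
    meter.charge(CREATE_BOUNTY_WEIGHT)?;
    // ... the unmetered runtime call would happen here ...
    Ok(42) // pretend this is the new bounty's ID
}

fn main() {
    let mut rich = GasMeter { remaining: 1_000 };
    assert_eq!(precompile_create_bounty(&mut rich), Ok(42));
    assert_eq!(rich.remaining, 500);

    let mut poor = GasMeter { remaining: 100 };
    assert_eq!(precompile_create_bounty(&mut poor), Err(PrecompileError::OutOfGas));
}
```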

Would be great to have an example of this, or similar, in our documentation.

1 Like

Once we expose these features as precompiles, not only can you perform them atomically through a contract call, but you can also interact with them using any Ethereum frontend library. If you do this through a library like viem.sh, you’ll even get type safety from the TypeScript types inferred from the generated Solidity ABIs.

Note that a chain could integrate pallet-revive solely to enable these interactions—contract deployment and instantiation can be disabled in the pallet’s configuration.

I have so many questions…

  1. Are these precompiles tightly coupled to Solidity? Would Ink! and/or other languages that can compile to PVM be able to leverage these precompiles?

  2. Are these already deployed on Westend? If they aren’t… is there an ETA? Or can I try them somehow on a local/development chain?

  3. Would it be possible to see an example of a Solidity (or Ink!) contract that atomically does something as simple as what I described in here :folded_hands:?

  4. How will these pre-compiles deal with runtime upgrades? Will all contracts that reference the “old” pre-compiles break when there is a runtime upgrade that significantly changes interfaces and/or performs a state migration? :thinking:

  5. For real? Do you really think it will be possible to create safely-typed XCM interactions using TypeScript thanks to the Solidity ABIs that these precompiles will produce? I have a very hard time believing that… but I would love to be proven wrong, of course! Do you have an example of this :folded_hands:?

Precompiles will use the Solidity ABI, just like any other contracts.
As long as your contract’s language can encode/decode the Solidity ABI, you should be good to go. In Rust you can use the alloy crate to do that.
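As a minimal illustration of what "encoding the Solidity ABI" means (in practice you would use the alloy crates rather than doing this by hand): a static `uint256` argument is encoded as a single 32-byte, big-endian, left-zero-padded word. A std-only sketch:

```rust
// Minimal illustration of Solidity ABI encoding, done by hand for clarity
// (real code would use the alloy crates): a static `uint256` argument is
// a 32-byte big-endian word, left-padded with zeros.
fn abi_encode_uint256(value: u128) -> [u8; 32] {
    let mut word = [0u8; 32];
    // u128 covers the low 16 bytes; the high 16 bytes stay zero.
    word[16..].copy_from_slice(&value.to_be_bytes());
    word
}

fn main() {
    let word = abi_encode_uint256(255);
    // 31 zero bytes followed by 0xff.
    assert!(word[..31].iter().all(|b| *b == 0));
    assert_eq!(word[31], 0xff);
}
```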

We are planning to release the pallet-assets and XCM precompiles in 2506; Kian is working on staking precompiles here.

How will these pre-compiles deal with runtime upgrades? Will all contracts that reference the “old” pre-compiles break when there is a runtime upgrade that significantly changes interfaces and/or performs a state migration? :thinking:

The interface implemented by precompiles needs to be backward compatible so that it does not break existing contracts that use it. You can, however, update the implementation that lives in the runtime.

Obviously the type system in Solidity is not as rich as in Rust, so most likely what you will do is expose the most-used XCM operations as interface methods, and build the XCM inside these precompiles.
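The two compatibility rules in this reply (stable interface methods, swappable runtime implementation) can be sketched with plain Rust traits. This is only a model with hypothetical names; real precompiles are Solidity-ABI interfaces, not Rust traits.

```rust
// Illustrative model (hypothetical names, not a real precompile) of the
// compatibility rule above: the published interface only ever grows, and
// existing methods keep their signatures, while the implementation behind
// them can be rewritten in a runtime upgrade.
trait XcmPrecompileV1 {
    // A most-used operation exposed as a fixed method; the precompile
    // builds the actual XCM program internally.
    fn transfer_to_parachain(&mut self, para_id: u32, amount: u128) -> bool;
}

// Evolving the interface: add new methods via an extension instead of
// changing `XcmPrecompileV1`, so old contracts keep working.
trait XcmPrecompileV2: XcmPrecompileV1 {
    fn teleport_to_parachain(&mut self, para_id: u32, amount: u128) -> bool;
}

struct Runtime {
    sent: Vec<(u32, u128)>,
}

impl XcmPrecompileV1 for Runtime {
    fn transfer_to_parachain(&mut self, para_id: u32, amount: u128) -> bool {
        // The body (the "implementation that lives in the runtime") may be
        // rewritten in an upgrade as long as observable behaviour holds.
        self.sent.push((para_id, amount));
        true
    }
}

impl XcmPrecompileV2 for Runtime {
    fn teleport_to_parachain(&mut self, para_id: u32, amount: u128) -> bool {
        self.sent.push((para_id, amount));
        true
    }
}

fn main() {
    let mut rt = Runtime { sent: Vec::new() };
    // A caller written against V1 keeps working on a V2 runtime.
    assert!(rt.transfer_to_parachain(1000, 5));
    assert!(rt.teleport_to_parachain(2000, 7));
    assert_eq!(rt.sent, vec![(1000, 5), (2000, 7)]);
}
```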

Thank you for bringing up this topic, Josep; this is exactly what we have been working on with Pop Network via our treasury proposal.

To start, here you can find an example contract that interacts with the NFTs pallet, which you can deploy on our testnet: it queries the next collection ID and then creates a collection, all in one contract message. A contract message can interact with the runtime as much as it wants, whether queries or execution of extrinsics, but of course it has to respect the weight it uses and thus the fees.

We are currently working on converting our Pop API, which uses a Chain Extension, to a Precompile and making it ready for pallet-revive. Note that there is a difference in how we have designed the Pop API: one versioned “precompile” for the entire runtime, compared to the precompiles that will be deployed on Polkadot Hub, which will be one precompile per use case (interesting post about this here). In the coming days we will explain what we have built for Pop Network, and why, in more detail.

To the rest of your questions.

  1. No, any contract that can compile to PVM will be able to interact with these precompiles. However, precompiles have to use the sol ABI, and thus so do the contracts interacting with them. Please correct me if I’m wrong. ink! will be compatible with the sol ABI.

  2. Westend I’m not sure but you can deploy the contract shared above on Pop Network’s testnet.

  3. “”

  4. As mentioned in @Cyrill’s post here, precompiles should never change their interface; instead, a new one has to be created if the interface needs to change. By contrast, Pop API’s approach has been to create a versioned interface.

On making the Polkadot Hub a rich platform for smart-contract-based solutions: I invite everyone to check the list of pallets that these upcoming contracts will coexist with. I think the community should feel empowered to create pre-compiles that cover not only the pallets initially planned by Parity (assets, NFTs, governance) but also the other pallets in that list that may be useful for contracts, and, beyond that, to suggest and build pallets that the Polkadot Hub will need to compete in the wider blockchain industry.

4 Likes

Why not ink! ?

Ink! will use this kind of crate behind the scenes to encode/decode messages; that’s what I meant.

1 Like

I totally get the importance of backwards compatibility when exposing precompiles. But I still find it hard to see how this would work reliably in practice.

Polkadot pallets are constantly evolving. They migrate state, introduce new abstractions, move their state across chains, and rarely (if ever) aim for stable or standardized external APIs. So how can we realistically expect a set of precompiles, which are bound to those pallets, to remain stable and backwards compatible over time?

In practice, there seem to be two unappealing options:

  1. Expose only a tiny, minimal interface – so minimal that it’s barely useful.
  2. Expose more meaningful functionality, but then risk coupling to internal implementation details and facing compatibility issues when things inevitably change.

Take the staking precompiles that Kian is working on as an example. They look very clean and stable - but also extremely minimalistic. So minimal, in fact, that you wouldn’t be able to build anything like a proper staking dashboard on top of them.

For instance:

  • How would a nominator know when they need to rebag themselves (or put themselves in front of another nominator) in order to earn rewards?
  • How can they evaluate validator performance before nominating?
  • How do they know when the next reward payout is happening?
  • What about accessing basic stats like total DOT staked, average reward rate, or validator oversubscription?

All of this requires rich, stateful insight, but you can’t feasibly expose that through Solidity-based precompiles… not without some sort of escape hatch giving direct access to state outside pallet-revive. However, that’s completely infeasible because the state outside pallet-revive is SCALE-encoded, so a Solidity interface can’t provide such an escape hatch… :person_shrugging:
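To make the encoding mismatch concrete, here is a std-only sketch contrasting the two wire formats (simplified: only SCALE compact's one- and two-byte modes are shown). The same number looks completely different on each side, which is why a Solidity-facing precompile cannot simply read SCALE-encoded pallet state.

```rust
// Contrast of encodings (simplified; larger SCALE modes omitted):
// SCALE's variable-length compact integers vs the Solidity ABI's fixed
// 32-byte words.
fn scale_compact_u32(value: u32) -> Vec<u8> {
    match value {
        // 1-byte mode: low two bits 0b00, value in the upper six bits.
        0..=0b0011_1111 => vec![(value as u8) << 2],
        // 2-byte mode: low two bits 0b01, little-endian.
        64..=0b0011_1111_1111_1111 => {
            (((value << 2) | 0b01) as u16).to_le_bytes().to_vec()
        }
        _ => unimplemented!("4-byte and big-integer modes omitted for brevity"),
    }
}

fn abi_word_u32(value: u32) -> [u8; 32] {
    // Solidity ABI: always a 32-byte big-endian, zero-padded word.
    let mut word = [0u8; 32];
    word[28..].copy_from_slice(&value.to_be_bytes());
    word
}

fn main() {
    // The same number, two very different wire formats:
    assert_eq!(scale_compact_u32(42), vec![42 << 2]);    // 1 byte
    assert_eq!(scale_compact_u32(69), vec![0x15, 0x01]); // 2 bytes, LE
    assert_eq!(abi_word_u32(42).len(), 32);              // always 32 bytes
}
```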

And therein lies the fundamental limitation: if we want precompiles that do anything meaningful (beyond ERC20-style standards), we run into serious obstacles due to encoding formats, evolving pallet logic, and the lack of stable interfaces.

So, circling back to the original question — can pallet-revive help improve FRAME-specific UX flows? — I’d say, unfortunately, the answer is a pretty clear no, at least for now.

That said, I’m absolutely in favor of building a few rock-solid, standardized precompiles - especially for asset interoperability with Ethereum standards (ERC-20, ERC-721, etc.). That’s crucial and highly valuable.

But I think it’s unrealistic to expect this approach to scale to complex, dynamic, state-driven flows that many real-world DApps need.

Hopefully, there will come a day when the whole runtime runs on PVM, and perhaps then we will be able to have composable contracts that natively interact with the rest of the chain’s functionality… A man can dream! :crossed_fingers:

1 Like

You are just asking for two conflicting things: a very rich, detailed API surface that is also absolutely stable. That is just not possible; you have to pick one, and contracts pick stability. And just because it doesn’t expose every little detail doesn’t mean it is useless. You can always cautiously expand the functionality; evolving it is no problem as long as we don’t change existing behaviour.

The VM our runtime runs on has absolutely NOTHING to do with it. It is a low-level detail that is completely opaque to off-chain code.

I think Josep’s sentence about the “two unappealing options” wasn’t a wish-list of both things; he meant that each option, on its own, is unattractive:

  1. Expose only a tiny, minimal interface: Great for safety, but not so great for builders. If the precompile shows very little, contracts still need kludgy off-chain workarounds or multi-tx flows—exactly the UX pain he was highlighting.
  2. Expose richer functionality via precompiles: Great for short-term Solidity onboarding, dangerous for Polkadot’s long-term evolution. Because precompile ABIs must stay frozen, every future runtime breakthrough would have to preserve yesterday’s interface—or fragment state across new addresses. Cyrill summed this up well in his post (“we do not break contract-space”).

Polkadot’s edge has always been its freedom to evolve rapidly at the protocol layer; locking system pallets behind immutable ABIs risks dulling that edge.


The problem

To me, the technical debate points to a deeper problem: communication.

  • A while back the community agreed on the Plaza strategy: Polkadot pallets + EVM hub. Back then, nobody could map every downstream consequence.
  • Inside Parity, they have most probably been wrestling with real-world constraints, trade-offs, and deadlines. From the outside, most of us only glimpse the finished decision: “Polkadot Hub will launch with NFT, token, XCM and governance precompiles.”
  • By the time external teams understand the fallout—UX trade-offs, ABI lock-in, economic impacts—those decisions already feel baked.

In short: the rest of the ecosystem can’t help course-correct if we don’t see the course map until the ship has sailed.


A way forward

Polkadot is a decentralised network of teams—from one-person dev shops to VC-backed companies—who all care deeply about its success. If we treat Hub design as an open RFC process instead of an internal deliverable, we can:

  • Crowd-source edge-case feedback before ABIs freeze.
  • Share the rationale behind tough calls (security, resourcing, timelines).
  • Align the Hub’s feature set with real DApp builders’ needs, not just our guesses.

Polkadot Hub can be a huge win—if we keep the conversation two-way. Together we should hammer out which precompiles truly belong in Hub v1, which can stay experimental (let’s use Paseo and Kusama!!!), and how we’ll revisit the set without kneecapping innovation down the road.

1 Like