Can pallet-revive help improve Polkadot's UX?

TL;DR
Building DApps on Polkadot can result in suboptimal UX due to the lack of atomic multi-call flows where later actions depend on the state changes from earlier ones. Could pallet-revive (or more generally, smart contracts with access to runtime pallets) help solve this?


The Problem: Chained Transactions Kill UX

One of the biggest pain points when developing DApps on Polkadot is the inability to perform certain actions atomically. Often, we have to:

  1. Submit a transaction.
  2. Wait for it to be included in a block (and sometimes finalized).
  3. Extract some state from it (or read some events).
  4. Submit a second transaction.

This flow introduces friction and complexity that significantly harms user experience.


A Real Example: The RFP-launcher DApp (KSM RFP #2)

While building the RFP-launcher, we needed to do the following in one go:

  • Create a bounty.
  • Submit a referendum for that bounty.
  • Plus a few other related calls.

Ideally, all of this would happen in one transaction. But here’s what we ran into:

Option 1: Guessing the Bounty ID

Since bounty IDs are incrementally assigned, we could try to guess the next ID and use it in the same batch. Most of the time this works — but if someone else creates a bounty just before our transaction is included, everything breaks: we end up referencing the wrong bounty in the referendum.

Technically, we could detect this in the DApp and guide the user through a recovery flow. But that’s still a terrible experience.

Option 2: Sequential Transactions

Instead, we create the bounty in one transaction, wait for it to be included, extract the ID from the emitted event, and then send a second transaction with the rest of the operations. Safer, but introduces delay, complexity, and additional signing — not great UX either.


This Isn’t Just About Bounties…

This pattern pops up all over Polkadot: you want to conditionally chain actions where subsequent calls depend on freshly-updated state. Current options are either unsafe or clunky.


Can pallet-revive solve this?

If smart contracts deployed through pallet-revive could tap into other pallets’ functionality, we could deploy a contract that:

  • Creates the bounty.
  • Grabs the real ID immediately.
  • Submits the referendum referencing the correct bounty — all in one atomic call.

This would open up far more powerful transaction flows than what’s feasible today.
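
In pseudo-Solidity, the dream flow would look something like the sketch below. Everything in it (the precompile interfaces, their addresses, and every function name) is hypothetical; it is only meant to illustrate the shape of the thing:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical pre-compile interfaces. Neither the signatures nor the
// addresses exist today; they only illustrate the desired atomic flow.
interface IBounties {
    /// Creates a bounty and returns its real, freshly assigned id.
    function proposeBounty(uint128 value, bytes calldata description)
        external
        returns (uint32 bountyId);
}

interface IReferenda {
    /// Submits a referendum approving the given bounty.
    function submitApproveBounty(uint16 track, uint32 bountyId)
        external
        returns (uint32 referendumIndex);
}

contract RfpLauncher {
    // Illustrative fixed addresses at which the runtime would expose
    // the pre-compiles.
    IBounties internal constant BOUNTIES =
        IBounties(0x0000000000000000000000000000000000000901);
    IReferenda internal constant REFERENDA =
        IReferenda(0x0000000000000000000000000000000000000902);

    event RfpLaunched(uint32 bountyId, uint32 referendumIndex);

    function launchRfp(
        uint128 value,
        bytes calldata description,
        uint16 track
    ) external {
        // 1. Create the bounty and read back its real id: no guessing,
        //    no race against other bounty creators.
        uint32 bountyId = BOUNTIES.proposeBounty(value, description);

        // 2. Reference that id in the very same transaction. If any
        //    step reverts, the whole flow rolls back atomically.
        uint32 refIndex = REFERENDA.submitApproveBounty(track, bountyId);

        emit RfpLaunched(bountyId, refIndex);
    }
}
```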


Is This Possible Today?

I might be wrong here, but my assumption is this isn’t currently viable due to how contracts and pallets differ:

  • FRAME pallets: use weight-based fee estimation (run as Wasm).
  • Contracts (e.g., Ink! via revive): use gas-based metering (run on PVM).

This separation likely prevents contracts from invoking pallet logic directly. I could be wrong, though… I really hope I am! But if Ink! (on pallet-revive) offered a safe interface to tap into other pallets, it could be a game-changer.


Is Anyone Exploring This?

Is there ongoing work in this direction? Could a smart contract system bridge this gap to allow conditional flows without race conditions or multi-transaction UX hurdles?

:folded_hands: Would love to hear if someone is exploring this or if there are design discussions happening around it.

cc: @Alex @alejandro @peterw


PS: Yes, I’m aware that in the bounty case we could theoretically refactor things, e.g., by making the bounty ID deterministic (say, a hash of the bounty info). But that’s a major refactor and doesn’t generalize well. The core issue remains: we need a clean way to express “do X, then Y based on X’s result” without breaking the UX.

12 Likes

tl;dr: Yes, this will be possible.

That said, every piece of runtime functionality needs to be made available to contracts manually by writing a pre-compile. A pre-compile is essentially a Solidity interface to a pallet or any other functionality in the runtime. Currently, we are planning to have them for assets, governance, XCM and staking. But adding new pre-compiles can be accomplished via a runtime upgrade.

Runtime code is unmetered. It is the pre-compile’s job to charge enough weight against the remaining gas of the contract’s transaction so that the operation is safe.
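
As a rough illustration of what such a pre-compile looks like from the contract side (a hypothetical sketch, not the actual planned API):

```solidity
// Hypothetical sketch of an assets pre-compile interface; the names
// and signatures are illustrative only.
interface IAssets {
    /// Transfers `amount` of asset `assetId` to `to`. The runtime-side
    /// implementation charges the weight of the underlying pallet call
    /// against the transaction's remaining gas before executing it, so
    /// the operation stays safely metered.
    function transfer(uint32 assetId, address to, uint128 amount)
        external
        returns (bool success);
}
```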

Would be great to have an example of this, or similar, in our documentation.

1 Like

Once we expose these features as precompiles, not only can you perform them atomically through a contract call, but you can also interact with them using any Ethereum frontend library. If you do this through a library like viem.sh, you’ll even get type safety from the TypeScript types inferred from the generated Solidity ABIs.

Note that a chain could integrate pallet-revive solely to enable these interactions—contract deployment and instantiation can be disabled in the pallet’s configuration.

I have so many questions…

  1. Are these precompiles tightly coupled to Solidity? Would Ink! and/or other languages that can compile to PVM be able to leverage these precompiles?

  2. Are these already deployed on Westend? If they aren’t… is there an ETA? Or can I try them somehow on a local/development chain?

  3. Would it be possible to see an example of a Solidity (or Ink!) contract that atomically does something as simple as what I described here :folded_hands:?

  4. How will these pre-compiles deal with runtime upgrades? Will all contracts that reference the “old” pre-compiles break when there is a runtime upgrade that significantly changes interfaces and/or performs a state migration? :thinking:

  5. For real? Do you really think it will be possible to create safely-typed XCM interactions using TypeScript thanks to the Solidity ABIs that these precompiles will produce? I have a very hard time believing that… But I would love to be proven wrong, of course! Do you have an example of this :folded_hands:?

Precompiles will use the Solidity ABI just like any other contract.
As long as your contract’s language can encode/decode the Solidity ABI, you should be good to go. In Rust you can use the alloy crate to do that.

We are planning to release the pallet-assets and XCM precompiles in 2506; Kian is working on the staking precompiles here.

How will these pre-compiles deal with runtime upgrades? Will all contracts that reference the “old” pre-compiles break when there is a runtime upgrade that significantly changes interfaces and/or performs a state migration? :thinking:

The interface implemented by precompiles needs to be backward compatible so that it does not break existing contracts that use it. You can, however, update the implementation that lives in the runtime.

Obviously the type system in Solidity is not as rich as in Rust, so most likely what you will do is expose the most-used XCM operations as interface methods and build the XCM inside these precompiles.
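
Purely as an illustration of that idea (this is not the actual planned XCM precompile):

```solidity
// Illustrative only: instead of asking contracts to construct
// SCALE-encoded XCM programs, the pre-compile exposes a common
// operation as a plain method and builds the XCM inside the runtime.
interface IXcmTransfer {
    /// Reserve-transfers `amount` of asset `assetId` to `beneficiary`
    /// on the destination parachain.
    function transferToParachain(
        uint32 destParaId,
        uint32 assetId,
        address beneficiary,
        uint128 amount
    ) external returns (bool success);
}
```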

1 Like

Thank you for bringing up this topic, Josep. This is exactly what we have been working on with Pop Network via our treasury proposal.

To start, here you can find an example contract that interacts with the NFTs pallet and that you can deploy on our testnet: it queries the next collection ID and then creates a collection, all in one contract message. A contract message can interact with the runtime as much as it wants, via queries or execution of extrinsics, but of course it has to respect the weight it uses and thus the fees.

We are currently working on converting our Pop API, which uses a Chain Extension, to a Precompile, making it ready for pallet-revive. Note that there is a difference in how we have designed the Pop API: one versioned “precompile” for the entire runtime, compared to the precompiles that will be deployed on Polkadot Hub, where there will be one precompile per use case (interesting post about this here). In the upcoming days we will explain what we have built for Pop Network, and why, in more detail.

To the rest of your questions.

  1. No, any contract that compiles to PVM will be able to interact with these precompiles. However, precompiles have to use the Solidity ABI, and thus the contracts interacting with them must too. Please correct me if I’m wrong. ink! will be compatible with the sol ABI.

  2. I’m not sure about Westend, but you can deploy the contract shared above on Pop Network’s testnet.

  3. See the example contract linked above.

  4. As mentioned in @Cyrill’s post here, precompiles should never change their interface; instead, a new one has to be created if an interface needs to change. By contrast, Pop API’s thinking has been to create a versioned interface.

On making the Polkadot Hub a rich platform for smart-contract-based solutions: I invite everyone to check the list of pallets that these upcoming contracts will coexist with. I think the community should feel empowered to create pre-compiles that cover not only the pallets initially planned by Parity (assets, NFTs, governance) but also the other pallets in that list that may be useful for contracts, and, beyond that, to suggest and build the pallets that the Polkadot Hub will need to compete in the wider blockchain industry.

6 Likes

Why not ink!?

Ink! will use this kind of crate behind the scenes to encode/decode messages; that’s what I meant.

1 Like

I totally get the importance of backwards compatibility when exposing precompiles. But I still find it hard to see how this would work reliably in practice.

Polkadot pallets are constantly evolving. They migrate state, introduce new abstractions, move their state across chains, and rarely (if ever) aim for stable or standardized external APIs. So how can we realistically expect a set of precompiles, which are bound to those pallets, to remain stable and backwards compatible over time?

In practice, there seem to be two unappealing options:

  1. Expose only a tiny, minimal interface – so minimal that it’s barely useful.
  2. Expose more meaningful functionality, but then risk coupling to internal implementation details and facing compatibility issues when things inevitably change.

Take the staking precompiles that Kian is working on as an example. They look very clean and stable - but also extremely minimalistic. So minimal, in fact, that you wouldn’t be able to build anything like a proper staking dashboard on top of them.

For instance:

  • How would a nominator know when they need to rebag themselves (or put themselves in front of another nominator) in order to earn rewards?
  • How can they evaluate validator performance before nominating?
  • How do they know when the next reward payout is happening?
  • What about accessing basic stats like total DOT staked, average reward rate, or validator oversubscription?

All of this requires rich, stateful insight, but you can’t feasibly expose that through Solidity-based precompiles… not without some sort of escape hatch giving direct access to state outside pallet-revive. However, that’s completely unfeasible because the state outside pallet-revive is SCALE-encoded, so a Solidity interface can’t provide such an escape hatch… :person_shrugging:

And therein lies the fundamental limitation: if we want precompiles that do anything meaningful (beyond ERC20-style standards), we run into serious limitations due to encoding formats, evolving pallet logic, and the lack of stable interfaces.

So, circling back to the original question — can pallet-revive help improve FRAME-specific UX flows? — I’d say, unfortunately, the answer is a pretty clear no, at least for now.

That said, I’m absolutely in favor of building a few rock-solid, standardized precompiles - especially for asset interoperability with Ethereum standards (ERC-20, ERC-721, etc.). That’s crucial and highly valuable.

But I think it’s unrealistic to expect this approach to scale to complex, dynamic, state-driven flows that many real-world DApps need.

Hopefully, there will come a day when the whole runtime runs on PVM, and perhaps then we will be able to have composable contracts that can natively interact with the rest of the chain’s functionality… A man can dream! :crossed_fingers:

1 Like

You are just asking for two conflicting things: expose a very rich, detailed API surface that is also absolutely stable. That is just not possible. You have to pick one. Contracts picks the latter. And just because it doesn’t expose every little detail doesn’t mean it is useless. You can always cautiously expand the functionality. Evolving it is no problem as long as we don’t change existing behaviour.

The VM our runtime runs on has absolutely NOTHING to do with it. It is a low-level detail that is completely opaque to off-chain code.

I think Josep’s sentence about the “two unappealing options” wasn’t a wish-list of both things; he meant that each option, on its own, is unattractive:

  1. Expose only a tiny, minimal interface: Great for safety, but not so great for builders. If the precompile shows very little, contracts still need kludgy off-chain workarounds or multi-tx flows—exactly the UX pain he was highlighting.
  2. Expose richer functionality via precompiles: Great for short-term Solidity onboarding, dangerous for Polkadot’s long-term evolution. Because precompile ABIs must stay frozen, every future runtime breakthrough would have to preserve yesterday’s interface—or fragment state across new addresses. Cyrill summed this up well in his post (“we do not break contract-space”).

Polkadot’s edge has always been its freedom to evolve rapidly at the protocol layer; locking system pallets behind immutable ABIs risks dulling that edge.


The problem

To me, the technical debate points to a deeper problem: communication.

  • A while back the community agreed on the Plaza strategy (Polkadot pallets + EVM hub). Back then nobody could map every downstream consequence.
  • Inside Parity they most probably have been wrestling with real-world constraints, trade-offs, and deadlines. From the outside, most of us only glimpse the finished decision: “Polkadot Hub will launch with NFTs, tokens, XCM and governance precompiles.”
  • By the time external teams understand the fallout—UX trade-offs, ABI lock-in, economic impacts—those decisions already feel baked.

In short: the rest of the ecosystem can’t help course-correct if we don’t see the course map until the ship has sailed.


A way forward

Polkadot is a decentralised network of teams—from one-person dev shops to VC-backed companies—who all care deeply about its success. If we treat Hub design as an open RFC process instead of an internal deliverable, we can:

  • Crowd-source edge-case feedback before ABIs freeze.
  • Share the rationale behind tough calls (security, resourcing, timelines).
  • Align the Hub’s feature set with real DApp builders’ needs, not just our guesses.

Polkadot Hub can be a huge win - if we keep the conversation two-way. Together we should hammer out which precompiles truly belong in Hub v1, which can stay experimental (let’s use Paseo and Kusama!), and how we’ll revisit the set without kneecapping innovation down the road.

3 Likes

Not exactly. I wasn’t asking for that — I was pointing out that both of the currently available options are unappealing, as I wrote explicitly:

Let me clarify my point further:
What I’m trying to highlight is something that, quite frankly, should be obvious: there’s too much of an impedance mismatch between Solidity-based contracts and the current FRAME-based runtime. That mismatch makes it extremely unlikely that Solidity contracts will ever interact meaningfully with complex pallet logic (e.g., staking, governance, etc.).

Sure, precompiles can be very helpful for exposing basic asset-level functionality, and that’s nice and all… but let’s not fool ourselves pretending that these precompiles can also fully support complex, stateful flows like staking or governance.

I remain especially skeptical about how XCM-related precompiles will pan out… but I’ll withhold judgment until I see something concrete.

That’s a bit of a straw man. I never said “every little detail.” I actually mentioned basic, essential staking operations… like rebagging, reward prediction, validator analytics (so that nominators can make informed decisions when nominating), era information, etc. These aren’t edge cases. They’re fundamental building blocks of any real staking UX.

If you believe the current precompiles are sufficient to build a meaningful EVM-based staking DApp on Polkadot, then please have someone build one and show it to me. I would love to be proven wrong and stand corrected.

Can you, though? The more functionality you expose, the more brittle the interface becomes… and the harder it is to evolve the underlying pallets without introducing breaking changes. This is particularly risky for areas like governance and staking, which are still actively evolving.

I’m not trying to be contrarian here, I’m just trying to be realistic. My concern isn’t about technical feasibility alone, but about maintainability and practical developer experience. Right now, I don’t see a clear path for Solidity contracts to deeply integrate with Polkadot’s ever-evolving runtimes.

Time will tell, of course, and I genuinely hope I’m proven wrong.

1 Like

Sure as hell sounds like you are. I don’t see any constructive criticism here, just a rant about how everything is unappealing and how it will never work out because of some “impedance mismatch”. I would really appreciate having a discussion with you and not with ChatGPT.

The pre-compiles are not meant to be a replacement for off-chain code. They are meant to facilitate things like a DAO and such. And if you see them lacking important APIs, please comment on the PR with some constructive feedback.

2 Likes

:thinking: Ok, so perhaps there has been a misunderstanding here…

I was under the impression that the goal was to empower Ethereum DApp devs to use their usual tooling (Viem, Ethers.js, Web3.js, etc.) to interact directly with Polkadot AssetHub. E.g. build a staking DApp entirely with Ethereum tools.

What I’m hearing now is more like: since Solidity contracts can’t expose every bit of functionality you’d need for a full-featured DApp on AssetHub, Ethereum DApp devs will have to either:

  1. Mix their Ethereum tooling with non-Ethereum tools to bridge the gaps,
  2. Lean heavily on indexers/aggregators,
  3. Or most likely, do both.

Is that a fair summary of what you’re saying?

If that is not what you are saying, then could you please put yourself in the shoes of an Ethereum DApp developer who wants to build on Polkadot AssetHub, and explain how they will manage to build a DApp which interacts with Polkadot’s core functionalities? :folded_hands:

1 Like

You are very welcome to suggest new features and to contribute to the code. We are very busy up to September, but if you have concrete asks, please state them and ideally add a PR.

Polkadot is late to the EVM game. After we push the first version to Polkadot, we will open up the process more. There is a long wishlist of things that are not in the V1, and collaboration is the way to decide what goes into the V2: performance improvements, which precompiles to add, enhanced compatibility, etc.

Pierre

Nit: APIs are rarely perfect, and even less so when they need to keep compatibility with the past. @Alex explained well why that is the case.

Maybe you could help and give us a list of things that should happen in order of priority to be a “good platform” for an Eth Dapp dev?

1 Like

The most important goal is to allow Ethereum-native apps to run with minimal changes on the Polkadot Hub. And yes, those can use their existing JavaScript tooling. I don’t think people already in the Polkadot ecosystem should migrate away from Papi or Polkadot.js in order to build a staking Dapp.

That said, my hope is that the staking pre-compile exposes enough functionality to build a staking Dapp using Viem if people want to. Not sure if that can be done from the get-go or whether we need to iterate. If you have insight into which functions are missing, please speak up now.

No. I mean that we need to think hard about how low-level the pre-compile API can be so we can guarantee its stability. And this is probably not possible in the first deliverable. The earlier we get feedback the better (from you, for example). We should expose as much as we have to, and as little as we can, while still allowing a useful frontend to be written. This includes simple things like error codes: in the pre-compile we should just map them to success/failure, since it will be hard to keep them stable. Error handling should not happen on-chain.
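
As a hypothetical illustration of the error-code point (not an actual interface):

```solidity
// Hypothetical sketch: the pre-compile collapses the pallet's rich
// error enum into a bare success flag, because individual error
// variants may change across runtime upgrades while `bool` never will.
// Rich error diagnostics belong off-chain, e.g. by dry-running calls.
interface IStakingMinimal {
    /// Bonds `amount` for the caller. Returns false on any pallet
    /// error, without distinguishing which one.
    function bond(uint128 amount) external returns (bool success);
}
```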

I can’t predict all the possible use cases. And I don’t have time to put myself in a builder’s shoes, sorry. I rely on feedback. Therefore, what I can do is start with the minimal API I can guarantee to be stable, and add more stuff on demand. The last thing we want is to break existing Apps. As a matter of fact, that will be considered a bug.

1 Like

:warning: DISCLAIMER
I’ve been accused of using ChatGPT to generate my replies in this post. Just to be crystal clear: every idea, argument, and point made in my responses is mine. I do use ChatGPT to help me rephrase my thoughts, not to generate them. English isn’t my first language, and I tend to write in a very direct, to-the-point, and sometimes confrontational way that some people perceive as aggressive.

Frankly, I use ChatGPT to soften the tone so that oversensitive people (aka “snowflakes”) can focus on the what instead of getting distracted by the how.

If tone matters to you more than substance, then feel free to copy-paste what’s coming below into ChatGPT and ask:

“Can you rephrase this so that it doesn’t come across as aggressive?”

That’s literally what I’ve been doing.

You’ve been warned. If you choose not to do that, please don’t come back whining about tone. Let’s keep the focus where it belongs: on the actual arguments.


This right here is the heart of the problem. If you’re not willing to think like the people who actually use what you are building, then you’re building in a vacuum. From your ivory tower it might feel like you’re crafting a masterpiece, but without walking a mile in a DApp developer’s shoes, you’re almost guaranteed to miss the mark.

Put simply: stop designing for hypotheticals and start designing for your users. If you can’t (or won’t) empathize with the builders, then don’t be surprised when your “solutions” end up gathering dust.

I surely can! Gosh, I’m glad you asked…

Step one (and this really shouldn’t be controversial) is to define the scope of what should realistically be achievable using only Ethereum tooling and Solidity smart contracts. Because let’s be honest: an Ethereum DApp built with Ethereum tools alone (and using their wallets) is never going to be able to leverage the full potential of Polkadot.

And there’s one especially glaring limitation that comes to mind: multi-chain DApps.

You see, Ethereum tooling was never designed with multi-chain DApps in mind. I’m sure you’ve tried your fair share of Ethereum DApps (like me), and you’ve probably noticed that the moment your wallet isn’t connected to the “right” network, the DApp immediately nags you to switch.

That’s not a coincidence. Most Ethereum DApps rely on wallets as their primary JSON-RPC provider, and those wallets follow the EIP-1193 spec, which includes things like the chainChanged event. In practice, this means your provider is tied to a single chain at any given moment.

Now, sure, technically you could build a multi-chain DApp with Ethereum tooling. You’d spin up multiple providers, use custom infrastructure, and juggle a few headaches. But when it’s time to actually sign a transaction… oops! You’re back to using the wallet provider… and that’s where things become a massive PITA.

Anyhow, all of that is actually a moot point. Because realistically, not every Polkadot parachain is going to implement pallet-revive, spin up the extra infrastructure needed to support the Ethereum JSON-RPC endpoints, and expose a full set of Solidity precompiles tailored to their own custom runtime logic. So let’s be real: this isn’t happening in the foreseeable future.

Also worth noting: no one in Ethereum land is working on multi-chain DApp tooling either.

So yeah, I think it’s fair to draw the line and say: multi-chain DApps are out of scope for anything that sticks to Ethereum tooling. If you’re targeting cross-chain functionality (Coretime, Hydration, People Chain, AssetHub… you name it), then you’re gonna need Polkadot-specific tools. Period.

Let that sink in for a second.

If you’re building a DApp that touches multiple parachains, you’re already forced to reach for Polkadot-specific tools. At that point, why bother dragging Ethereum tooling into the mix? It becomes pure overhead.

And yes, eventually PAPI will offer a Solidity SDK, the same way we have an Ink! SDK. But let’s be honest: if I were building a serious multi-chain DApp, I’d much rather use Ink! contracts. So I think it’s a bit sad that Ink! is being treated as a second-class citizen, but I digress, sorry.

So, I think it’s safe to say we’ve ruled out multi-chain DApps. Which brings us to the next obvious question:

What else should be out of scope?

Excellent question, @pierreaubert. So glad you asked!

It’s pretty obvious to me that anything outside of asset management should be completely out of scope. At least for now. For real: keep the API surface of these precompiles as small as humanly possible.

I strongly recommend excluding everything that isn’t directly tied to asset handling. Focus only on making Polkadot assets compliant with established Ethereum standards like ERC-20, ERC-721, etc. Nothing else. Pick the standards you want to support, and then very carefully implement the minimal interface required to meet those standards, and only those standards.
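
For reference, the complete ERC-20 (EIP-20) interface is tiny; this is the entire surface an AssetHub asset exposed as a contract would need to implement (plus the optional name/symbol/decimals getters):

```solidity
// The canonical ERC-20 (EIP-20) interface.
interface IERC20 {
    function totalSupply() external view returns (uint256);
    function balanceOf(address account) external view returns (uint256);
    function transfer(address to, uint256 amount) external returns (bool);
    function allowance(address owner, address spender) external view returns (uint256);
    function approve(address spender, uint256 amount) external returns (bool);
    function transferFrom(address from, address to, uint256 amount) external returns (bool);

    event Transfer(address indexed from, address indexed to, uint256 value);
    event Approval(address indexed owner, address indexed spender, uint256 value);
}
```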

Once that’s done, the next step isn’t “what else can we cram in?”—it’s proving that this minimal approach actually works. And please, for the love of all that is modular, don’t try to expand anything until you’ve validated that core hypothesis.

But how do we prove it worked?

I’m so glad you asked, @pierreaubert. You’re really on a roll today.

Here’s one idea: deploy a well-known Ethereum DApp as-is, without any rewrites, on Westend AssetHub. Pick something that’s asset-heavy and leans fully into the standards you’ve chosen to support. A perfect example would be deploying the open-source contracts and frontend of Uniswap into Westend AH.

Have someone at Parity do it (or better yet, outsource it to a real-world Ethereum team to get a more honest take on DX and compatibility). If that DApp can be deployed and used without major compromises, then… CONGRATS! you’ve got a win. Only then should expanding the precompiles even be on the table.

But of course, that leads to the real question:

How should precompiles that touch the rest of Polkadot’s core functionality be designed?

Yet another great question, @pierreaubert! You’re on fire today.

Let’s be honest: these precompiles are going to have far-reaching consequences. Once introduced, they will severely constrain how things can evolve, because backwards compatibility becomes a hard requirement.

Given that, I’m afraid that there’s only one sane path forward:
Start by carefully and methodically defining a set of standardized interfaces, based on real, long-term use cases… interfaces that are thoughtfully discussed, scrutinized, and agreed upon by the community of experts (and just to be super clear: I’m not referring to protocol developers at Parity living in their comfy ivory towers).

And look, I’ll just say it: this is something that Polkadot has historically struggled with. We’ve never had a strong culture of defining durable, community-driven standards. I genuinely wish we had a Foundation that excelled at promoting standards, but sadly: we don’t.

That said, this is an incredible opportunity to start changing that. A moment to build some real muscle around open standards.

So, please: don’t waste time and resources rushing out precompiles that no one’s sure how to use, when to use, or whether they’ll even survive runtime upgrades. First, define the flows and interfaces that are built to last. Once we have that clarity, then and only then should we start thinking about how those standards could map onto minimal Solidity-facing precompiles.

The last step would be to deliver an Ethereum DApp on Polkadot that showcases the success of these new standards and contracts (e.g., a fully fledged staking DApp).

So, to summarize, here’s what I believe should happen in order of priority (as you asked :saluting_face:):

  1. Define a clear scope and set of realistic expectations. Understand both the limitations and possibilities of Ethereum-based DApps on Polkadot. Don’t overpromise what can’t be delivered.

  2. Build only the essential precompiles, those strictly needed to comply with established, widely adopted Ethereum standards (e.g., ERC-20, ERC-721) that AssetHub should support out of the gate.

  3. Prove it works. Deploy a real, standard Ethereum DApp on Westend AssetHub using those precompiles (Uniswap, Aave, SushiSwap, whatever). No shortcuts. No “hello world” demos. Show the thing actually running.

  4. If, and only if, that works, begin designing standards for other parts of the Polkadot ecosystem like staking and governance. But do so with a deep awareness that these areas are evolving, and any standards must leave room for growth.

  5. Craft minimal, durable precompiles based on those standards.

  6. Showcase success again: this time with a full-fledged DApp that exercises those new interfaces. A proper staking dashboard, for example, that actually works using just the Ethereum tooling and the new precompiles.

Honestly, I’d be shocked if we even make it to step 3 before the year is over. And frankly, even if we do, I’d be extremely cautious about pushing beyond that. Like it or not, the Ethereum tooling stack will never unlock the full potential of Polkadot, not without major compromises.

That said, I genuinely love what’s happening on AssetHub, especially the idea of having contracts available out of the box. I just wish those contracts could also power the real-world use cases we actually need… like the concrete example that kicked off this forum post.

I think that Ink! supports upgradable/versioned contracts, and from what @Daanvdplas has shared, Pop-API has been working on another approach. But if everything is coupled to minimalist Solidity precompiles, we’re screwed. Maybe I’m missing something, but until there’s a path for more sophisticated flows, we’ll keep bumping against the same limitations.

Bottom line: I love AssetHub’s momentum, but please, let’s deliver contracts that aren’t just a comfort blanket for Ethereum devs; let’s build something that solves our real problems, too.

7 Likes

Hey @josep,

What’s that for? You want people to engage and listen to you?

This is not conducive to proper discussion. This is not the tone we want on this forum.

Pierre

2 Likes