We are already in talks about deploying Uniswap on AssetHub, and more DeFi products could follow. We will update the forum in the coming days.
However, liquidity is the primary concern everyone has when deploying those contracts. We would either have to invest heavily ourselves or run this infrastructure as a public good.
Hi everyone, I'm Yakio from Subscan Explorer. I'm really excited about the progress and updates shared here. I completely agree with the points raised about the need for robust tools to support both developers and users. We're very interested in this development.
With our experience in supporting EVM contracts, we've provided services like contract transactions, read/write functionality, and contract verification for over 20 networks, including Moonbeam, Astar, and Darwinia.
Subscan also has a strong track record in enabling seamless interactions with AssetHub pallets and facilitating cross-chain contract calls between parachains and the relay chain, offering a comprehensive, one-stop service.
At the same time, we know that improving the user experience and integrating deeper EVM/PVM functionality is essential. We're eager to work closely with Parity, stay in sync with development progress, and make sure we align with the PVM rollout to provide efficient and user-friendly tools for the community and developers.
Feel free to reach out to me on Element: @yakio:matrix.org
Thanks for raising this point! I think Subscan could also be a strong choice as a block explorer solution for AssetHub. We have extensive experience supporting both EVM and Substrate-based networks, and we're already providing comprehensive services like contract verification, read/write interactions, and cross-chain functionality across more than 20 networks, including Moonbeam, Astar, and Darwinia.
Given that Subscan is already deeply integrated with both EVM and Substrate ecosystems, we could offer a highly customizable and cost-effective solution for Polkadot AssetHub, Kusama, and Westend as well. Our flexibility and experience in managing similar challenges make us well-suited for supporting the specific needs of AssetHub while keeping the user experience smooth and developer-friendly.
JAM implementers like us will definitely want to use this revive Solidity-to-PVM compiler for refine-accumulate (and transfer) code. Right now we are hand-assembling code like this fibonacci refine-accumulate and using @koute's assembler here to build test JAM "services" as if it's 1964.
We request a Revive Solidity-to-PVM compiler that would not only target pallet-revive for Polkadot developers but also target JAM services. An MVP would be ideally usable by JAM implementers in early 2025.
JAM services require host function calls, and thankfully revive already maps opcodes that interact with the runtime to an EVM-compatible ecalli, managed by pallet-revive. A similar "hook" within Solidity to access the two dozen or so host functions in the GP's Appendix B (import, export, read, write, lookup, etc.) through revive's ecalli would be transformational.
There are already early efforts for C/C++ by JAM Brains from @OliverTY, @francisco.aguirre, et al.; it would be natural to have a parallel effort for Solidity from revive to match C and C++.
It would be bad to delay pallet-revive appearing on PAH, but if it's only slightly more work to add JAM services as a target, it will make a difference to Polkadot's JAM future.
My assembler isn't meant for actual end-user use and pretty much only exists to make testing and generating test programs easier. If you don't want to assemble services like it's 1964 you can just use… normal Rust.
Take a look at the guest programs in the PolkaVM repository. Those are all valid JAM programs, with the only difference being that they are packaged in a .polkavm container (and they don't have the hardcoded JAM dispatch table at the start, so they're not valid JAM top-level services, but they can be executed in an inner JAM VM). You can trivially extract the code section from a .polkavm blob and run it on your own PVM implementation. Currently this requires writing a tiny bit of Rust code to call ProgramParts::from_bytes and extract the code_and_jump_table field, but I suppose we could add a subcommand to polkatool to make this possible on the command line. Ideally, though, everyone would standardize on a single program interchange format so that all of the tooling can be shared and things like debug info can be supported.
Calling JAM-specific hostcalls is also easy. Here's a snippet with a few JAM host functions defined (calling these from Rust will trigger the appropriate ecalli instruction):
Anyway, this is completely off topic here. If you want to talk about this either make a new topic, or create an issue in the PolkaVM repo, or message me on Element and I can help you out to get the toolchain going.
Another update, since we are still not on Westend: it is almost done, but we hit a few minor issues that we are resolving.
REMIX
The compiler backend for REMIX was hanging when multiple requests came in, so we needed to add some basic load balancing. There were also some other minor issues where the backend failed to initialize while busy. That said, the backend is now working and the code can be found here. We are still cleaning up and fixing some minor issues in the frontend. Please note that the original REMIX doesn't have any backend, since it runs the compiler inside the browser. This wasn't trivial in our case because our compiler is based on LLVM, which is too heavyweight to run in the browser (because of the 3GB memory limit). However, we plan to get rid of the backend as a next step and move the compiler into the browser; there are probably some optimisations to be had to make this possible.
For the frontend we currently maintain a fork so that it can talk to our backend. There are not many other changes apart from deactivating features we do not support yet.
Eth compatibility
We went back and forth on the design of how to package an Eth transaction into an extrinsic; the original approach wasn't working too well. We now more or less dump the Eth transaction as-is into an unsigned extrinsic, which makes it easier for Eth block explorers to decode it. This took a lot of time and iteration. We are planning to merge this into polkadot-sdk very soon (i.e. this week) and then add pallet_revive to the Westend runtime.
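Schematically, the "dump as-is" idea can be sketched as follows. This is a toy Python encoding only: the real polkadot-sdk extrinsic uses SCALE encoding with actual pallet/call indices, so the envelope bytes below are purely illustrative.

```python
# Toy sketch: wrap a raw RLP-encoded Ethereum transaction, unchanged, in an
# unsigned extrinsic envelope. The call index and length prefix are made up
# for illustration; polkadot-sdk uses SCALE encoding instead.

def wrap_as_unsigned_extrinsic(raw_tx: bytes, call_index: bytes = b"\x28\x00") -> bytes:
    # [call index][4-byte little-endian length][raw Eth tx bytes]
    return call_index + len(raw_tx).to_bytes(4, "little") + raw_tx

def unwrap(extrinsic: bytes) -> bytes:
    # An Eth block explorer can strip the envelope and recover the exact RLP
    # bytes, then decode them with its existing Ethereum tooling.
    length = int.from_bytes(extrinsic[2:6], "little")
    return extrinsic[6:6 + length]

# Round trip: the inner transaction survives byte-for-byte.
raw = bytes.fromhex("02f87001018405f5e100")  # truncated example tx bytes
assert unwrap(wrap_as_unsigned_extrinsic(raw)) == raw
```

The key property is that the inner bytes are untouched, which is what makes decoding cheap for Eth-native tooling.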
pallet_revive
We realized that some features needed to be implemented before we can launch a testnet, because almost every contract makes use of them. A lot of changes were made to pallet_revive since the last update, but those should be the last before we can launch a testnet:
The latest one adds support for immutable variables in Solidity, which is tricky to support since we deploy code differently from the EVM (we have on-chain constructors). This is the last change to the pallet before we think it can run enough contracts to be ready for a testnet.
revive
@Cyrill was mainly busy implementing the above features in pallet_revive, as he discovered that he needs them to implement all the EVM opcodes. We are still missing some auxiliary opcodes (like GASPRICE) and precompiles, but those can come after the testnet is deployed, as not every contract needs them.
PolkaVM
We hired two new people to work on PolkaVM. The highest priority right now is to get 64-bit support implemented, as this is what we deem necessary for a Kusama deployment; otherwise we would need to stay backwards compatible with 32-bit, which we would like to avoid. @koute is mostly busy implementing/debugging advanced features for JAM (user-space page faulting). He also had an idea on how to get rid of our custom toolchain, which we plan to give to one of our new joiners as an onboarding task.
tl;dr
Our first milestone is still a Westend deployment plus a REMIX instance for people to experiment with and give feedback. It is as close to done as it gets.
Parts of PolkaVM are/will be standardized in the JAM graypaper; the rest of the stuff which is PolkaVM-specific and not specced by the GP (the container format, the debug info format, etc.) I am planning to eventually spec out. I don't have a timeline as to when exactly that will happen.
Will it be possible in the upcoming Plaza to access the BEEFY MMR root, to prove state or extrinsics that exist or have occurred on Bridge Hub (or on any parachain)?
For example, will it be possible to prove the state of the Ethereum light client of Snowbridge from a smart contract deployed on Plaza?
In my opinion what's needed is just Forge support (ditching Hardhat and REMIX) and, of course, that contracts of the complexity of Uniswap V3 can be deployed (no side deployment of Uniswap is needed; an official one, approved by UNI governance and deployed end-to-end by Uniswap Labs, would be very good but is not needed from the start at all). As for Etherscan, I don't think it is a good idea to pay that bill at the beginning of the chain; it won't make a substantial difference at such a stage (Subscan would probably be more suitable, and you can always run a Blockscout for a fraction of the cost with the same impact for a full EVM experience).
Theoretically, you will be able to do heavy number crunching in your contract. But you probably want to implement the algorithm in Rust or C and then call that from Solidity. I am also not sure if it is viable in the first version where we will still rely on the interpreter.
Can you elaborate on the why? I think you are referring to the whole Foundry umbrella because Forge is just their testing framework.
I disagree for two reasons:
It's a chicken-and-egg problem: people will only come if you have Etherscan, while you will only pay for Etherscan when there are people.
A block explorer needs to be centralized: it relies on contract verification to do anything meaningful, and that is unfortunately a really centralized process.
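To make that point concrete, contract verification essentially means re-running the build and comparing outputs. A minimal sketch, with a stand-in hash in place of a real compiler like solc/resolc:

```python
import hashlib

# Stand-in "compiler": hashes source plus settings. A real verifier would run
# the exact solc/resolc version with the submitted settings and compare the
# produced bytecode (or its code hash) with what is stored on chain.

def fake_compile(source: str, compiler_version: str, optimizer: bool) -> bytes:
    return hashlib.sha256(f"{compiler_version}|{optimizer}|{source}".encode()).digest()

def verify(source: str, compiler_version: str, optimizer: bool,
           onchain_code_hash: bytes) -> bool:
    # Verification succeeds only if recompilation reproduces the deployed code
    # exactly; whoever runs this service is the trusted party.
    return fake_compile(source, compiler_version, optimizer) == onchain_code_hash

deployed = fake_compile("contract A {}", "resolc-x.y", True)
assert verify("contract A {}", "resolc-x.y", True, deployed)
assert not verify("contract A {}", "resolc-x.y", False, deployed)
```

The centralization comes from the fact that one operator runs this recompilation and everyone trusts its published results.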
Yes, indeed I'm referring to Foundry. The reasons are: (1) Experience. The developer experience you get with forge, anvil, etc. is far superior to Hardhat's.
(2) Adoption. It's difficult to imagine a new project choosing Hardhat for tooling. I don't see why one would launch a new EVM chain supporting only outdated (or soon-to-be outdated) tooling.
I disagree with the assumption that "people will only come if you have Etherscan". It is definitely not a must for launching. What are the shortcomings of Subscan, by the way?
I don't think that developers will come because of Etherscan. On the other hand, connecting with the Foundry point, not supporting it will be a deterrent for devs.
Regarding users, they'll come for the apps, not for Etherscan, as the key value proposition for adoption. So you'll need apps that provide some differentiator, not just a deployed Uniswap with unattractive pairs.
Do you have some data for that? Because obviously we want to focus our resources on what is needed most.
It has exactly zero verified contracts for Moonbeam. So I am not sure what is going on here, but without contracts being verified the value of a block explorer is fairly minimal.
Maybe a long shot, but it seems the Foundry guy, @gakonst, is an ink! fan and has been thinking about RISC-V, so maybe someone technical could reach out to him and see if it's possible to adapt Foundry for our use case.
Since forge doesn't really have a plugin system, we would essentially need to fork it and then upstream support for our custom Solidity compiler (resolc). That is probably the easy part, as long as we can make resolc behave similarly enough to solc. But Forge's biggest feature is its tight integration with revm, which allows running tests written in Solidity and offers powerful debugging features. To support this we would probably need to package pallet_revive in a way that offers a similar API to revm. This is all possible, but we would need to find out whether the author would be willing to merge our features.
That being said, I would really like to settle the Forge vs. Hardhat debate. Do we need both? Because even if, as you said, no new projects would use Hardhat, we are mainly after existing projects porting over.
I think we need to gather some data. The Ethereum Foundation data is quite useless because it relies on, in my opinion, a badly designed survey.
If possible, I think we need to scrape GitHub to get an overview of the landscape of tools people are using. There are certainly files that belong to a specific build tool (Forge vs. Hardhat) which would tell us what a project is using.
Instead of scanning the whole of GitHub, maybe you could sample the current/new versions of the top relevant projects to check what tooling they are using, when the repo was last updated, etc.
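The marker-file idea could be sketched like this. The file names are the conventional ones for each tool, but the classification rules themselves are my own assumption:

```python
# Classify a repository's EVM tooling by which config files sit at its root.
# In a real scrape, root_files would come from the GitHub API (e.g. the
# repo's top-level tree listing).

MARKERS = {
    "forge": {"foundry.toml"},
    "hardhat": {"hardhat.config.js", "hardhat.config.ts"},
    "truffle": {"truffle-config.js"},
}

def classify(root_files: set[str]) -> list[str]:
    # A repo can match several tools (e.g. during a migration).
    found = [tool for tool, names in MARKERS.items() if names & root_files]
    return found or ["unknown"]

assert classify({"foundry.toml", "src", "README.md"}) == ["forge"]
assert classify({"hardhat.config.ts", "package.json"}) == ["hardhat"]
assert classify({"Cargo.toml", "src"}) == ["unknown"]
```

Pairing this with the repo's last-updated timestamp would give a rough picture of which tool active projects are choosing.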
Hi there! Thank you for raising this point; feedback like yours really helps us improve.
While it's true that Subscan's contract verification feature started a bit later, it is now fully functional and already supports several networks. For example, we have successfully provided contract verification services for Darwinia, Humanode, and Krest, with hundreds of verified contracts across these chains. You can check out the verified contracts on our dashboard:
However, for networks like Moonbeam, which have already integrated contract verification on Etherscan, asking developers to verify the same contracts on multiple platforms can feel repetitive. We understand the concern and are actively exploring ways to synchronize verified contracts from other platforms to make things more efficient.
That said, we don't see this as a weakness of Subscan. For a relatively new network in terms of EVM functionality, we are on an equal footing when it comes to data availability. Plus, Subscan continues to enhance its features and offers better cost-efficiency compared to other explorers like Etherscan.
Thanks again for your feedback, and weâre excited to keep improving!
This is great news! Does this mean you will have JSON-RPC endpoints for us to check out, e.g.:

- eth_getBlockByNumber
- eth_getTransactionReceipt
- eth_getTransactionByHash
- eth_getBalance (native balance of DOT)
- eth_getCode (PVM bytecode from revive)
- trace_block (or similar)

What about a Docker build with Solidity-to-PVM revive verifiability?
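For reference, these are the standard JSON-RPC 2.0 request bodies such an endpoint would need to accept. The sketch below only builds the payloads offline; the addresses and hashes are dummies and nothing is actually sent:

```python
import json

def rpc_request(method: str, params: list, req_id: int = 1) -> str:
    # Standard JSON-RPC 2.0 envelope used by all eth_* methods.
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

dummy_addr = "0x" + "00" * 20   # placeholder account address
dummy_hash = "0x" + "ab" * 32   # placeholder transaction hash

payloads = [
    rpc_request("eth_getBlockByNumber", ["latest", False]),
    rpc_request("eth_getTransactionByHash", [dummy_hash]),
    rpc_request("eth_getTransactionReceipt", [dummy_hash]),
    rpc_request("eth_getBalance", [dummy_addr, "latest"]),
    rpc_request("eth_getCode", [dummy_addr, "latest"]),
]

assert json.loads(payloads[0])["method"] == "eth_getBlockByNumber"
```

Any tooling that already speaks Ethereum JSON-RPC (explorers, wallets, Foundry's cast) could then be pointed at the endpoint unchanged.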