ERC-20 like Standard for Polkadot

While I agree with @rphmeier about what he wrote in Meta: Convention Creation over Standardization, I think fungible tokens are a well-enough-understood domain to at least attempt defining a standard for (or at least for some parts of it).

General Considerations

The following first discusses what is important to consider for a standard in a multi-chain environment.

Firstly, there exist three perspectives in this world:

  • Intra-Chain: Components of the same chain interacting with each other
  • Inter-Chain: XCM in our world here
  • Exter-Chain: External world interacting with a chain

The Inter- and Exter-Chain perspectives can sometimes be the same. Accepting this, it makes the most sense to define standards for each perspective separately, but not independently of each other.

Secondly, a standard in this world MUST NEVER rely on the structure of the state.

Ethereum standards like ERC-20 & ERC-721 are so successful because they define interfaces and leave it up to the implementor how to structure the state of its contract. As long as your contract adheres to the interface, it can contain additional functionality and arbitrary state, while still being compatible with it.

In Substrate this is currently not the case: pallets are the de facto, implicit definition of conventions. Taking pallet-balances as an example, all external integrations with a chain rely on the storage of TotalIssuance not changing its location in order to query the issuance of the chain's native token. I would argue that at this point it is almost impossible to refactor pallet_balances::AccountData or pallet_balances::TotalIssuance.

Having the above in mind,

Standardizing Exter-APIs

Substrate already provides the “interface-like” nature of Ethereum contracts for querying the state of a chain in the form of runtime APIs. These APIs are especially useful for off-chain computations or light-client integrations.

  • Retrieving information MUST be based on runtime-APIs
  • Retrieving information MUST NOT define/rely on specific storage locations or structures

Standardizing Inter-APIs

If we agree on developing standards based not on pallets but rather on interfaces, standardizing internal APIs is harder. While the ecosystem already uses common traits like fungibles::*, it would be extremely useful for tokens to have a standard for submitting extrinsics that transfer tokens. Pallet-Balances is the de facto standard for native tokens, Orml-Tokens the standard for other tokens.

In order to standardize an API from an extrinsic perspective, it is required that

  • The Pallet has the same (invariant) index in each chain’s runtime Call enum
    E.g.: OrmlTokens: orml_tokens::{Pallet, Storage, Event<T>, Config<T>} = 77
  • The Pallet’s calls have the same signature and order

With this, it would be possible to submit the same SCALE-encoded bytes for a call (omitting differences in the signing bytes) on different chains. It would also be great if we could standardize retrieving the SignedExtra via a runtime API (I know the metadata already contains it, but that is much harder to deserialize than simply getting back a blob to sign).

Standardizing Intra-APIs

The Intra-Chain perspective is not part of this proposal, as runtime APIs are currently not callable via XCM, and XCM already provides a detailed token-transferring standard that allows for compatibility.

Proposed Standard

  • Common runtime-API

    decl_runtime_apis! {
        pub trait Tokens {
            fn symbols() -> Vec<BoundedVec<u8, ConstU32<32>>>;
            fn name(symbol: BoundedVec<u8, ConstU32<32>>) -> BoundedVec<u8, ConstU32<128>>;
            fn decimals(symbol: BoundedVec<u8, ConstU32<32>>) -> u32;
            fn total_issuance(symbol: BoundedVec<u8, ConstU32<32>>) -> u128;
            fn balance_of(symbol: BoundedVec<u8, ConstU32<32>>, who: [u8; 32]) -> u128;
        }
    }
  • Common pallet at index x in the Call enum.

    trait Transfer<AccountId> {
        type Balance;
        type CurrencyId;
        fn do_transfer(who: AccountId, currency: Self::CurrencyId, amount: Self::Balance) -> DispatchResult;
    }

    trait Config: frame_system::Config {
        type Balance: TryFrom<u128>;
        type CurrencyId: TryFrom<BoundedVec<u8, ConstU32<32>>>;
        type HandleTransfer: Transfer<Self::AccountId, Balance = Self::Balance, CurrencyId = Self::CurrencyId>;
    }

    impl<T: Config> Pallet<T> {
        fn transfer(origin: OriginFor<T>, currency_id: BoundedVec<u8, ConstU32<32>>, amount: u128) -> DispatchResult {
            let who = ensure_signed(origin)?;
            let amount = amount.try_into().map_err(|_| Error::<T>::BalanceConversionFailed)?;
            let currency_id = currency_id.try_into().map_err(|_| Error::<T>::CurrencyIdConversionFailed)?;
            T::HandleTransfer::do_transfer(who, currency_id, amount)
        }
    }

Currencies are identified by their symbol, as symbols abstract over the specific CurrencyId enums of the respective chains.

I left out the allowance part of ERC-20 intentionally. Although it would be really beneficial to have, I guess it is also quite a big security risk.

Anyways, I hope we can discuss a bit here if this makes sense and I am happy to get some feedback on the idea.


While I’m not that technical, I’d love to see more standardisation in the ecosystem.

cc: @shawntabrizi I’m sure you’d like to comment on this

As noted here, we should standardize external callers into the Polkadot ecosystem via XCM messages, which can be processed locally and return data. Thus, the ERC-20 standard should not live at a pallet or runtime-api level, but at the XCM level.

Common runtime-API

Common pallet at index x in the Call enum.

Not needed if we use XCM.

As for “intra-chain” standardization, I don’t think we need, or should, force anything here. The internals of a blockchain (and any smart contract) should entirely be a black box. Since a blockchain or contract fully understands how its own internals work, we should give them maximum freedom to make decisions at that level.

As for the suggested API, it is pretty hard to disagree with anything, as this is just the existing ERC-20 standard. However, I think it is a miss not to include some kind of reference to locked tokens in the API, since this is quite standard in the world of staking. Perhaps it could be as simple as exposing a spendable_balance API; then balance - spendable_balance is some opaquely locked amount of tokens the user owns but does not have free access to.

XCM is a message format. While it does have some querying & subscription capability, I am not sure we actually want to use it as part of the querying & subscription standard.

And I would like to bring up Wasm view functions once more as a better solution to create a unified way to access runtime data.


There are certainly a set of XCM instructions which only make sense between multiple parties and over XCMP, however, as you said, XCM is just a message format.

It should be perfectly okay for a message which says:

  • “Return to me the list of available tokens on this chain.”
  • “For token X, what is its name as a string, total issuance, …?”
  • etc…

Some of these queries may explicitly not be allowed via XCMP since it might be just human metadata, and waste a lot of bandwidth, but there is no reason that a generic XCM format cannot support these kinds of messages.

Perhaps trying to wrap this into XCM is just confusing, and there should be some other query-language abstraction, but the key thing I want to note is that XCM sits at the right level for this kind of thing, to me.

For example, as mentioned above, we should not be implementing these things at the runtime-API or even the Wasm level, as these already make heavy assumptions about which kinds of things we can query. There is no reason we cannot try to push forward a standard querying/messaging system here which is used in all ecosystems, even those which do not build on Substrate and do not use Wasm.

Imagine that Solidity smart contracts could easily have an XCM query adapter on top of ERC-20 tokens, which understand the XCM queries, and return the expected result. We need to only agree about some abstract and generic message format which can be interpreted and implemented correctly, which is why I look to XCM, which is basically already trying to do this, but focusing so far on interactions between two consensus systems.

@shawntabrizi How do you envision such a standard looking in XCM? I am referring to the fact of actually submitting XCMs via an extrinsic.

A few remarks I think are critical here:

  • XCM logic is already heavy
  • A single entry-point for all standards is cumbersome to handle
  • Must be possible to develop independently of Parity
  • Must not be based on Transact – i.e. the possibility of passing arbitrary bytes

Regarding those two comments.

I disagree here. XCM should be capable of “triggering”/“carrying” such standards to other chains, but should not be the level where it is implemented or defined. Why:

  • XCM is rather a restricted instruction set than a message format → High unnecessary overhead going through the executor
  • Implementing an XCM executor on Ethereum is a huge overhead for something that already is a standard – ERC-20
    • The executor does not have burning/minting rights on contracts
    • Each bridge needs a separate handler or a separate executor
    • Doing BridgeMsg(XCM(ERC-20)) vs. BridgeMsg(ERC-20) is just overhead
  • Complexity
    E.g. the current TransferAsset instruction already has quite some layers: differentiating between local and external transfers, adding filters, and losing track of whether a transfer comes from an extrinsic or an internal token movement, as the executor works on traits.


  • Have you already roughly estimated what the complexity of an executor on Ethereum would be?
    I recall that Snowfork did not implement a SCALE codec due to the additional computational costs, hence I assume that an executor would be quite costly.

I don’t see why this adds anything:

  • In the end this will always result in logic that is implemented in the runtime. Runtime-APIs already enable this.
  • If the wasm-blob is the application of a chain, runtime-APIs are exposed capabilities of this application
  • Each chain will need a “handle_xcm_msg” API, and XCM has a bunch of functionality that will be rejected here.
    • And what would be the return type of this API?

  • Could you elaborate more on what you mean with those?
  • Are those functions pallet-based?

As @kianenigma pointed out, the balance of a user will in the future not be based solely on what is currently associated with the balance in the AccountData, but rather on all balances that can be associated with the given account. In the end, I think this needs to be handled by the runtime, as it is the only place that is aware of all pallets in use.

Unless we enforce each pallet to implement something like: balance_of(who).


One of the goals of the Wasm view functions is that we can define a Wasm interface (similar to a runtime API) and clients can invoke it on a Wasm blob to query information. Then we can define a common interface as part of the token standards and have every parachain ship a conforming Wasm blob; then we can build a generalized wallet without any chain-specific code.



  • what would be the difference from runtime APIs?
  • would those be implemented in the runtime or in pallets? Both possible?

It is another wasm blob and not part of the runtime.
This means:

  • It can be shipped/upgraded independently of the on-chain runtime
  • It can be implemented by other teams without approval from the chain team

I thought about this in the last day a bit and wondered what you think about the following approach:

  • The extrinsic of the chains (that want to enable “interface”-like calls) will be changed as follows:
    pub trait Derivation<Call> {
        const MAX_ARG_LENGTH: u32;
        fn derivative(
            location: Multilocation,
            arguments: BoundedVec<u8, ConstU32<Self::MAX_ARG_LENGTH>>,
        ) -> Result<Call, TransactionValidityError>;
    }

    pub enum Function<Call, Max: Get<u32>> {
        /// A regular runtime call.
        Call(Call),
        /// An "interface" call, derived into a runtime call at check time.
        Interface {
            location: Multilocation,
            arguments: BoundedVec<u8, Max>,
        },
    }

    /// An extrinsic right from the external world. This is unchecked and so
    /// can contain a signature.
    #[derive(PartialEq, Eq, Clone)]
    pub struct UncheckedExtrinsic<Address, Call, Signature, Extra, Derivator>
    where
        Extra: SignedExtension,
        Derivator: Derivation<Call>,
    {
        /// The signature, address, number of extrinsics that have come before from
        /// the same signer and an era describing the longevity of this transaction,
        /// if this is a signed extrinsic.
        pub signature: Option<(Address, Signature, Extra)>,
        /// The function that should be called.
        pub function: Function<Call, ConstU32<Derivator::MAX_ARG_LENGTH>>,
        _phantom: PhantomData<Derivator>,
    }

    impl<Address, AccountId, Call, Signature, Extra, Lookup, Derivator> Checkable<Lookup>
        for UncheckedExtrinsic<Address, Call, Signature, Extra, Derivator>
    where
        Address: Member + MaybeDisplay,
        Call: Encode + Member,
        Signature: Member + traits::Verify,
        <Signature as traits::Verify>::Signer: IdentifyAccount<AccountId = AccountId>,
        Extra: SignedExtension<AccountId = AccountId>,
        AccountId: Member + MaybeDisplay,
        Lookup: traits::Lookup<Source = Address, Target = AccountId>,
        Derivator: Derivation<Call>,
    {
        type Checked = CheckedExtrinsic<AccountId, Call, Extra>;

        fn check(self, lookup: &Lookup) -> Result<Self::Checked, TransactionValidityError> {
            Ok(match self.signature {
                Some((signed, signature, extra)) => {
                    let signed = lookup.lookup(signed)?;
                    let raw_payload = SignedPayload::new(self.function, extra)?;
                    if !raw_payload.using_encoded(|payload| signature.verify(payload, &signed)) {
                        return Err(InvalidTransaction::BadProof.into());
                    }
                    let (function, extra, _) = raw_payload.deconstruct();
                    let call = match function {
                        Function::Call(call) => call,
                        Function::Interface { location, arguments } =>
                            Derivator::derivative(location, arguments)?,
                    };
                    CheckedExtrinsic { signed: Some((signed, extra)), function: call }
                }
                None => {
                    let call = match self.function {
                        Function::Call(call) => call,
                        Function::Interface { location, arguments } =>
                            Derivator::derivative(location, arguments)?,
                    };
                    CheckedExtrinsic { signed: None, function: call }
                }
            })
        }
    }
  • The runtimes expose “interface” specific runtime-APIs (and wasm view functions)

Each “standard” must just agree on how to define a Multilocation and associated arguments.
E.g. for ERC-20

  • Multilocation:
    Multilocation {
        parents: 0,
        interior: Junctions::X2(
            // Defining the standard. 20 for ERC-20
            Junction::GeneralIndex(20),
            // Defining the "Address" or symbol of the respective token on this chain
            Junction::GeneralKey(symbol),
        ),
    }
  • Arguments: recv: [u8; 32], amount: u128

I am not sure if Multilocation is the right type here, but it feels usable and could also increase compatibility when creating XCMs from the “interface” calls.


@shawntabrizi any reply to the above, or is the case for XCM already closed? Then I would close this here.

Nothing is closed here for sure.

What is the best way that I or other developers / the broader Polkadot community can help you here?

Do you need a platform to have this discussion? For example a Twitter Space or Substrate Seminar?


What is the best way that I or other developers / the broader Polkadot community can help you here?

  • Carrying on the discussion here would be great
    Still interested in hearing some counter arguments to the stuff I said above

  • Bringing together the relevant actors would be great (Parity, parachain-teams that are interested, wallet providers)
    This only makes sense if there is a will from the parties to agree on a compromise. Which format makes sense here, whom to include, and what preparation is needed, I really don’t know

@shawntabrizi any updates?

One more thing to consider regarding the standardization.
The Transact instruction is based on the SCALE encoding of the Call enum of the receiving chain. For general-purpose bridges (e.g. BridgeHub) this is slightly problematic, as the chains would

  • Need a way to tell BridgeHub which call to forward messages to (if not all chains have the receiving call at the same location in the enum and with the same arguments)

Having “Interface”-like calls at the enum Call level would alleviate this problem. The approach above does not work there, as Transact works below the extrinsic level.

Kinda talking to myself here. But am still thinking about this and to summarize here a little:

Important for a Solution

  • XCM compatibility
  • Focused on external users – wallets, exchanges, (light-)clients
  • Independent of chain-specific Call enums – omits the need to fetch/parse a chain’s metadata

Possible Solution - Take 3

  • Extend construct_runtime!

     pub enum Runtime where
         Block = Block,
         NodeBlock = cfg_primitives::Block,
         UncheckedExtrinsic = UncheckedExtrinsic,
         // Optional field. If used, then the Call variant number 255 MUST
         // NOT be used by a real pallet, but will be populated
         // with the interface.
         Interface = Interface
  • Define a trait Interface that satisfies the needs for the global pub enum Call

    • Among others impl UnfilteredDispatchable, GetCallMetadata, GetDispatchInfo


  • Can be called via extrinsics
  • Can be called via Transact
  • Chains can decide which “protocol” to support
  • Chains can decide to implement their own protocol
  • External parties only need to submit extrinsics to this call variant

I opened a proposal in the repo. See Interface like access to runtimes · Issue #13362 · paritytech/substrate · GitHub.

re: XCM, I think that the approach of “Dialects” through Xcm::Transact could be a viable approach to interfaces: XCM Dialects: plugging into XCM to create standardized and predictable API surfaces

Runtime APIs aren’t trivial to version or engage with via RPC. Regardless of the approach, I think it would be nice to have crates published which can define interfaces that are plug & play with construct_runtime!.

My main approach here is not to standardize the API between chains – in the first place at least – but rather the API for third parties integrating with chains.

Although, having the two being the same would be a great advantage.

@rphmeier could you take a look at the idea I drafted in issue noted above? Would be great to get your feedback there.

re: the runtime API. I currently do not see any other way to standardize querying a chain’s state, similar to view functions in Ethereum. And at least from my perspective, runtime APIs are great for that, as light clients interact well with them, to the best of my knowledge.

I suppose I am missing some context on why this must be embedded in the runtime per se - @xlc’s approach of view functions allows 3rd-party integrations to be built on top of the runtime, so you have a diagram like

[runtime] <- view functions / storage reads <- interface implementation <- interface 

where only the interface is standardized, but everything behind it can vary from chain to chain or release to release.

Storage reads are less than ideal, but even view functions or Call-construction functions don’t need to be standardized themselves as long as there is some glue between them and the interface itself. The interface implementation would live in JS or Rust, for instance.

Coalescing around view functions / call-construction functions is OK too, if minimizing the stack depth is desirable. Although I wouldn’t want to see that leak into the actual UnsignedTransaction type; call construction should happen outside of the runtime IMO.

The latest conversation in the GitHub issue seems aligned with that, so seems ok to me.

but even view functions or Call -construction functions don’t need to be standardized themselves as long as there is some glue between them and the interface itself.

They don’t have to be, that is right. But it helps a lot if they are. From my knowledge and observation – arguably an inferior one – of what makes EVM smart contracts so successful, a big part is that every smart contract can implement a given interface, every integration with this interface can be re-used, and that it all lives on-chain. Be it third-party, on-chain, or cross-chain.

where only the interface is standardized, but everything behind it can vary from chain to chain or release to release.

I get the point, but at least currently, this would mean every third party must agree on using the same implementation for this to be useful. Already now, each third party can integrate with any chain, but due to the differences between pallets and Call enums this is overly complex, time-consuming and costly.

And, for example I am pretty sure, that something like Fireblocks will not rely on any third party doing call-creation or abstraction work via a JS/Rust library but rather will want to provide allowlisting for specific calls, just like for contracts on the Ethereum side.

TBH, I have no strong feelings about HOW this is done; all that matters to me is that it becomes as easy and stable as possible to interact with any chain without having dedicated knowledge about how the chain works.
I do think it is really important that this “interface” is implemented at the runtime level in order to guarantee that a call to a given interface returns the intended result. But I am open to being convinced of the opposite.
