Polkadot Ecosystem Tests

TLDR

New tests to ensure XCM works.
Repo: open-web3-stack/polkadot-ecosystem-tests (Polkadot Ecosystem Tests powered by Chopsticks)

Background

Previous discussions:

Treasury bounty:

Overview

At Acala, we have developed e2e-tests to catch XCM issues. They have successfully identified multiple XCM issues, some before the runtime upgrade happened. The Core Fellowship has also requested such a test setup to catch issues in new runtimes. So we have refactored and redesigned them as the new Polkadot Ecosystem Tests, which aim to be suitable for the Core Fellowship as well as other parachain teams to use and contribute to.

This aims to fulfill the Ecosystem Test Environment Bounty.

Goals

  • Covers common XCM use cases (paritytech/polkadot-sdk#3097 (XCM regression tests) will help; otherwise coverage will require manual inspection)
  • Chain agnostic, so it is easy to add new parachains
  • Easy to add new tests
  • Easy to maintain tests
  • Uses the latest mainnet data, so it is able to catch production issues
  • Able to verify that a new runtime is compatible before upgrading to it
  • Permissionless setup so that the community and all parachain teams are able to trigger test runs and subscribe to test failure notifications

Current Status

Supports 12 networks, 8 network pairs, and 24 tests in total.
GitHub Actions is used to run the tests.
Tests are periodically rerun with the latest blocks.

Next Step

Before we continue to develop new tests, I would like to request a review from the community for feedback and suggestions.

Here are some specific questions:


This looks really good, thanks Bryan!

I believe the most useful feedback will surface once integrations start.

A low-hanging fruit is for teams to migrate (or even duplicate) their existing tests to this common framework where things are tested “together”.

I’ll try to find someone to do this for the system-chains soon :+1:


Very nice initiative! I will find time next week to add some XCM tests for our chains and their open XCM channels.


Would love to see a test that ensures the correctness of all reserve-backed assets, if possible: “The amount of funds locked in the source chain being equal to the issuance of the wrapped equivalent token on the destination chain”.


It is something that can be done, but maybe not as part of this. I see it more as an ongoing monitoring / consistency check, rather than part of the test suites. Besides, it does not hold true 100% of the time (XCM processing has delays, so the values will mismatch after an XCM is sent but not yet processed on the destination chain), and that would make the tests either complicated or flaky. A proper monitoring solution can be configured to report an error only if the error state persists for more than a few minutes, and I don’t want to reimplement all that logic here.


As a basic test this sounds reasonable? Shouldn’t this be very easy to set up? Like you send the XCM message and then ensure in a post-check that the issuance is correct?

For sure this should be some constant monitoring as well, but as some sort of “smoke test” I think this makes sense.
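The post-check discussed here could be factored out as a small pure helper. The sketch below is purely illustrative; the function name, signature, and basis-point tolerance scheme are my own assumptions, not part of the repository:

```typescript
// Hypothetical sketch of a reserve-backed asset consistency check.
// `sovereignBalance`: amount locked in the reserve chain's sovereign
// account for the destination chain. `wrappedIssuance`: total issuance
// of the derivative token on the destination chain. A small tolerance
// (in basis points) allows for in-flight XCM that is processed on one
// side but not yet on the other.
export function reserveIsConsistent(
  sovereignBalance: bigint,
  wrappedIssuance: bigint,
  toleranceBps: bigint = 0n,
): boolean {
  const diff = sovereignBalance > wrappedIssuance
    ? sovereignBalance - wrappedIssuance
    : wrappedIssuance - sovereignBalance
  // measure tolerance relative to the larger side
  const base = sovereignBalance > wrappedIssuance ? sovereignBalance : wrappedIssuance
  return diff * 10_000n <= base * toleranceBps
}
```

Both values would have to be fetched from the chopsticks-backed clients at the same logical point in time; with a strict `toleranceBps` of 0 the check only makes sense once all queued XCM has been processed.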

The current tests already assert the following:

  • the source account balance on the origin chain is reduced
  • the destination account balance on the destination chain is increased
  • there are UMP/DMP/HRMP events with the expected shape

You can manually inspect the snapshot files, e.g. packages/polkadot/src/__snapshots__/acala.assetHubPolkadot.test.ts.snap in open-web3-stack/polkadot-ecosystem-tests at 0ea04ff6674ef744d29cd8b694fc45b3daef7922.

Note that we round numbers and redact some values (e.g. the XCM topic ID) to avoid flaky tests caused by onchain changes such as the tx fee factor.
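To illustrate the kind of rounding involved, here is a hypothetical sketch; the helper name and signature are my own, not the repository’s actual normalization code:

```typescript
// Hypothetical sketch: round a balance to a few significant digits so
// that small drifts (e.g. from a changed tx fee factor) do not break
// snapshot comparisons, while large regressions still show up.
export function roundForSnapshot(value: bigint, significantDigits: number = 3): bigint {
  const digits = value.toString().length
  if (digits <= significantDigits) return value
  // drop everything below the leading `significantDigits` digits
  const scale = 10n ** BigInt(digits - significantDigits)
  return (value / scale) * scale
}
```

A serializer like this can be plugged into vitest via `expect.addSnapshotSerializer` so every recorded balance is normalized the same way.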

What’s missing is to assert that the reserve chain has the corresponding tokens. This check is tricky because:

  • We modify storage directly to gain some tokens for testing, so there will be some number mismatch. It is possible to offset those numbers, though.
  • The tests run against mainnet head data, which could be inconsistent for the reason I stated previously, i.e. if we run the test at a block where someone is sending an XCM transfer that has been processed on the source chain but not yet on the reserve chain, the numbers will mismatch and the test will fail.

Good point. Maybe we should also support running from a clean state?

I finally spent some time reviewing your work.

  • Do you think it is easy to add a new chain? e.g. a chain def for Acala
    There should be one chain that uses the most available features, with comments explaining what does what. Otherwise it looks okay.

  • Do you think it is easy to add new tests? e.g. tests between Acala and Moonbeam

    Get rid of:

    afterAll(async () => {
      await acalaClient.teardown()
      await moonbeamClient.teardown()
      await polkadotClient.teardown()
    })
    

    Also:

    const restoreSnapshot = captureSnapshot(assetHubPolkadotClient, acalaClient, polkadotClient)
    
    beforeEach(restoreSnapshot)
    
    • Add an example showing how to send a custom XCM message using any of the prepared helpers.
    • Add some multi-hop examples.
    • Maybe one folder per test? Otherwise this one file may get quite big over time.
  • I designed a runner bot based on GitHub Actions and Issues; take a look and see if it will work for you: open-web3-stack/polkadot-ecosystem-tests#2 (Command runner action)
    Looks good to me.

    Would be good to show how to run a test with a custom WASM binary

  • I plan to use GitHub notifications to deliver test failure notifications. Let me know what you think: open-web3-stack/polkadot-ecosystem-tests#5 (Notification System)
    Yes, looks good to me.

Generally I think it is a good start and we can improve from there.
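On the custom XCM message point above, one way to start is to build the message as a plain object, separate from any chain interaction. The sketch below is my own assumption, not the repository’s API; the object roughly follows the XCM v3 JSON shape that `polkadotXcm.send` accepts on many runtimes, but it should be checked against the target runtime’s metadata:

```typescript
// Hypothetical sketch: construct a raw XCM v3 program as a plain object.
// No chain interaction happens here; the result would be passed to a
// send/execute extrinsic by the test harness.
export function buildCustomXcmMessage(amount: bigint, beneficiary: string) {
  // the local native asset, identified relative to the executing chain
  const nativeAsset = {
    id: { Concrete: { parents: 0, interior: 'Here' } },
    fun: { Fungible: amount },
  }
  return {
    V3: [
      // move `amount` of the native asset from the origin into holding
      { WithdrawAsset: [nativeAsset] },
      // pay execution fees out of the withdrawn asset
      { BuyExecution: { fees: nativeAsset, weightLimit: 'Unlimited' } },
      // deposit whatever remains into the beneficiary account
      {
        DepositAsset: {
          assets: { Wild: 'All' },
          beneficiary: {
            parents: 0,
            interior: { X1: { AccountId32: { id: beneficiary, network: null } } },
          },
        },
      },
    ],
  }
}
```

Keeping message construction pure like this also makes it snapshot-friendly: the object can be asserted against directly, before it is ever submitted.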


It would also be nice to have an example of how to send a transaction (that internally triggers XCM messages).

How would this integrate into the Fellowship? Do we have to fork it there?
And how would new tests be added: to the upstream repo, and then by syncing the fork?

Before we continue to develop new tests, I would like to request a review from the community for feedback and suggestions.

I would just like to try it on the runtimes master branch; that should give some feedback on where to improve, since it looks good in general.

The repo will move to the fellowship. It is just kept with the team right now, because that simplifies a lot in the early days.


But what is a clean state? A previous known-good state? That wouldn’t catch problems introduced by state changes (e.g. if an asset’s metadata is changed).

It should be relatively easy to have a GH Action that builds the wasm, clones this repo, and runs the tests using the new wasm.

Update: We have implemented the runner bot to allow anyone to trigger a test run.