ILE Labs: Substrate Testing Toolkit - $30K Request

Polkadot Grant Proposal


1. Executive Summary

Project name: Substrate Testing Toolkit

One sentence pitch: A developer-first testing framework that brings Hardhat-style rapid iteration to Substrate pallet development through local runtime simulation and integrated debugging.

Problem: Right now, testing Substrate pallets is fragmented and slow. Developers bounce between unit tests (no runtime context), Zombienet (takes minutes to spin up), and manual testnet deployments. There’s no middle ground—no fast, local way to test pallets with runtime behavior before committing to a full network simulation. This slows down iteration and makes onboarding new developers unnecessarily hard.

Solution: We’re building a lightweight testing harness that runs Substrate runtimes locally with deterministic block production. Think of it as Foundry’s anvil but for Substrate—instant startup, reproducible tests, full runtime context. Add WASM-level debugging (breakpoints, variable inspection) and CI/CD templates, and you’ve got a complete testing workflow that doesn’t exist today.

Impact: Parachain teams spend less time fighting tooling and more time shipping features. New developers can write and test their first pallet in hours instead of days. Runtime upgrades (like the Revive launch on Jan 20, 2026) get validated automatically through CI before hitting production.

Funding request: $30,000 over 14 weeks, milestone-based payments in DOT.


2. Problem Statement

2.1 Current State of Substrate Testing

Testing a Substrate pallet today looks like this:

Unit tests: You write standard Rust tests with the #[test] attribute. They’re fast but isolated—no runtime, no storage, no real execution context. Good for pure logic, useless for anything touching chain state.

Zombienet: Full network simulation. You write TOML configs, spin up multiple nodes, wait 3-5 minutes for setup, then run tests. It’s thorough but overkill for most development. Developers don’t want to simulate a network every time they change a line of code.

Manual testnet deployments: Deploy to Rococo or a custom testnet, call extrinsics manually, check logs. Slow, expensive, not reproducible. You’re burning time and testnet tokens just to verify basic functionality.

What’s missing: A fast, local runtime simulator that gives you real execution context without the network overhead. Something you can run 50 times an hour while developing. That’s the gap.

2.2 Evidence of Pain

From the Polkadot Forum (2023 thread, still unresolved):

“Many teams have needs for testing their work that often overlap… compiling a list of tools and frameworks”

Translation: People keep building one-off solutions because there’s no standard answer.

Polkadot Strategic Development Report (Jan 2025):

“Polkadot’s critical usability gap… compared to chains like Solana and Base, Polkadot should aggressively amplify its advantages”

Developers coming from Ethereum or Solana expect fast tooling. They get frustrated with Substrate’s learning curve and go elsewhere.

Real-world example: A parachain developer I talked to last month said their team avoids writing integration tests because Zombienet setup is “too annoying for quick iteration.” They test in production instead. That’s a tooling failure.

GitHub reality check: Look at Substrate repos—most example pallets have basic unit tests and nothing else. Not because developers are lazy, but because integration testing is too painful with current tools.

This results in slower development cycles, higher onboarding friction, and more bugs reaching production because comprehensive testing is impractical.


3. Proposed Solution

3.1 Core Toolkit Features

Feature 1: Local Runtime Simulator

What it does: Spins up a minimal Substrate runtime in-process, no network stack, no P2P, just execution. Mock block production runs deterministically (same inputs = same outputs every time).

Why it matters: You get from “cargo test” to seeing your pallet execute in a real runtime in under 2 seconds. No config files, no Docker, no waiting.

How developers use it:

#[test]
fn test_pallet_transfer() {
    let mut runtime = TestRuntime::new();
    runtime.execute_block(|block| {
        block.call(MyPallet::transfer(alice, bob, 100));
    });
    assert_eq!(runtime.balance(bob), 100);
}

Feature 2: WASM Runtime Debugger

What it does: Injects breakpoints into WASM bytecode, captures execution state (stack, storage, events), exposes it through Debug Adapter Protocol (DAP) for VS Code integration.

Why it matters: Right now, debugging Substrate runtime code means sprinkling log::info!() calls everywhere and rebuilding. With breakpoints, you see exactly what’s happening at each step—no guessing.

How developers use it: Set a breakpoint in VS Code, hit F5, step through your pallet logic while inspecting storage values and call stacks. Standard debugging experience.

Feature 3: Assertion Helpers & Test Fixtures

What it does: Pre-built macros for common checks—assert_event_emitted!(), assert_storage_value!(), assert_extrinsic_success!(). Template accounts and balances you can clone for tests.

Why it matters: Reduces boilerplate. Instead of manually checking runtime state, you express intent clearly.

Example:

assert_event_emitted!(MyPallet::Transfer { from: alice, to: bob, amount: 100 });

Feature 4: CI/CD Templates

What it does: GitHub Actions workflows for runtime upgrade validation. Pre-commit hooks for pallet linting. Ready-to-copy YAML configs.

Why it matters: Teams can automate testing without becoming CI experts. Catch breaking changes before merge.

How it works: Copy .github/workflows/substrate-test.yml into your repo, configure once, forget about it. Every PR runs your test suite automatically.
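As a hedged sketch of what the shipped template might contain (the install step, crate name, and workflow contents are assumptions for illustration, not the final file):

```yaml
# .github/workflows/substrate-test.yml -- illustrative sketch, not the shipped template
name: pallet-tests
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - name: Install toolkit (hypothetical crate name)
        run: cargo install substrate-test
      - name: Run pallet test suite
        run: substrate-test run   # exit code 0 = pass, 1 = fail
```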

Feature 5: CLI Test Orchestration

What it does: A substrate-test binary that discovers tests, runs them with the local runtime, reports results.

Why it matters: Single command to validate everything: substrate-test run. No custom scripts, no make targets, just standard tooling.

3.2 Architecture Overview

Core engine: Rust crate (substrate-test-runtime) that wraps frame_system and provides a minimal executor. Imports your pallet, builds a test runtime, exposes a clean API.

CLI wrapper: Binary (substrate-test command) that handles discovery, parallel execution, reporting. Uses clap for args, tokio for async operations.

Debugger bridge: Separate crate (substrate-debug-adapter) implementing DAP protocol. Talks to the WASM executor, exposes breakpoint hooks, communicates with VS Code over WebSockets.

VS Code extension: TypeScript package that registers the debugger, provides UI for setting breakpoints, displays runtime state.

Repository structure:

substrate-testing-toolkit/
├── runtime/          # Core test runtime executor
├── cli/              # substrate-test binary
├── debug-adapter/    # DAP server for WASM debugging
├── vscode/           # VS Code extension
├── templates/        # CI/CD workflow templates
└── examples/         # 10+ example pallets with full test suites

Integration points:

  • Uses sp-runtime, frame-support, pallet-balances (standard Substrate crates)

  • Compatible with any FRAME-based pallet

  • Works alongside existing unit tests (additive, not replacement)

  • RPC-compatible for external tooling (optional feature)


4. Technical Design

4.1 Stack

Core:

  • Rust (nightly for WASM features, stable for everything else)

  • substrate-wasm-builder for runtime compilation

  • tokio for async runtime in CLI

  • wasmtime for WASM execution and instrumentation

CLI:

  • clap v4 for argument parsing

  • indicatif for progress bars

  • serde_json for test result serialization

Debugger:

  • tungstenite for WebSocket server (DAP protocol)

  • Custom WASM instrumentation (inject breakpoint opcodes)

  • serde for DAP message serialization

VS Code Extension:

  • TypeScript + VS Code Extension API

  • DAP client library

  • Standard debugging UI (no custom widgets needed)

Documentation:

  • mdbook for guides (same tooling as Rust book)

  • rustdoc for API reference

  • Markdown for templates and examples

4.2 Development Approach

Modular design: Each component (runtime, CLI, debugger) is a separate crate with its own tests. You can use the runtime library without the CLI, or the CLI without the debugger. No forced bundling.

Test-driven development: We write tests before implementation. Meta, but necessary—if our testing tool has bugs, that’s embarrassing. Target >85% coverage.

Open source licensing: Dual MIT/Apache 2.0 (same as Substrate). No CLA required for contributions. Public roadmap via GitHub Discussions.

Documentation strategy:

  • Quickstart gets you testing in <10 minutes

  • API reference covers every public function

  • Video walkthroughs for complex workflows (debugging, CI setup)

  • Example repo with 10+ pallets showing different patterns

Incremental delivery: Each milestone produces a working artifact you can demo. No “70% done but nothing works yet” situations.

4.3 Compatibility

With Substrate versions: We’ll support the latest stable Substrate release at launch. Pin to specific versions, test against them explicitly. When new Substrate versions ship, we update within 2 weeks.

With existing projects: Zero config for standard FRAME pallets. If your pallet compiles, it works with our toolkit. For custom runtimes, you might need to tweak the test runtime config—we provide examples.

With CI pipelines: GitHub Actions templates work out of the box. For GitLab/CircleCI/Jenkins, we document the equivalent setup. The CLI returns standard exit codes (0 = pass, 1 = fail), so any CI system can use it.

Future-proofing for runtime upgrades: The test runtime config is versioned. When breaking changes happen (like Revive), we release a new version of the toolkit with updated configs. Users upgrade at their own pace—old versions keep working.


5. Ecosystem Impact

5.1 Developer Experience Improvements

Faster iteration: Current workflow: Change code → rebuild runtime → deploy to testnet → call extrinsic → check logs. 5-10 minutes.

New workflow: Change code → substrate-test run. 5 seconds.

That’s roughly a 60-120x speedup for the feedback loop. Developers can experiment more, try edge cases, and refactor confidently.

Easier onboarding: New developers learning Substrate hit a wall at testing. Unit tests are too limited, Zombienet is overwhelming. Our toolkit sits in the middle—powerful enough to be useful, simple enough to learn in an afternoon.

Expected impact: Reduce time-to-first-working-pallet from 2-3 days to 4-6 hours (based on onboarding workshops we’ve run before).

Reduced testing complexity: No more maintaining custom test harnesses. No more one-off scripts. Standard toolkit means shared knowledge—when a developer joins a new parachain team, the testing setup is familiar.

5.2 Long-Term Ecosystem Benefits

Safer runtime upgrades: Every runtime upgrade is a risk. With automated CI testing, teams can validate upgrades against their existing test suites before deploying. Catch regressions early.

The Revive upgrade (Jan 20, 2026) is a perfect test case. Parachain teams will need to validate EVM compatibility changes—our CI templates will make that automatic.

More production-ready projects: Right now, projects skip integration testing because it’s too hard. With easy tooling, they’ll test more thoroughly. Fewer bugs reach mainnet. Users have better experiences.

Stronger tooling ecosystem: When core testing infrastructure exists, other tools can build on it. Think linters that use our runtime executor, monitoring tools that replay blocks for analysis, migration helpers that validate state transitions. Good foundations enable innovation.

5.3 Composability

With Chopsticks: Chopsticks lets you fork and replay mainnet state. Our toolkit focuses on local development. They’re complementary—use our tool for fast iteration during development, then validate with Chopsticks before deploying.

Potential integration: Our CLI could have a --fork flag that uses Chopsticks under the hood for integration testing against real state.

With Zombienet: Zombienet tests network-level behavior (consensus, XCM, multi-chain scenarios). Our toolkit tests individual pallets. Most teams need both.

Workflow: Develop with our toolkit (fast iteration), validate with Zombienet (full network simulation) before releases.

With Moonwall: Moonwall tests EVM smart contracts on Moonbeam. Our toolkit tests Substrate pallets. After Revive launches, developers building hybrid apps (EVM + native pallets) can use both tools in the same CI pipeline.

With existing Substrate tools: We’re not replacing anything. We use standard Substrate crates (frame-support, sp-runtime), so everything integrates naturally. Your existing code, configs, and workflows keep working.


6. Adoption Strategy

Documentation plan:

First 2 weeks post-launch, we publish:

  • Quickstart (10 minutes from install to first test)

  • API reference (every public function documented)

  • Migration guide (converting unit tests to integration tests)

  • Troubleshooting FAQ (common errors and fixes)

Hosted on GitHub Pages, searchable, versioned alongside releases.

Tutorials:

5 video walkthroughs (5-8 minutes each):

  1. “Installing and running your first test”

  2. “Debugging a pallet with breakpoints”

  3. “Setting up CI for runtime upgrades”

  4. “Testing XCM interactions locally”

  5. “Advanced: Custom test runtime configuration”

Videos go on YouTube, embedded in docs. Closed captions for accessibility.

Example projects:

We create 10 real-world example pallets:

  • Simple token transfer (onboarding friendly)

  • Governance module with voting

  • NFT pallet with minting/trading

  • DeFi primitive (staking or liquidity pool)

  • XCM integration example

  • Access control patterns

  • Event-driven workflows

  • Storage migration testing

  • Fee payment alternatives

  • Batch operations

Each has a full test suite showing different toolkit features. Developers can clone and adapt.

Workshops and demos:

We’ll run 2-3 live workshops in the first 6 months:

  • Polkadot Decoded (if timeline works)

  • Online session for Asian timezone developers

  • Recording published for async learners

Format: 90 minutes, hands-on coding, Q&A at the end.

Community engagement:

We’ll be active where Substrate developers already are:

  • Substrate StackExchange (answer testing questions, link to toolkit)

  • Polkadot Forum (dedicated thread for feedback and feature requests)

  • Element chat (quick support, bug reports)

Monthly “office hours” on Discord—open call where anyone can ask questions or demo problems.

Discovery mechanisms:

How developers find the toolkit:

  1. Polkadot Grants Program announcement (built-in visibility)

  2. Submission to substrate.io/developers page

  3. Integration into Substrate template repos (if maintainers approve)

  4. Blog post on Polkadot blog (we’ll pitch it)

  5. Developer newsletter mentions

  6. Word-of-mouth from early adopters (parachain teams we’ll contact directly)

Success indicators we’ll track:

  • Crate downloads (crates.io analytics)

  • GitHub stars and forks

  • Tutorial video views

  • StackExchange questions mentioning the toolkit

  • PRs from external contributors

Target: 50+ active users within 3 months, 200+ within 6 months.


7. Milestones and Deliverables

M1: Test Runtime Core (Weeks 1-3, $8,000)

Deliverables:
  • Runtime executor crate
  • Mock block production
  • Basic assertion helpers
  • 15+ unit tests

Acceptance criteria:
  • cargo test passes on a fresh checkout
  • Can execute a simple extrinsic in the test runtime
  • Published to crates.io with >50% coverage
  • README with a quickstart example

M2: CLI & Local Simulation (Weeks 4-6, $7,500)

Deliverables:
  • substrate-test CLI binary
  • Test discovery and execution
  • Deterministic block production
  • 5 example pallets with tests

Acceptance criteria:
  • Binary runs on macOS/Linux/Windows
  • Executes 20+ tests in <10 seconds
  • Exit codes work with CI systems
  • Video demo showing the full workflow

M3: WASM Debugger (Weeks 7-10, $8,000)

Deliverables:
  • WASM instrumentation for breakpoints
  • DAP server implementation
  • VS Code extension alpha
  • Debug session demo

Acceptance criteria:
  • VS Code can connect to the DAP server
  • Breakpoints hit correctly
  • Variable inspection shows storage state
  • Step-through execution works
  • VSIX installs without errors

M4: CI/CD & Documentation (Weeks 11-14, $6,500)

Deliverables:
  • 5 GitHub Actions templates
  • Pre-commit hook examples
  • 50+ page guide (mdbook)
  • 5 video tutorials
  • 10 example projects

Acceptance criteria:
  • CI templates run successfully in a test repo
  • All docs published and searchable
  • Videos uploaded with captions
  • Examples cover common patterns
  • Beta workshop completed with >5 participants
Total: $30,000 over 14 weeks

Milestone payment structure:

  • Each milestone paid upon completion and curator approval

  • Deliverables submitted as GitHub releases with demo videos

  • Acceptance criteria verified by running provided test scripts

Risk buffer: We’ve built a 10% time buffer into each milestone. If M1 takes 3.5 weeks instead of 3, we absorb it. We’ll only request an extension if something fundamentally changes in Substrate that breaks our approach.


8. Success Metrics

Adoption metrics (6 months post-launch):

Conservative targets:

  • 50+ crate downloads (crates.io analytics)

  • 30+ GitHub stars

  • 15+ projects using toolkit (discovered via dependencies)

  • 5+ external contributors (non-team PRs or issues)

  • Integration into 1+ official Substrate template

Stretch targets:

  • 200+ downloads

  • 100+ stars

  • 3+ parachain teams using in production CI

  • Featured in Polkadot newsletter or blog

Quality metrics:

  • <5 critical bugs reported in first month

  • 80% code coverage across all crates

  • <48 hour response time on GitHub issues

  • All documentation examples work on latest Substrate

Ecosystem impact indicators:

  • Questions on StackExchange mentioning the toolkit

  • Tutorials or blog posts from community members

  • Forks adapting the toolkit for specific use cases

  • Reduction in “how do I test my pallet?” forum posts

Measurement methods:

  • crates.io provides download counts

  • GitHub Insights tracks stars, forks, traffic

  • We’ll add optional anonymized telemetry (opt-in only, respects privacy)

  • Community surveys at 3 and 6 months asking about testing workflows

What we won’t measure:

We won’t track vanity metrics like Twitter mentions or general “awareness.” What matters is whether developers actually use the toolkit and find it helpful. Downloads and GitHub activity tell that story.


9. Team Background

ILE Labs builds blockchain developer tools. We’ve shipped production tooling for Arbitrum, built cross-chain infrastructure (VoxBridge), and contributed to multiple ecosystems. We know how to turn complex protocols into usable developer experiences.

Charles Emmanuel – Founder, lead on this project

  • 6 years Rust (built production systems, not just tutorials)

  • Prior work: Arbitrum CLI tooling (used by 100+ developers), OX Rollup utilities

  • Substrate experience: Built 2 custom pallets for internal projects, familiar with FRAME macros and runtime construction

  • Open source contributor: All our work is public, MIT/Apache licensed

  • GitHub: CodexEmmzy (Charles Emmanuel)

Stephen Ifeadi – Core developer, WASM specialist

  • 4 years Rust systems programming

  • Deep WASM knowledge (built custom runtimes, debugged bytecode manually)

  • Performance optimization background (profiling, instrumentation)

  • Will handle: WASM debugger implementation, runtime executor optimization

Rotsimi Olashindé – Developer experience, documentation

  • 3 years TypeScript/Rust hybrid projects

  • VS Code extension development (published 2 extensions with 1K+ installs)

  • Technical writing (created onboarding docs for 3 blockchain projects)

  • Will handle: VS Code extension, tutorials, example projects, community support

Why we’re a good fit:

We’re not trying to become Substrate core contributors. We’re tool builders who understand developer pain because we’ve felt it. The testing gap is something we’ve complained about ourselves—now we’re fixing it.

We’ve done this before. The Arbitrum CLI work started the same way: noticed a gap, built a tool, shipped it, maintained it. Same process here.

We’re fast. Small team, no bureaucracy, daily standups, ship every week. 14 weeks is realistic because we don’t waste time.

Contact:

We respond within 24 hours. If curators have questions during review, we’ll clarify immediately.


10. Budget Breakdown

Core Development ($21,000): 500 hours engineering (Charles 200h, Stephen 180h, Rotsimi 120h) at a blended $60/hr rate (30% below market for open source work)

QA & Testing ($2,500): 50 hours of cross-platform testing, CI validation, and security review

Documentation ($2,000): Video production, mdbook setup, example project creation

Infrastructure ($500): Domain registration, code signing certificates for binaries, CI runners if the GitHub free tier is insufficient

Community Engagement ($1,000): Workshop materials, promotional graphics, community management time

Contingency ($3,000): 10% buffer for scope adjustments, unexpected Substrate changes, extended QA if needed

Total: $30,000

How funds are used:

70% goes to core engineering. We’re building software, not running marketing campaigns.

The documentation budget covers video editing software, screen recording tools, and hosting costs. We’re not hiring professional videographers—team members will record and edit tutorials themselves.

Contingency exists because blockchain tooling is unpredictable. If Substrate ships a breaking change mid-project, we need budget to adapt. If everything goes smoothly, unused contingency stays unspent.

Why this budget is realistic:

Market rate for senior Rust developers: $80-120/hr. We’re charging a blended $60/hr (some tasks are simpler than others). That’s a 25-50% discount because this is open source ecosystem work, not client services.

Compare to similar grants: Testing frameworks typically request $25-50K. We’re at the lower end while delivering more scope. How? Small team, no overhead, efficient execution.

Payment schedule:

Milestone-based as outlined in section 7. We invoice after curator approval, paid in DOT. Simple.


11. Sustainability Plan

What happens after the grant?

We maintain the toolkit for at least 18 months post-launch. That means:

  • Bug fixes within 48 hours for critical issues

  • Compatibility updates when Substrate releases major versions

  • Community support via GitHub and Element

  • Quarterly feature releases based on user feedback

Open source maintenance model:

The code is MIT/Apache licensed. Anyone can fork, modify, redistribute. We’ll accept PRs from external contributors—no CLA, just standard open source process.

Governance:

  • Feature requests discussed in GitHub Discussions

  • Major changes proposed as RFCs (request for comments)

  • Community votes on priorities (weighted by usage if we have telemetry data)

Community contributions we expect:

Once the core is solid, others will extend it:

  • Additional assertion helpers for specific pallets

  • Integration with niche CI systems

  • Language bindings (TypeScript test runner wrapping Rust core)

  • Custom runtime templates for specific use cases

We’ll mentor contributors. Good first issues tagged, PR reviews within 24-48 hours.

Long-term roadmap (unfunded, best effort):

Things we might build if the toolkit gets traction:

  • Performance profiling mode (track execution time per extrinsic)

  • Snapshot testing (record runtime state, replay tests against it)

  • Fuzz testing integration (generate random extrinsics, check for panics)

  • Visual test result dashboard (web UI showing test history)

These aren’t promises, just possibilities. We’ll prioritize based on what users actually need.

Potential future funding sources:

If major features are requested (e.g., “add support for specialized parachain X”), we might apply for follow-on grants. Or teams could sponsor development directly. We’re open to either.

For basic maintenance, no additional funding needed. We budget 2-4 hours/week for support and updates—that’s sustainable long-term.

Exit strategy:

If we can’t maintain it anymore (team moves on, priorities shift), we’ll:

  1. Announce handoff 3 months in advance

  2. Document everything thoroughly

  3. Find a new maintainer from the community

  4. Transfer ownership cleanly

We’ve seen too many abandoned open source projects. That won’t happen here.


12. Risks and Mitigation

Risk 1: Substrate API instability

Substrate is still evolving. Major versions sometimes break APIs. If a breaking change ships mid-project, our toolkit might need significant rework.

Mitigation:

  • Pin to specific Substrate versions, test against them

  • Monitor Substrate GitHub for upcoming changes

  • Build modular architecture (if one component breaks, others still work)

  • Use our contingency budget for emergency fixes

  • Worst case: delay one milestone by 1-2 weeks to adapt

Probability: Medium (30%). Substrate is maturing but not frozen.

Risk 2: Low developer adoption

We build the toolkit, ship it, and… nobody uses it. Maybe developers don’t care about testing, or existing tools are “good enough,” or our UX is poor.

Mitigation:

  • Validate early with 5-10 parachain teams (informal user research before building)

  • Iterate on UX based on beta feedback (milestone 4 includes beta workshop)

  • Over-invest in documentation (remove all onboarding friction)

  • Direct outreach to teams we know need this (Moonbeam, Astar, others building active parachains)

  • Set conservative success metrics (50 users, not 500)

Probability: Low-medium (25%). Pain point is real, but adoption is never guaranteed.

Risk 3: Scope creep from community requests

Once we launch, users will request features. “Can you add X?” “Why doesn’t it support Y?” Saying yes to everything derails the roadmap.

Mitigation:

  • Clear scope definition upfront (this is a testing toolkit, not a full IDE)

  • Public roadmap showing what’s planned vs. out of scope

  • Feature request triage process (community votes on priorities)

  • Polite “no” to requests that don’t fit core mission

  • Point people to extension points where they can build their own features

Probability: High (60%). This happens with every successful open source project. It’s manageable.

Risk 4: WASM debugging proves harder than expected

WASM instrumentation for breakpoints is complex. If we hit fundamental limitations (can’t inject breakpoints cleanly, performance overhead is too high), milestone 3 might fail.

Mitigation:

  • Prototype WASM instrumentation in week 1 (before committing to full build)

  • If it proves infeasible, pivot to a simpler debugging approach (logging-based)

  • VS Code integration is nice-to-have, not core value (CLI testing is the priority)

  • We can ship without perfect debugging and add it later

  • Contingency budget covers extended debugging work if needed

Probability: Low (20%). We’ve researched this, and the approach is sound. But WASM can surprise you.

Risk 5: Team capacity constraints

We’re a 3-person team. If someone gets sick or has an emergency, milestone timelines could slip.

Mitigation:

  • Cross-training (all team members understand all components)

  • Modular design (components can be developed independently)

  • Built-in time buffers in each milestone (3 weeks planned = 2.5 weeks actual work)

  • Transparent communication with curators (if delay looks likely, we’ll notify immediately)

Probability: Low (15%). We’ve worked together before and have backup plans.


Substrate is powerful but hard to learn. Testing is a massive part of that difficulty. Right now, developers either write shallow unit tests or set up heavyweight network simulations. There’s no middle ground.

We’re building that middle ground: fast, local, integrated testing that works the way developers expect. Run tests in seconds, debug with breakpoints, automate CI validation. Standard stuff in other ecosystems, missing in Substrate.

This isn’t speculative. The pain exists today. We’ve talked to parachain teams, read the forum threads, felt the frustration ourselves. The Polkadot Strategic Report says fix the “critical usability gap”—this is part of that fix.

Our team has shipped blockchain developer tools before. We know the domain, we know the tech stack, we know how to execute. 14 weeks, $30K, 4 milestones with clear deliverables. No handwaving, no vaporware.

Polkadot is betting on a “product year” in 2026. Products need developers. Developers need tools. We’re building one of those tools. Fund this, and you’ll see parachain teams shipping faster and onboarding easier by Q2 2026.

Let’s make Substrate testing not suck.


Contact for questions:

Repository: Will be created at github.com/ILE-Labs/substrate-testing-toolkit upon grant approval