Polkadot Agent Mesh: Claude Code Skills for Polkadot & JAM

TL;DR

AI coding agents (Claude Code, Cursor, Codex, Copilot) write broken Polkadot code today — they recommend deprecated @polkadot/api, reference slot auctions, and generate XCM v3 patterns. Meanwhile, Solana already shipped an official dev skill that makes every AI agent on Earth build correctly on their chain.

We built the Polkadot equivalent: 12 validated, machine-readable documents covering the full Polkadot dev stack. Every API validated against live sources. 44 corrections applied. Zero hallucinated packages.

Repo: github.com/sardoru/polkadot-skills


The Problem

The way software gets built has changed. In 2026, the majority of new code is written with AI assistance. When a developer tells Claude Code or Cursor “build me a dApp on Polkadot,” the AI agent reaches for whatever knowledge it has — and that knowledge is wrong:

  • Recommends @polkadot/api instead of PAPI (polkadot-api) — the actively maintained, typed, tree-shakeable client SDK
  • References slot auctions and crowdloans — deprecated since Agile Coretime shipped
  • Generates XCM v3/v4 code — XCM v5 has been current since mid-2025 (PayFees over BuyExecution, InitiateTransfer over InitiateReserveWithdraw)
  • Uses substrate-node-template — instead of pop-cli for scaffolding
  • Hallucinates npm packages that don’t exist
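The XCM bullet is the easiest one to see at the data level. Here is a dependency-free TypeScript sketch contrasting the two fee patterns; the instruction objects are simplified stand-ins for illustration, not the real XCM types from the SDK:

```typescript
// Simplified stand-ins for XCM instructions (illustrative only — these are
// NOT the actual XCM type definitions from the polkadot-sdk).
type Instruction = { name: string; [key: string]: unknown };

// The v3-era fee pattern AI agents still generate:
const v3Program: Instruction[] = [
  { name: "WithdrawAsset" },
  { name: "BuyExecution" }, // deprecated fee instruction
  { name: "DepositAsset" },
];

// The v5 pattern: PayFees replaces BuyExecution.
const v5Program: Instruction[] = [
  { name: "WithdrawAsset" },
  { name: "PayFees" },
  { name: "DepositAsset" },
];

// A trivial lint an agent (or CI) could run over generated programs:
function usesDeprecatedFeeInstruction(program: Instruction[]): boolean {
  return program.some((i) => i.name === "BuyExecution");
}

console.log(usesDeprecatedFeeInstruction(v3Program)); // true
console.log(usesDeprecatedFeeInstruction(v5Program)); // false
```

The point of the skill docs is that this kind of check never needs to run, because the agent generates the v5 shape in the first place.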

Every broken build is a developer who gives up on Polkadot and moves to a chain where the AI gets it right on the first try.

This isn’t a documentation problem — Polkadot’s docs are good. It’s a format problem. AI agents need opinionated, structured, machine-parseable instructions. Not marketing pages. Not “here are 5 options, pick one.” They need: use this SDK, follow this pattern, here’s the security checklist.


What Is It

SKILL.md — A Complete Agent-Native Developer Guide

A single entry point (SKILL.md) that routes AI agents to 12 specialized sub-documents:

| Document | Covers |
| --- | --- |
| SKILL.md | Stack decisions, operating procedure, task classification |
| papi-client.md | PAPI SDK — createClient, getTypedApi, Smoldot light client, Observable watchValue, signers |
| substrate-pallets.md | FRAME pallet dev — #[frame::pallet], VersionedMigration, benchmarking with #[benchmarks] |
| ink-contracts.md | ink! smart contracts — v5 stable, v6/PolkaVM transition, pop-cli |
| xcm.md | XCM v5 — PayFees, InitiateTransfer, SetHints, Locations, asset filters |
| coretime.md | Agile Coretime — Broker pallet, partition, on-demand vs bulk, sale lifecycle |
| opengov.md | All 15 governance tracks, dynamic approval/support curves, conviction voting |
| testing.md | Chopsticks (fork-based), Zombienet (multi-node), try-runtime |
| security.md | Substrate + ink! + XCM security checklist, v5 Empowered Origins |
| frontend-framework-kit.md | React/Next.js + PAPI + Smoldot patterns |
| polkadotjs-compat.md | Legacy @polkadot/api boundary pattern + migration checklist |
| resources.md | Curated links, RPCs, repos, learning path |
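To give a flavor of what a doc like opengov.md encodes, here is a small sketch of conviction voting arithmetic: an unlocked balance counts at 0.1x, and lock-ups of 1x to 32x the enactment period scale votes from 1x to 6x. The function name and shape are ours for illustration, not from the docs:

```typescript
// OpenGov conviction voting: conviction 0 (no lock) counts at 0.1x,
// convictions 1–6 (locks of 1x–32x the enactment period) count at 1x–6x.
const CONVICTION_MULTIPLIER: Record<number, number> = {
  0: 0.1,
  1: 1,
  2: 2,
  3: 3,
  4: 4,
  5: 5,
  6: 6,
};

// Effective voting power for a given balance and conviction level.
function effectiveVotes(balance: number, conviction: number): number {
  const multiplier = CONVICTION_MULTIPLIER[conviction];
  if (multiplier === undefined) {
    throw new Error(`invalid conviction: ${conviction}`);
  }
  return balance * multiplier;
}

console.log(effectiveVotes(100, 6)); // 600
console.log(effectiveVotes(100, 3)); // 300
```

An agent that has only seen Gov1 material tends to miss the 0.1x no-lock case entirely, which is exactly the kind of detail the sub-document pins down.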

Opinionated Stack Decisions

These are non-negotiable. The whole point is that AI agents don’t get a menu — they get clear instructions:

| Layer | Choice | Not This |
| --- | --- | --- |
| Client SDK | PAPI (polkadot-api) | @polkadot/api |
| Smart contracts | ink! + pop-cli | Solidity / Hardhat |
| Runtime dev | FRAME (#[frame::pallet]) | Raw Substrate primitives |
| XCM | v5 only | v3/v4 |
| Testing | Chopsticks + Zombienet | --dev node only |
| Governance | OpenGov (dynamic curves) | Gov1 |
| Blockspace | Agile Coretime (Broker pallet) | Slot auctions |
| Frontend | React/Next.js + PAPI + Smoldot | polkadot.js + centralized RPC |
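For codebases that can't drop @polkadot/api overnight, the polkadotjs-compat doc prescribes a boundary pattern. A minimal dependency-free sketch of the idea (interface and class names are ours, and the legacy call is stubbed rather than actually importing @polkadot/api):

```typescript
// The app depends only on this narrow interface, so any remaining
// @polkadot/api usage stays behind one adapter and can be swapped
// for PAPI later without touching application code.
interface ChainClient {
  getFreeBalance(address: string): Promise<bigint>;
}

// Legacy adapter: the ONLY module allowed to touch @polkadot/api.
// Stubbed here so the sketch stays self-contained; real code would
// call into an ApiPromise instance behind this method.
class LegacyPolkadotJsAdapter implements ChainClient {
  async getFreeBalance(_address: string): Promise<bigint> {
    return 0n; // placeholder for the real query
  }
}

// Application code never imports the legacy SDK directly:
async function renderBalance(client: ChainClient, address: string) {
  return `balance: ${await client.getFreeBalance(address)}`;
}
```

Migration then becomes writing one PAPI-backed implementation of `ChainClient` and deleting the adapter, rather than hunting legacy calls across the codebase.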

Validation Methodology

Six parallel research agents cross-referenced every document against live sources:

  • papi.how — PAPI SDK documentation
  • use.ink — ink! contract docs
  • polkadot-sdk repo on GitHub — actual Substrate/FRAME source
  • wiki.polkadot.network — Polkadot Wiki
  • npm registry — package name and version verification

44 corrections applied across 10 files. Common hallucinations caught and fixed include: watchValue returning an Observable (not accepting a callback), Broker.partition (not Broker.split), @acala-network/chopsticks (not a made-up scope), paseo-local (not rococo-local), and all 6 treasury spend limits corrected in the OpenGov doc.
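The watchValue correction is representative: the hallucinated version takes a callback, while the real API returns an Observable you subscribe to. A toy stand-in (the Observable interface and emitting function below are simplified sketches, not PAPI's actual types):

```typescript
// Minimal Observable stand-in, shaped like the rxjs-style Observables
// PAPI returns (simplified for illustration).
interface Subscription {
  unsubscribe(): void;
}
interface Observable<T> {
  subscribe(next: (value: T) => void): Subscription;
}

// Toy watchValue: the shape agents get wrong. It takes no callback —
// it RETURNS an Observable that emits values over time.
function watchValueSketch(): Observable<bigint> {
  return {
    subscribe(next) {
      next(42n); // emit one value for the demo
      return { unsubscribe() {} };
    },
  };
}

// Correct usage pattern: subscribe to the returned Observable.
const seen: bigint[] = [];
const sub = watchValueSketch().subscribe((v) => seen.push(v));
sub.unsubscribe();
console.log(seen.length); // 1
```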

Agent-Friendly Discovery

The repo follows the llms.txt standard and provides multiple ingestion paths:

| File | Purpose |
| --- | --- |
| llms.txt | Lightweight doc index — fetch this first, pull individual docs as needed |
| llms-full.txt | All 12 docs concatenated into a single file (2,300+ lines) |
| .well-known/agent.json | Structured JSON manifest with topic-based routing |
| CLAUDE.md | Project-level instructions with stack rules and hallucination traps |

Any AI agent can ingest the full Polkadot dev stack in one fetch.
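As a sketch of the llms.txt ingestion path: the standard is a markdown index whose doc links an agent can extract and fetch one by one. The sample content and URLs below are invented for illustration; see the repo's actual llms.txt for the real index:

```typescript
// Invented sample in the llms.txt shape: an H1 title, a blockquote
// summary, and a section of markdown links to individual docs.
const sampleLlmsTxt = `# polkadot-skills

> Agent-native Polkadot developer guide.

## Docs

- [papi-client](https://example.org/papi-client.md): PAPI SDK patterns
- [xcm](https://example.org/xcm.md): XCM v5 programs
`;

// Extract name → URL pairs from the markdown link list.
function parseIndex(text: string): Map<string, string> {
  const links = new Map<string, string>();
  for (const m of text.matchAll(/^- \[([^\]]+)\]\(([^)]+)\)/gm)) {
    links.set(m[1], m[2]);
  }
  return links;
}

console.log(parseIndex(sampleLlmsTxt).get("xcm")); // https://example.org/xcm.md
```

An agent fetches the lightweight index first, then pulls only the docs relevant to the task, instead of loading all 2,300+ lines of llms-full.txt.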


Why This Matters Now

1. The Competitive Landscape Has Moved

Solana shipped their official dev skill through the Solana Foundation. It’s already registered on skill directories and integrated into AI coding tools. When a developer says “build me a token on Solana,” the AI gets it right. When they say “build me a token on Polkadot,” the AI fumbles.

This is a developer acquisition problem with a compounding effect. Every month without an agent-native guide is a month where AI tools are training developers to build elsewhere.

2. JAM Makes This More Urgent, Not Less

With JAM stabilizing on testnet and mainnet targeted for later this year, there will be an entirely new execution model for AI agents to learn. Services, work packages, PVM — none of these concepts exist in AI training data yet. If we don’t define the canonical patterns now, AI agents will hallucinate JAM APIs the same way they currently hallucinate XCM v3 patterns.

The SKILL.md framework is designed to be extended for JAM. When JAM-specific SDKs stabilize, adding a jam-services.md sub-document slots in naturally alongside the existing Substrate/FRAME docs.

3. Agent Spaces — Polkadot’s Structural Advantage

Beyond the developer guide, the repo includes a design proposal for Agent Spaces — dedicated coretime-powered blockspaces for agent-to-agent economies.

Ethereum is pursuing ERC-8004 (on-chain agent identity/reputation), but they’re constrained to a single execution environment. Every agent plays by the same global rules on one congested chain.

Polkadot already has what Ethereum is trying to retrofit:

| Capability | Ethereum | Polkadot |
| --- | --- | --- |
| Isolated execution domains | L2 fragmentation | Parachains / Coretime |
| Custom rules per domain | One EVM | Per-parachain runtime |
| Native cross-domain messaging | Bridges | XCM |
| Flexible compute allocation | No | Agile Coretime |
| On-chain governance | Off-chain | OpenGov |
| Proof-of-Personhood | No | Active research (JAM-aligned) |

The Agent Spaces proposal has three tiers:

  1. Agent Registry Chain (system parachain) — PRC-8004: agent identity, reputation, validation registries with Proof-of-Personhood integration
  2. Agent Spaces (coretime-powered) — Any DAO or community spins up a dedicated agent execution environment with custom rules (DeFi agents get circuit breakers, governance agents require identity verification, supply chain agents need KYB)
  3. Agent XCM Protocol — Extended XCM for agent-to-agent task delegation, payment routing, and reputation queries across spaces

Every Agent Space is coretime demand. Every coretime purchase funds the treasury. It’s a flywheel.
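To make the tiers concrete, here is a purely speculative sketch of how a PRC-8004-style registry record and a per-space admission rule might interact. None of the field names come from an existing spec — the proposal is still a design document:

```typescript
// Hypothetical registry record (tier 1): identity, reputation,
// Proof-of-Personhood attestation. Field names are invented.
interface AgentRecord {
  agentId: string;        // on-chain agent identity
  owner: string;          // controlling account
  reputation: number;     // aggregate score from validation registries
  humanVerified: boolean; // Proof-of-Personhood attestation
}

class AgentRegistry {
  private agents = new Map<string, AgentRecord>();

  register(record: AgentRecord): void {
    if (this.agents.has(record.agentId)) throw new Error("already registered");
    this.agents.set(record.agentId, record);
  }

  // An Agent Space (tier 2) applies its own admission rules — e.g. a DeFi
  // space might require high reputation, a governance space a verified human.
  admissible(agentId: string, minReputation: number, requireHuman: boolean) {
    const a = this.agents.get(agentId);
    if (!a) return false;
    return a.reputation >= minReputation && (!requireHuman || a.humanVerified);
  }
}
```

On-chain this would be a FRAME pallet with storage maps rather than a TypeScript class; the sketch only illustrates the registry-plus-local-rules split between tiers 1 and 2.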


What We’re Asking For

This introductory post is meant to share the work, gather feedback, and start a conversation. A formal treasury proposal will follow based on community input [if the community wants to carry this forward; I’m just trying to help].

Proposed phasing:

| Phase | Scope | Track | Estimated Cost |
| --- | --- | --- | --- |
| 1 | SKILL.md review, deployment to official Polkadot docs, 3-month maintenance | Small Spender | ~$10,000 |
| 2 | Agent Spaces spec (PRC-8004) + reference Agent Registry pallet | Medium Spender | ~$40,000 |
| 3 | Production deployment, 3 reference Agent Spaces, security audit, JAM integration | Big Spender (Bounty) | ~$100,000 |

Phase 1 is deliberately small — ship the dev skill, prove the value, then scale.


How You Can Help Right Now

  1. Review the docs. Clone the repo, read through the sub-documents, and file issues if you find incorrect API calls, patterns you’d never use in production, or missing security considerations: github.com/sardoru/polkadot-skills

  2. Test with your AI agent. Drop SKILL.md into your Claude Code, Cursor, or Copilot context. Ask it to build something on Polkadot. Report whether it generates correct code.

  3. Contribute parachain-specific guidance. Want AI agents to build correctly on your parachain? PRs for chain-specific sub-documents welcome (e.g., astar-contracts.md, moonbeam-evm.md).

  4. Give feedback below. What’s missing? What’s wrong? What would make this more useful for the ecosystem?



Not sure I understand the pricing. I (like everyone, I guess) build skills daily and wouldn’t have imagined it would be that expensive, especially when it doesn’t even include an evaluation system.

It is a timely proposal and aligned with the times we are living in. These are the kinds of details we need to pay attention to if we want new developers (or perhaps even non-developers) to start exploring Polkadot and building here. I’ve always said it: we need to make things easier, not more complicated. We need to move forward with these kinds of implementations that will allow more experimentation within the ecosystem.

A lot of information about Polkadot out there is outdated, and as long as that remains the case, it will be very difficult to change the perception that Web3 users have of Polkadot.

If every time someone searches for “how to build on Polkadot” they find information with deprecated APIs or references to slot auctions that no longer exist, the damage to the reputation is silent but constant.

This could easily be funded under the supervision of Parity. I hope you have great success.
