ArcheLabs Progress Report: Laying the Infrastructure for AI on Polkadot

Hello everyone. I am the core developer of ArcheLabs, and we are committed to building internet infrastructure for the AI era. Over the past half year or so, we have been focused on implementing Jambda, a JAM protocol client. Last month, we submitted the M1 conformance test. This has allowed us to temporarily shift part of our attention toward the application layer. After more than a month of work, we have made some progress, and I would like to share with the community our current thinking, the work we have done, and the direction we plan to continue pushing forward.

Philosophy

This is ArcheLabs’ first report to the community, so before formally introducing our work, I would like to briefly explain our philosophy. It determines the goals and methods behind everything we do.

Human-Centered

Mainstream AI, as represented by LLMs, is better at compressing consensus: it can organize massive amounts of information, generate averaged answers, and rapidly reuse existing knowledge. But it is not naturally good at preserving the long-term value of stubborn, heretical, minority positions. And yet, the progress of civilization often comes precisely from these commitments that seem wrong, untimely, or incomprehensible.

From a practical point of view, the current development of the LLM ecosystem has already shown that retaining an agent’s working experience and the user’s preferences in its memory can significantly improve efficiency — even if this does not necessarily reduce token consumption. This means that agents will continue to accumulate the user’s aesthetics, values, and unique methodologies. Over time, one person’s agent will become better suited than another’s for certain kinds of tasks. This makes complex interaction between agents highly necessary, rather than being something as simple as selling a skills package.

We view the interaction network among agents as an extension of human social division of labor. This is also a fundamental difference in design philosophy between us and many current approaches that place greater emphasis on “trustless agent transactions.” Although trustlessness is very important for interactions between agents, human-centered layers such as identity and coordination are equally indispensable, and we should not lightly abandon the reputation that human society has accumulated naturally over time.

Inventing the Mouse

For the past several decades, the internet has assumed that humans are the only actors. Interfaces, permissions, identity, payments, messaging, and even protocols themselves have almost all been built on top of this assumption. But the situation has changed: agents are becoming another important kind of actor. And yet they are still forced to use interaction methods designed for humans — browsing web pages, clicking buttons, borrowing account systems, and following a whole set of H2H product logic.

Of course, we can continue to make agents adapt to CLIs and GUIs designed for humans. But this is not the right direction. To solve the problem at its root, we need to rethink the human-machine interaction model formed during the industrial era, and natively design interaction mechanisms for A2A, A2H, and A2A2H — just as the mouse was invented at the beginning of the GUI era.

Protocols Eating the World

In the past, software development was expensive and slow. It was reasonable for everyone to use the same applications, the same interfaces, and the same interaction patterns. But in the AI era, products will become cheaper and cheaper, more and more numerous, and their lifecycles will become shorter and shorter. Everyone can use LLMs to customize their own tools, interfaces, and workflows. This creates a problem: once applications proliferate, what allows them to connect to one another, recognize one another, and coordinate with one another?

We believe that the AI era should not be understood primarily in terms of a few specific applications, but in terms of a series of protocols. Traditional internet software and services will be reconstructed, including domains, email, IM software, design software, note-taking software, social networks, and so on. On top of foundational protocols, these forms of software will grow naturally in entirely new ways. This is precisely the route we are taking: starting from the protocol layer of identity, payment, and interaction.

Progress

These three ideas lead naturally to a path of rebuilding infrastructure: through the design and implementation of protocols, we can reconstruct better internet infrastructure and natively support more advanced mechanisms within it.

Take social networks as an example. In the first half of Web3, many applications emerged, including social networks. Some of them had excellent ideas, yet they failed to fundamentally improve the user experience, and were even less able to cover the migration cost of changing users’ habits. As a result, they did not produce products with large-scale adoption. AI brings a new opportunity: we can naturally integrate those good ideas while actually improving the user experience.

Following this direction, over the past month we have completed the following work:

AHIP (Agent-Human Interaction Protocol)

At present, mainstream interaction between LLMs and humans remains largely at the level of text chat, which is fundamentally an interaction paradigm developed for H2H scenarios. In products like OpenClaw that are mainly text-based agents, this limitation is not yet fatal. But in a much broader range of application scenarios, it significantly limits the expressive power of LLMs.

AHIP attempts to abstract agent-native interaction into a protocol layer that can be safely hosted by different hosts, allowing the host to enhance the expressive power of LLMs while retaining control. For example, instead of only outputting a block of text, an agent can render dynamically interactive charts, DeFi swap interfaces, OpenGov voting, or even games inside the chat window.

This is not an entirely new technical direction. OpenAI Apps SDK and Claude Artifacts have already demonstrated the value of this kind of interaction, but they are still mainly runtimes inside their respective products. What AHIP is trying to do is to abstract these capabilities as much as possible into a cross-host general-purpose protocol: whether in React, Vue, or other hosts, as long as the protocol is implemented, agent-interactive output can be supported. I even think Telegram mini-apps could partially implement this protocol.

At present, we have already implemented an initial version of AHIP and published two preview packages on npm:

@ahip/core: the core specification of AHIP.

@ahip/react: the AHIP SDK implemented in React.

It should be noted that this is still in a very early preview stage, and it is almost certain that these packages will be reworked.
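Since the packages are still preview-stage, we have not pinned down a public wire format yet. Purely as an illustration of the idea (every type and function name below is hypothetical, not taken from @ahip/core), a host-agnostic interactive-output message and a host-side capability guard could look roughly like this in TypeScript:

```typescript
// Hypothetical sketch: the agent emits a declarative description of the
// UI it wants to render, and the host decides how (and whether) to
// render it. None of these names come from the actual @ahip packages.

type AhipNode =
  | { kind: "text"; value: string }
  | { kind: "button"; label: string; action: string }
  | { kind: "container"; children: AhipNode[] };

interface AhipMessage {
  version: "0.1";
  // Declarative UI tree produced by the agent.
  root: AhipNode;
  // Capabilities the widget requests; the host may deny any of them.
  capabilities: string[];
}

// Host-side guard: strip capability requests the host does not allow,
// so the agent gains expressive power while the host retains control.
function sanitize(msg: AhipMessage, allowed: Set<string>): AhipMessage {
  return {
    ...msg,
    capabilities: msg.capabilities.filter((c) => allowed.has(c)),
  };
}

const msg: AhipMessage = {
  version: "0.1",
  root: {
    kind: "container",
    children: [
      { kind: "text", value: "Vote on referendum #123?" },
      { kind: "button", label: "Aye", action: "vote:aye" },
    ],
  },
  capabilities: ["network", "sign-tx"],
};

const safe = sanitize(msg, new Set(["network"]));
console.log(safe.capabilities); // "sign-tx" has been stripped by the host
```

The key design point this sketch tries to capture is that the payload is data, not code: any host that implements the protocol, whether built on React, Vue, or something else, can interpret the same tree.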

PIP (Payment Intent Protocol) / ICP (Identity Core Protocol)

Although the LLM ecosystem is growing explosively, we believe it is still at a very early stage overall. We want to remain patient and prepare for the possible explosion ahead. Once agent adoption crosses a certain threshold, the demand for programmable payments is likely to grow explosively as well, and many scenarios that seem atypical today will emerge.

For example, consider applications in B2C market research and advertising. Since LLMs can support users’ consumption decisions better than search engines can, we can start from the assumption that future personal agents will understand their users better, and may even become their doubles. Under that assumption, brands may pay users’ agents directly to conduct large-scale market research. Compared with real humans, the attention of agents is almost infinite (we call this the “attention explosion”). Agents are also calmer and more rational, and are less likely to be swayed by their immediate environment into answers that deviate from the facts. For the same reasons, once products are launched, conversion-oriented advertising may also shift toward paying agents directly to promote products, trying to persuade them to explicitly offer suggestions to real people.

Exploring more concrete scenarios is not the focus of this article. The point is that even this brief thought experiment reveals several problems:

  • Untrusted subjects: in a purely on-chain environment, a contract is the subject of payment, and technically we may call it “trusted.” But once payments are extended to broader real-world scenarios, the interacting subject is not naturally trustworthy.

  • Uniqueness: in the above example, a brand needs to know whether the agent interacting with it is fraudulent, or just a duplicate agent of the same user. Moreover, agent interaction naturally creates risks of data collection, so we at least need unlinkability. This shows that in an AI world, PoP (Proof-of-Personhood) is not an optional extra, but an indispensable part of payment coordination.

  • Complex coordination: compared with traditional H2H payment interactions, payment coordination here is itself much more complex.

ICP (Identity Core Protocol) and PIP (Payment Intent Protocol) are precisely protocols designed for A2A and H2A scenarios. Put simply, ICP is concerned with who is a recognized subject, and who has the authority to act on its behalf. PIP is concerned with what action a payment is for, and how that payment enters, exits, and changes state during execution.
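To make the PIP side concrete, here is a purely illustrative sketch of a payment intent lifecycle (the state names and transitions are our invention for this post, not the internal spec): a payment is bound to an ICP subject, a delegate authorized to act for it, and a declared purpose, and it moves through explicit, checkable states.

```typescript
// Hypothetical sketch of a PIP-style payment intent state machine.
// All names here are illustrative, not taken from the internal spec.

type IntentState = "created" | "funded" | "executing" | "settled" | "refunded";

interface PaymentIntent {
  id: string;
  payer: string;     // an ICP subject: who is a recognized party
  delegate: string;  // an agent authorized to act on the payer's behalf
  purpose: string;   // what action this payment is for
  state: IntentState;
}

// Legal transitions: how a payment enters, exits, and changes state.
const transitions: Record<IntentState, IntentState[]> = {
  created: ["funded", "refunded"],
  funded: ["executing", "refunded"],
  executing: ["settled", "refunded"],
  settled: [],
  refunded: [],
};

function advance(intent: PaymentIntent, next: IntentState): PaymentIntent {
  if (!transitions[intent.state].includes(next)) {
    throw new Error(`illegal transition ${intent.state} -> ${next}`);
  }
  return { ...intent, state: next };
}

let intent: PaymentIntent = {
  id: "intent-1",
  payer: "icp:alice",
  delegate: "icp:alice/agent-0",
  purpose: "market-research-survey",
  state: "created",
};
intent = advance(intent, "funded");
intent = advance(intent, "executing");
intent = advance(intent, "settled");
console.log(intent.state); // "settled"
```

The point of making the state machine explicit is that every party, including an untrusted counterparty's agent, can check which transitions are legal before committing funds.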

We have completed internal versions of ICP and PIP, and implemented them on a parachain so that they can be advanced within the overall ecosystem. They are also expected to be reworked soon, with the coordination layer as a key direction. In the future, they will run directly on JAM, so that they can support interaction at a larger scale.

Boundary Exploration

This is a small toy for testing LLM inference inside the PVM, and it is a small-scale validation of the boundary of PVM’s capabilities. It executes and verifies a specific minimal operator. More specifically, it extracts a real quantized block from a real GGUF model file, performs a minimal quantized dot-product computation inside the PVM, and then performs an offline reference check on the host side. We ran it successfully and passed the consistency check. You can find more details at pvm-llama-dot-smoke; if you run the test yourself, you should get results similar to the following:

fixed_block_off = 0x5ac160
quant_kind      = 8
vec_len         = 32
block_len       = 34
stage           = 6
guest_result    = -0.046125144
reference       = -0.046125144
approx_equal    = true
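The parameters above correspond to GGUF’s Q8_0 format: per 32-element block, a float16 scale followed by 32 signed int8 quants, 34 bytes in total (matching vec_len = 32 and block_len = 34). Assuming the scales have already been decoded to numbers (float16 parsing is omitted here, and the actual test may pair the quantized block with a different operand shape), the host-side reference check reduces to an integer dot product with one scale multiply at the end, sketched here in TypeScript:

```typescript
// Illustrative host-side reference check for a Q8_0 x Q8_0 block dot
// product. The data below is synthetic, not taken from a real model file.

const VEC_LEN = 32;

// Guest-shaped computation: integer accumulation first, one scale
// multiply at the end.
function dotQ8_0(
  dA: number, qsA: Int8Array,
  dB: number, qsB: Int8Array,
): number {
  let acc = 0;
  for (let i = 0; i < VEC_LEN; i++) acc += qsA[i] * qsB[i];
  return dA * dB * acc;
}

// Reference path: dequantize each element to float first, then sum.
function dotDequantized(
  dA: number, qsA: Int8Array,
  dB: number, qsB: Int8Array,
): number {
  let acc = 0;
  for (let i = 0; i < VEC_LEN; i++) acc += (dA * qsA[i]) * (dB * qsB[i]);
  return acc;
}

// Tolerant comparison: the two paths may differ by float rounding.
function approxEqual(a: number, b: number, relTol = 1e-5): boolean {
  return Math.abs(a - b) <= relTol * Math.max(1, Math.abs(a), Math.abs(b));
}

const qa = new Int8Array(VEC_LEN).map((_, i) => (i % 7) - 3);
const qb = new Int8Array(VEC_LEN).map((_, i) => ((i * 3) % 5) - 2);
const guest = dotQ8_0(0.01, qa, 0.02, qb);
const reference = dotDequantized(0.01, qa, 0.02, qb);
console.log(approxEqual(guest, reference)); // true
```

The tolerant comparison is exactly why this verification style can absorb floating-point differences between the guest and host environments.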

This preliminarily shows that there is a feasible path for verifying, at the operator level, the local computations that make up LLM inference on the PVM. In theory, we could use sampling to ensure probabilistic correctness, or adopt a mechanism similar to optimistic rollups and reduce the problem to verifying only the disputed operators. The advantages of this approach are:

  • it can tolerate floating-point differences;

  • the verification cost is almost independent of model size;

  • and it does not have hardware constraints similar to TEE.

On the other hand, this test also touches, in a very limited sense, the boundary of the question of “running LLM inference inside the PVM.” Although LLM inference contains many complex steps, its core is still composed of a large number of basic operators. But it must be emphasized that technically we have not proven that “running complete LLM inference inside the PVM is feasible,” because at the very least, a real token would need to be produced before that claim could seriously be discussed.

Overall, we do not believe that running or checking LLM inference inside the PVM is a good idea, at least not an urgent one at the current stage. The real value of this test is more like marking a capability boundary: under very extreme circumstances, at least we know what kind of path we might still consider.

Vibly

Put simply, Vibly is a personality-based social network. It is not about inserting an AI bot into a traditional IM product, but about treating agents as first-class citizens. It lets you access friends’ agents and other people’s agents, contact anyone without interruption, and have agents natively serve your social network as your doubles.

The canonical identities of users and agents are placed on-chain, while social relationships and user data are kept off-chain as much as possible. We view social relationships as a user’s resource allocation strategy. Users can make their agents public and decide who can access them, under what conditions, and in what way, by specifying access strategies based on social relationships and reputation.

Vibly is not a single concrete protocol, but a capability framework built on top of core protocols. Anyone can use the capabilities of the underlying framework to build different IM applications. This also means that Vibly should remain as open as possible at the application-protocol level: anyone can build sub-protocols to support different application forms such as disappearing messages, E2E encryption, custom information flows, and so on. At the same time, users can gradually customize the software they use through natural language, or join protocols already implemented by others.
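As a hypothetical sketch of what a relationship- and reputation-based access strategy could look like (the field names and tiers below are our illustration for this post, not Vibly’s actual format):

```typescript
// Illustrative access strategy: who may reach a user's agent, gated by
// social relationship and reputation. All names here are hypothetical.

type Relationship = "self" | "friend" | "friend-of-friend" | "stranger";

interface AccessRequest {
  relationship: Relationship;
  reputation: number; // e.g. 0..100, from whatever source the user trusts
}

interface AccessStrategy {
  // Minimum reputation required per relationship tier.
  // A tier that is absent from the map is denied outright.
  minReputation: Partial<Record<Relationship, number>>;
}

function canAccess(strategy: AccessStrategy, req: AccessRequest): boolean {
  const min = strategy.minReputation[req.relationship];
  return min !== undefined && req.reputation >= min;
}

// "Friends always; strangers only with high reputation."
const strategy: AccessStrategy = {
  minReputation: { self: 0, friend: 0, stranger: 80 },
};

console.log(canAccess(strategy, { relationship: "friend", reputation: 10 }));   // true
console.log(canAccess(strategy, { relationship: "stranger", reputation: 50 })); // false
```

This is the sense in which we treat social relationships as a resource allocation strategy: the policy itself is just data the user edits, while the underlying framework enforces it uniformly for every sub-protocol built on top.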

We are steadily advancing Vibly and have already implemented AHIP support in a single-user version. The current version is still very early, and even retains a lot of debug-oriented information. You can preview it in Chrome at https://pre.vibly.network/: it creates a local agent in the browser so that you can try AHIP-style interactions, and it currently supports several mainstream LLM APIs. If you do not want to test it with an API, you can also simply play Gomoku with it to experience AHIP’s interactive capabilities firsthand.

For example, I created an agent using the DeepSeek API and said to it: “Let’s play chess.” It then created an interactive applet inside the chat window. In the future, this applet can be shared and modified. You could say to it, “What new proposals are in the community today?” and it would present you with an interactive interface, including proposals you have not yet browsed, as well as its own views. You could vote or comment directly without opening other applications or websites again.

Live Preview

VibDAO

We are designing VibDAO, and we have already started implementing it using EVM. Our hope is to gradually build a more transparent and more robust coordination mechanism in the early stage of the project, one that can support subsequent resource allocation, governance, and community participation.

However, due to security considerations and the practical needs of the current stage, we do not intend to rush out a DAO that is not yet mature. At the current stage, if you recognize the direction of ArcheLabs and Vibly and are willing to support further development, we are temporarily accepting direct donations. It should be noted that we cannot make any promises regarding financial returns.

Support ArcheLabs

Next-Stage Plan

It is difficult for us to make stable yearly, half-yearly, or monthly plans the way we could in the past. The environment changes too fast, and technology changes too fast. So the plan listed here is not a strict task list, but more like a wish list that will continue to be revised in the next stage. We will keep adjusting according to market conditions, community feedback, and our own thinking. In addition, I may switch my energy back to Jambda development at any time, because it still has an enormous amount of work ahead.

Core Protocols

At the current stage, much of Vibly’s work has been moving forward rapidly with deep LLM involvement, which has helped us complete MVP implementation and validation more quickly. But this also creates a problem: internal knowledge can easily become implicit, making the project difficult to maintain. This is especially unacceptable for protocols. In the future, we will rework all core protocols and publish more formal protocol drafts. In particular, for ICP and PIP, we need to consider their ecosystem positioning and possible integration with x402, ERC-8004/8126/8183, and related protocols.

Vibly Chain

We expect to rework the current chain so that it can more clearly carry the minimal public truth required by stabilized versions of ICP, PIP, and subsequent protocols.

Vibly

We will continue pushing it toward a standard suitable for limited release:

  • Support mainstream agent integration;

  • implement complete user relationship and IM functionality;

  • implement global agent access capabilities;

  • improve AHIP generation capabilities;

  • connect to the Vibly Chain testnet to close the loop among verified identity, payment, and access strategies;

  • release at least a desktop client.

VibDAO

Continue advancing the implementation and rule design of VibDAO, and launch a formal version when conditions are appropriate.

Documentation

After the relevant protocols become more stable, we will complete the documentation hierarchy for different audiences (including agents), including protocol drafts, developer documentation, deployment instructions, and so on.

Closing

The above is our current stage of thinking and progress. Many parts are still very early, and some are still only boundary explorations. We hope to expose these ideas and this work to the community as early as possible. If you are interested in any part of it — especially whether the protocol abstractions are too heavy, whether Vibly’s product path makes sense, or whether VibDAO’s organizational form is reasonable — we would be very happy to talk. You can also follow us on X for recent updates:

https://x.com/archelabs_org