Decentralised Futures Program: MatrixAI - A decentralized AI computing power protocol

Background & Motivation

Global computing power continues to grow rapidly and steadily. The rapid rise of fields such as artificial intelligence, scientific research, and the metaverse places ever-higher demands on computing power. It is estimated that by 2030, the global computing power scale will grow at an average annual rate of 65%.

However, several issues remain to be confronted.

  1. Expensive training costs: Ordinary computing devices can take a prohibitively long time to complete training tasks, hence the need for clusters of computing resources. A single training run of the ChatGPT model costs approximately 4 million USD, with daily hardware expenses of around 700,000 USD. Such steep prices put training out of reach for startups and cost-sensitive teams.

  2. Underutilized computing facilities: Publicly available data suggests that the global utilization rate of computing resources is only around 20% to 30%. High-end servers are scattered across various regions, and their idle capacity goes largely unused.

  3. Vertically integrated oligopolies monopolize the market: Highly centralized resources and data give monopolistic companies significant control over data and pricing power for computing. This creates barriers to entry for smaller players and limits competition in the market.

(Further reading: The Market and Opportunities for AI Computing Infrastructure in Web3)

What is MatrixAI?

We have taken the first step by establishing a decentralized AI computing power marketplace called MatrixAI, which aims to aggregate idle AI computing resources from around the world. The vision of MatrixAI is to attract computing power suppliers globally to participate in the network through fair and transparent incentive mechanisms, thereby establishing a vast pool of idle computing resources. We aim to build MatrixAI as an AI computing resource layer network in the Web3 era, providing support for both small-scale AI computing services and high-performance computing clusters to meet diverse demands.

MatrixAI is committed to breaking the current centralized monopoly, bringing innovation and progress to AI applications across various industries, and bringing greater openness and sustainability to AI computing power services, driving the entire industry to new heights. We believe that through the efforts of MatrixAI, computing power suppliers worldwide will be able to fully unleash their potential, while computing power consumers will have access to more flexible, efficient, and cost-effective AI computing solutions. We look forward to partnering with you to create a future full of potential in the decentralized computing power domain.

How do we build?

The MatrixAI protocol is built on the following Polkadot-ecosystem components:

  • Substrate as the blockchain framework, enabling rapid implementation of the MatrixAI network.

  • Polkadot-js/apps as the block explorer, extended with custom functionality.

  • Polkadot-js/api as the tool for interaction between MatrixAI clients and the blockchain.

  • Parachains/parathreads/coretime to enhance the security and stability of the MatrixAI network.
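As a sketch of how a MatrixAI client might drive the chain through the `api.tx.<pallet>.<method>` call style that Polkadot-js/api exposes, the TypeScript below models the submission path with a stub chain. The pallet and call names (`matrixMarket.placeOrder`) and the hardware/price fields are hypothetical illustrations, not a published MatrixAI API; a real client would obtain `tx` from `ApiPromise.create()` and sign with a keypair.

```typescript
// Minimal descriptor of a dispatchable call, mirroring the
// api.tx.<pallet>.<method>(...args) shape from polkadot-js/api.
interface Extrinsic {
  pallet: string;
  method: string;
  args: unknown[];
}

// Stand-in for the signed-and-submitted extrinsic queue; a real client
// would call .signAndSend(keypair) on a polkadot-js extrinsic instead.
class StubChain {
  submitted: Extrinsic[] = [];
  submit(tx: Extrinsic): void {
    this.submitted.push(tx);
  }
}

// Trainer-side client: places a sell order on the computing power
// marketplace with a reported hardware profile and an asking price.
class TrainerClient {
  constructor(private chain: StubChain) {}

  placeOrder(gpuModel: string, vramGb: number, pricePerHour: bigint): void {
    // Hypothetical equivalent with polkadot-js/api:
    //   api.tx.matrixMarket.placeOrder(hardware, price).signAndSend(keypair)
    this.chain.submit({
      pallet: "matrixMarket",
      method: "placeOrder",
      args: [{ gpuModel, vramGb }, pricePerHour],
    });
  }
}

const chain = new StubChain();
new TrainerClient(chain).placeOrder("A100", 80, 1_000_000n);
console.log(chain.submitted[0].pallet, chain.submitted[0].method);
// prints "matrixMarket placeOrder"
```

The stub isolates the call-composition shape from signing and networking, which is the part a Substrate runtime ultimately dispatches on.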

System architecture

The system architecture of MatrixAI is shown in the diagram above, and the entities involved are introduced as follows:

  • User: Users with model training requirements.

  • Trainer: Any user with idle computing resources can join the MatrixAI Network as a trainer without any barriers. Trainers act as consensus nodes in the network and earn block rewards by contributing valid computing power. Valid computing power can be accumulated through the following two methods:

    • Drilling: Completing measurable computing tasks (model inference) assigned by the network. Trainers can process such tasks in a streamlined manner upon joining the network. The computing tasks are published by projects collaborating with the MatrixAI Network and typically hold practical value. These tasks fall within the domain of machine learning and can be used to estimate a trainer’s actual computing power. Because they are state-independent, the tasks can be easily divided and verified, making them suitable for machines with a wide range of hardware conditions.

    • Training: Selling computing resources by placing orders on the computing power marketplace, entering into agreements with users, and completing the requested model training. Trainers who have joined the network can place orders on the marketplace at any time, and can consult current market conditions before setting a price. When selecting a training machine, users can browse the machine’s reported key hardware configuration as well as its historical valid computing power as a reference. Once an order is finalized, the trainer downloads the required data from the location specified by the user and completes the model training as requested.
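The state-independence claim behind drilling can be sketched as follows: because each task chunk is a pure function of its input, the network can hand chunks to machines of any capability and spot-check results by recomputing a random sample. The functions and the toy `infer` workload below are illustrative assumptions, not the actual MatrixAI verification protocol.

```typescript
type Chunk = { id: number; input: number[] };
type TaskResult = { id: number; output: number };

// Stand-in for a stateless inference step: any deterministic pure function.
const infer = (input: number[]): number =>
  input.reduce((acc, x) => acc + x * x, 0);

// Network side: split a batch of inputs into independent chunks.
function split(batch: number[][]): Chunk[] {
  return batch.map((input, id) => ({ id, input }));
}

// Trainer side: process whatever subset of chunks the machine can handle.
function process(chunks: Chunk[]): TaskResult[] {
  return chunks.map(({ id, input }) => ({ id, output: infer(input) }));
}

// Network side: verify a sample of results by recomputation; valid
// computing power would be credited only for chunks that check out.
function verify(
  chunks: Chunk[],
  results: TaskResult[],
  sampleIds: number[],
): boolean {
  const byId = new Map(chunks.map((c) => [c.id, c]));
  return sampleIds.every((id) => {
    const chunk = byId.get(id);
    const res = results.find((r) => r.id === id);
    return (
      chunk !== undefined &&
      res !== undefined &&
      infer(chunk.input) === res.output
    );
  });
}

const chunks = split([[1, 2], [3, 4], [5, 6]]);
const results = process(chunks);
console.log(verify(chunks, results, [0, 2])); // prints "true"
```

Since no chunk depends on another's output, partitioning is arbitrary and verification cost scales with the sample size rather than the whole workload, which is what makes such tasks usable as a computing power estimate across heterogeneous hardware.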