Treating Polkadot/JAM as a cloud platform - node performance tiers

Looking at other cloud providers such as AWS, Azure, and GCP: they all offer nodes at different costs depending on tiers or parameters such as RAM, CPU, storage, and network performance.

If a business would like to use Polkadot as part of its stack to decentralize part or all of its product, it will have performance requirements.
I think the current implementation, where all validator nodes are treated the same, is a bit simplistic… Obviously businesses will have strict requirements.

I also think that talking about JAM as one big computer is a problematic framing that obscures the fact that underneath are many different nodes with many different performance levels.
A future implementation should allow service owners to choose node tiers and requirements and pay accordingly.

Obviously, higher node requirements will reduce decentralization, since fewer nodes will be able to meet them, so the decentralization level implied by a given set of requirements should also be displayed to the service owner, who can then decide on their own risk.
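As a rough illustration of what such a display could compute, here is a hypothetical sketch. The node specs, thresholds, and eligibility rule are all made up for the example; nothing here reflects real JAM or Polkadot data.

```python
# Hypothetical sketch: estimating the decentralization impact of a tier
# requirement. All numbers below are illustrative, not real network data.

from dataclasses import dataclass

@dataclass
class NodeSpec:
    ram_gb: int
    cpu_cores: int
    net_mbps: int

def eligible_fraction(nodes, min_ram_gb, min_cores, min_net_mbps):
    """Fraction of the validator set that meets a tier's minimum specs."""
    eligible = [
        n for n in nodes
        if n.ram_gb >= min_ram_gb
        and n.cpu_cores >= min_cores
        and n.net_mbps >= min_net_mbps
    ]
    return len(eligible) / len(nodes)

# Toy validator set: mostly commodity nodes, a few high-end ones.
validators = (
    [NodeSpec(32, 8, 500)] * 80        # baseline-tier nodes
    + [NodeSpec(256, 64, 10_000)] * 20  # high-performance nodes
)

# A strict "high tier" requirement is met by only 20% of the set --
# exactly the kind of number a service owner should see before choosing.
print(eligible_fraction(validators, 128, 32, 5_000))  # 0.2
```

A real version would weight by stake rather than count nodes, but even this naive fraction makes the decentralization trade-off visible at tier-selection time.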

Further on that:

IIUC, JAM is an attempt to generalize Polkadot and make it less opinionated. However, one of the things that is still opinionated is the validator nodes and their technical requirements, which currently only have a minimum threshold.

I often see the low node requirements brought up as an advantage over other blockchains that try to “scale up”, whose high node requirements cause centralization. However, this means that Polkadot is opinionated about node requirements.

Think about use cases such as decentralized AI: obviously, running an LLM that is somewhat competitive with centralized models would need very high performance. You could say that such a use case is a poor fit, because a centralized datacenter would far outperform a decentralized model. But you could imagine a setup where a centralized datacenter runs the LLM computations while only proofs, such as hashes, are submitted to a decentralized blockchain. This would still require high performance and low latency.
By being opinionated about unified node requirements, you are actually limiting the set of viable use cases.
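The commit-and-verify pattern described above can be sketched minimally. Note this is only a hash commitment: a real deployment would need some form of verifiable compute, since a bare hash proves nothing about whether the output was computed correctly — it only lets anyone holding the full data check that it matches what was posted.

```python
# Minimal sketch of the "off-chain compute, on-chain commitment" pattern:
# the datacenter runs the LLM, and only a fixed-size hash goes on-chain.
# Illustrative only; a bare hash is a commitment, not a proof of compute.

import hashlib

def commit(prompt: str, output: str) -> str:
    """Hash the (prompt, output) pair into a fixed-size commitment
    suitable for posting on-chain instead of the full LLM output."""
    payload = f"{prompt}\x00{output}".encode()
    return hashlib.sha256(payload).hexdigest()

def verify(prompt: str, output: str, commitment: str) -> bool:
    """Anyone holding the full data can recompute and check the hash."""
    return commit(prompt, output) == commitment

c = commit("summarize this document", "a short summary...")
print(len(c), verify("summarize this document", "a short summary...", c))
# 64 True
```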

How can nodes prove their performance tier in a trustless way?
IMO this can happen in two stages:

  1. Prove that they can: a “committee” could test the node at a regular interval (hourly/daily/monthly/yearly) by giving it a proof-of-work-like test.
  2. Prove that they do: even if a node has the capability for a certain performance level, that doesn’t mean it will act on it. This can be countered by the randomly chosen validators in a core: if a core of a “high-tier” validator set underperforms, the entire core should be penalized, and the core’s nodes should provide statistics about each other. A node that regularly underperforms should be slashed or removed from the tier entirely.
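The two stages could be sketched as follows. Every threshold, the PoW-style challenge, and the strike rule are hypothetical placeholders of my own, not anything JAM specifies:

```python
# Hedged sketch of the two stages above. All parameters (difficulty,
# deadline, score threshold, strike limit) are made-up placeholders.

import time
import hashlib

def benchmark_challenge(difficulty_bits: int = 16, deadline_s: float = 5.0):
    """Stage 1 ("prove that they can"): a proof-of-work-like test.
    The node must find a nonce whose SHA-256 hash has `difficulty_bits`
    leading zero bits before the deadline; the committee verifies it."""
    target = 1 << (256 - difficulty_bits)
    start = time.monotonic()
    nonce = 0
    while time.monotonic() - start < deadline_s:
        digest = hashlib.sha256(str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # passed within the time budget
        nonce += 1
    return None  # failed: too slow for this tier

def update_tier(strikes: int, core_reports: list,
                min_score: float = 0.9, max_strikes: int = 3):
    """Stage 2 ("prove that they do"): peers in the same core report a
    performance score for the node each period; repeated underperformance
    accumulates strikes until the node is dropped from the tier."""
    avg = sum(core_reports) / len(core_reports)
    if avg < min_score:
        strikes += 1
    return ("removed" if strikes >= max_strikes else "in_tier"), strikes

# A capable node should find a nonce well within the budget:
nonce = benchmark_challenge(difficulty_bits=12, deadline_s=2.0)
print(nonce is not None)

# Two prior strikes plus one more round of bad peer reports -> removed:
state, strikes = update_tier(strikes=2, core_reports=[0.5, 0.6, 0.7])
print(state, strikes)  # removed 3
```

Tying the penalty to peer reports within the randomly assigned core is what makes the scheme trustless: no single node’s claim about another is taken at face value, only the aggregate of a random committee’s observations.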