On the impossibility of on-chain plutocracy governance

Since its inception, Polkadot has proudly advertised its on-chain governance system, promoting it as a good solution to the old problems we faced, for example, in Ethereum governance. Gradually, we discovered problems. In October 2023, I started the Web4 Initiative to promote discussion of those problems. The community was enthusiastic but did not treat the problem as very serious. After all, things seemed to work… until recently, when we started to have heated debates on certain governance issues such as grifting, with regard to, for example, the marketing bounty.

Over the years, the Web4 Initiative has been on and off, but our study of on-chain plutocratic governance systems (which Polkadot itself is) from a game-theoretic perspective does not tell a good story. In brief, we believe that there are at least two fundamental assumptions that Polkadot has got wrong regarding on-chain governance:

  1. That coin voters automatically act in the best interest of Polkadot as a whole.
  2. That it’s in coin voters’ best interest to participate in on-chain governance, if they hold coins.

In a plutocratic system, it is in the majority holders’ interest to gradually squeeze out minority holders – collusion. In an on-chain governance system, such power is unchecked, and the consequence of collusion is rather minimal. For example, in the Polkadot treasury, the worst case if such collusion is found out is that the individual won’t get treasury funding in the future. This is in contrast to a real-world plutocratic system (e.g. company shareholding), where fines and jail sentences can additionally be threatened. In most countries, laws also require that management act in the best interest of the company; otherwise, voting rights can be stripped and the management can be discharged.
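To make the incentive explicit, here is a rough back-of-the-envelope way to state it. The notation below is mine and purely illustrative; it is not a formal model of OpenGov.

```latex
% Rough sketch only; the symbols (G, p, P, \delta, F_t) are mine, not from the post above.
% A majority coalition prefers to collude whenever its expected gain beats its expected penalty:
\[
  G \;>\; p \cdot P
\]
% where G is the gain extracted through the colluding vote, p is the probability the collusion
% is ever found out, and P is the penalty if it is.
%
% With on-chain consequences only, P is roughly the discounted future treasury funding forgone:
\[
  P_{\text{on-chain}} \;\approx\; \sum_{t \ge 1} \delta^{t} F_t
\]
% Under real-world shareholder law, P additionally includes fines, criminal liability and
% stripped voting rights, which is what can push p * P above G and deter the behaviour.
```

The point is only that when the right-hand side consists solely of forgone future funding, it tends to be small relative to what a determined coalition can extract, so the inequality tends to hold.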

In essence, an external check-and-balance enforcement system must be present to counteract the above untrue assumptions. Such a system must sit outside of Polkadot governance. This does not leave many possibilities:

  • For a weaker enforcement system, we can do what Cardano’s on-chain governance system did and introduce a “Cardano constitution”.
  • For a stronger enforcement system, Polkadot could voluntarily register itself as a “security”, and thus be subject to company shareholding and investment laws.

If nothing is done, then by simple game theory, we think activities such as grifting and collusion will only become more and more common on Polkadot.

Polkadot has provided us with a valuable data point in the landscape of blockchain governance systems. An on-chain governance system, such as Polkadot’s, is not necessarily better than an off-chain governance system, such as Ethereum’s. If in doubt, the latter probably gives (minority) investors more peace of mind, because they won’t need to worry about being gradually squeezed out.

For past discussions, see:

5 Likes

You say it’s in the interest of large holders to squeeze out minority holders and then suggest solutions without offering evidence. With Polkadot being your example – can you offer any numbers or on-chain stats that would show this is the case? How would you attribute a given holder’s behavior to this rather than to simple underperformance compared to peers?

What I have witnessed since the launch of OpenGov is nothing more complex than the waggle dance. People group based on shared culture, beliefs, ideas, personality and vision. Otherism emerges and the waggle dance victor is whichever sub-group gets the most people to dance their waggle dance. The losing sub-groups retreat, regroup, rethink, acquiesce, or split off to form their own colony.

Honestly, I would put some level of blame on you, people in the tg group we were both in, and many more for the current situation. The fundamental difference between polkadot and the projects you mention was the initial token distribution. Many individuals were able to achieve an oversized position. Instead of leading the on-chain charge, building sub-daos around interests / hobbies to incubate projects, ideas, community, culture – we didn’t really see any parity or w3f on-chain until after “the great firing”. Then, even after, many left entirely, and many others cried and whined. You guys should have been using your staking rewards to build the ecosystem. I can’t help but feel like many of you took short-term gain over long-term profit, which is definitely not the mentality I’d like to see in the ecosystem.

It doesn’t have to be this way if you fund sub-daos as incubators that can act in corporate form, offering different cultures, compatibility, and options to newcomers. You had the opportunity to lead a whole new “generation” of polkadot newcomers and influence culture in a big way. Instead you cried and rage quit.

In the immortal words of “Sec. Not Sure” – “You either lead, follow, or get out of the way.”

With the people in that tg group we could have done anything. All I can recall at this point is attempting to give you guys motivation and wipe tears in futility.

2 Likes

I don’t know why you’re suddenly playing the blame game, but anyway, I do want to post the disclaimer here: I haven’t held any stake in DOT for a long time, which in retrospect has been a good financial decision. I agree that “the great firing” was a warning sign, which I also took note of. I did work for Parity in the past, but not anymore. I disagree with many of the decisions by Parity/W3F leadership, for example, their EVM strategy, and I tried to warn them for years, but in vain. I do agree with you that many decisions like this have probably already taken Polkadot past the point of no return.

By the way, no one “rage quit”. It was Parity leadership who decided to fire me after I publicly disagreed with their EVM strategy (where it turned out I was right – just two months later they pivoted to revm).

My interest in this post is purely in the research of on-chain governance systems.

Since you are not being nice and after many of the treasury controversies, I do think it has been established that no one should be obliged to work on Polkadot for free. If you are really interested in on-chain stats, please ask someone to submit a treasury proposal first.

1 Like

Let me re-frame in a less hostile way. Everyone I have asked about you has said two things: 1) you’re really smart, and 2) you have a lot of opinions that you feel strongly about. What I don’t understand is why you left the ecosystem. What is stopping you from recruiting a few pba devs, starting a dao, implementing whatever thing you’re interested in at the time, and putting it forward to opengov – maybe as a test on kusama, or for more production use on polkadot? Get people behind your vision in your discord or tg or whatever and start doing your own waggle dance.

The part I don’t understand is, after all the parity stuff, there was still the opportunity (and it still exists) to start a revolution. What I feel like you rage quit was the ecosystem.

With plutocracy, if we assume that your argument is true, there are multiple potential, cascading, and compounding causes for the current state of things. Governance is only possible with a shared culture, set of values, sense of purpose, and vision. So here we are from all corners of the planet, from different places with different values, attempting to determine what to fund for the betterment of something that we don’t even really quite know what it is, because it’s changing constantly. We really have no shared ideals / vision, so things come down to a fight between parties over what that vision is. Whichever party makes their case to the most people, with the most stake, wins, period. That’s what I called the waggle dance above. This is just otherism / tribal mentality and spending time with people. Because we have no vcs, incubators, capital, etc outside of a few bounties, w3f grants and anything parity might still be operating – we’re all forced to go through opengov for funding. This means that whichever group has the most token holders doing their waggle dance wins. Everyone else feels shunned and sees it as coordination / grifting / yada.

It’s not actually that, it’s just humans doing human things. What we need to do is remove the humans from the equation. We can do that by siloing human activity into sub-daos.

If we instead provide funding directly to sub-daos, we can silo all this behavior and allow the dao management to do what they see as their dao’s best interests. It allows the sub-daos to compete on hard metrics and get funded based on those metrics. Each gets the freedom to have their own unique style, ability, culture, vision, output, etc. The part where opengov went really wrong, in my opinion, was that we never properly utilized collectives, bounties, societies and whatever other ideas people came up with. We went from one whimsical oligarch to the next and now everyone will be doing the W3F waggle dance.

Because of differences in values, personality, etc, it’s inevitable that different sub-daos will form, especially with the incentives from the DV and DN. We can take this a step further and incentivize behaviors that achieve the results we want, whatever those may be: bringing in users, developing apps that generate txns and revenue, etc. Ideally, over time, the sub-daos, after self-selecting for the most optimal, become the incubators, drivers, vision, and culture of the ecosystem.

At the end of the day token holders don’t care who gets funded for what. They care about results. If you create and refine incentives for that, you will get that. What do token holders want?

  • Real Transactions
  • Real Users
  • Real Apps
  • Real Usage
  • GROWTH
  • IMPACT

Maybe a very simple way to say it is, this is what humans always do. It sucks when you’re not “in the in-group”, and if we want multiple cultures to exist simultaneously we need to specifically design for that; otherwise, regardless of the system, there will always be a common shared culture that develops and there will always be marginalized groups.

RE the report / data – how much do you want and what’s your address? I’ll put it in for you. Also, if you started a DV dao there’s no way they wouldn’t give you a delegation and the revenue along with that. So – maybe get something bootstrapped and come back home.

2 Likes

As you seem to be genuinely interested, I would like to provide my perspective in two parts.

The reason is simple: Polkadot has already lost its innovation appeal for developers like us, and there’s always an opportunity cost to spending time in an ecosystem.

JAM is of course still quite interesting, and I’ll still follow it, but otherwise, all other current roadmaps of Polkadot from Parity/W3F leadership lack anything sufficiently innovative to get deeply involved in.

  • If I’m interested in Polkadot’s consensus algorithm, then I’d rather spend more time looking at its upstream, Cardano.
  • If I’m interested in Proof of Personhood, then I would rather spend more time looking at already established systems. I probably have to mention Worldcoin here, but in any case, you’ll also need some way to “prime” your initial “trusted” set on Polkadot before the tattoo system or proof-of-video-interaction, which may nonetheless have to involve biometrics or government IDs (this can also be done anonymously).
  • If I’m interested in EVM, then I would be better off spending time building on Ethereum, not getting something second-hand on Polkadot.
  • If I’m interested in compiled EVM, then I would probably follow Monad and a number of other chains competing in this field, not Polkadot.
  • If I’m interested in “products”, as seems to be Parity’s recent pivot, well, we don’t have a product tradition, and almost any of the top-20 coins will consistently do better than Polkadot.

As for the other thing that Parity built – Substrate – Cardano/Midnight and Bittensor have shown us clearly that we don’t need to be on Polkadot to use it.

There’s an opportunity cost to spending time and building in an ecosystem. If an ecosystem does not provide sufficient innovation to get developers interested (which is the case for Polkadot), then it must be sufficiently big. Otherwise, the only other way is to pour in a lot of money to pay developers to build for you. However, with how the treasury is managed and my analysis of the game-theoretic nature of Polkadot’s on-chain governance (which is the core thesis of this post), there’s currently no point in doing that.

Polkadot could have had its edge had it deployed some actor-based or pure-Rust smart contract development platform. However, I wouldn’t even consider touching its current non-pure Revive smart contracts. As an EVM implementor I know all the nasty things in EVM, and combining them into Revive in its current form is just a disaster waiting to happen.

What you said and what I said do not conflict. As long as all parties vote with the best interest of Polkadot in mind, no matter what their belief of Polkadot’s “best interest” is, the system will work. However, my thesis is that parties do not have the best interest of Polkadot in mind. Everyone is primarily interested in enriching themselves, with Polkadot’s best interest as a side effect. Sometimes parties vote in Polkadot’s best interest, when it suits them – for example, in uncontested on-chain upgrades or chain-wide inflation reduction. But often parties’ self-interest conflicts with Polkadot’s best interest – for example, in the collusion and corruption you see in treasury voting. A sufficiently effective punishment system, like the one you see in the real world, which involves additional fines and jail sentences, would counteract this, because with punishment factored in, acting against the company’s interest would no longer be in one’s self-interest. If, however, as in Polkadot’s case, the only punishment is that “a person will never get treasury funding in the future”, then my thesis holds.

4 Likes

It does conflict, on a very fundamental level.

You are assuming that

  • Voters are rational (or that voters are able to suspend bias and emotion)
  • Voters even understand or know what it is we’re trying to build
  • Voters can even comprehend a proposal or its implications (lacking context, experience, knowledge, etc)
  • That all voters are able to see, at least on some level, a common vision for polkadot. What is in the best interest of polkadot is entirely dependent on what you see the vision of polkadot as. ( “A Conflict of Visions” )

If your underlying assumptions about the situation are wrong, you’ll draw the wrong conclusions. Humans are much better at, and more prone to, rationalization than being rational. It is far more likely for someone to decide how they will vote and then rationalize the why of their vote than to go into the situation analytically. Then, with the time sink of all these proposals, people are prone to look to others for their votes. Then one person’s bias creeps through everyone who listens to that voice.

On top of this – over time people build up biases against individuals or groups based on associations, and then boom, you’re waggle dancing before you know it. There are several examples of this, like Rich, where many voting parties will shoot down ideas just based on his association, regardless of the context, idea, etc being presented.

Humans are able to carry out their daily lives because of assumptions. Assumptions make the world go round. You assume, based on repetition, that because something was a certain way before, it will be that way in the future. This builds up bias, and you can easily see it in the voting records over time in cases like Rich. He does have some good ideas from time to time; I will openly admit it and vote for them when I think they’re worthy of implementation.

The reality is, all of this doesn’t matter at all, because we don’t really care about it at all. We care about the outcomes. If you do not explicitly design any given system to allow for multiple cultures, and voting is involved, you will 100% of the time create one or more “in-groups”, one or more “out-groups”, animosity, and in-fighting. This is not necessarily a bad thing; humans evolved like this, and this is how we operate and make decisions collectively in the real world.

The reality is, we don’t care about that at all, we only care about outcomes.

What you are describing here is a stick. What I am suggesting is that you can use all of the sticks you want, but just because you stop a behavior that you don’t want doesn’t mean that you’ll get the behavior that you do want.

The alternative to the stick is the carrot. The carrot right now is opengov itself, which causes a PvP battle for pieces of a carrot. If you incentivize the behavior you want, you will get more of that behavior. If you silo human activity into sub-daos, have low thresholds to start a sub-dao, and performance-based incentives for a variety of wanted outcomes, you create a path of least resistance. Crowds are like water, and if you give them a path to follow they will follow it. By creating a path of least resistance you’re already disincentivizing the behavior you don’t want, simply by making the carrot easier to get in another way.

By creating sub-daos you silo the individual cultures, and each culture can go about and do whatever they want in whatever manner they want and be judged solely on the outcomes of their efforts. This turns opengov from PvP to PvE (player versus environment) – the goal becomes value creation (building things that achieve desired outcomes), not rent seeking (creating a reputation, getting a proposal through and doing forever maintenance proposals).

This is because the responsibility of operational efficiency now falls to the dao. Waste, fraud, and abuse now hurt the dao, so members of the dao will treat the resources they have, and will receive, in a different way than they would funds received from opengov. They are incentivized to remove grifters, under-performers and non-achievers from their own ranks.

What I am saying is, it does not matter what form of government you implement, or where you implement it – the following things are and will be true:

  • At least one major culture and an anti-culture will rise
  • There will be accusations, insults, and various behavior that could be defined as deviant
  • All parties will claim their way is the best way, right way, and that the ‘other’ is wrong and bad
  • All parties will claim they are the moral superior
  • etc

The ONLY way that I have personally found to deal with this, avoid it, end it, and never see it again is if funding goes out to sub-daos that organize around ideas, values, and culture, build things, and then receive value based on the metrics they hit.

It is only when all parties can be guaranteed a level playing field that we won’t have whiners. If you really want everyone to be analytical and rational, then let’s stick to the metrics that token holders determined we care about.

(I call this idea Decentralized Communities btw – I’ve been pushing it for a while, and all of this is why I’m pushing it)

1 Like

Again, I think what you said and what I said do not conflict. I actually agree that subdaos will probably make certain things slightly better. If different groups have different opinions on how they’d want to develop Polkadot, then it’ll reduce disagreement and make things more likely to pass.

The untrue assumption is the first one: that coin voters automatically act in the best interest of Polkadot as a whole.

And this also applies even to subdaos. We will see the same level of collusion and corruption that we have seen in the current treasury system.

Collusion and corruption have the following characteristics:

  • Corruption is directly providing financial interests for others to vote for you. Collusion is a slightly more subtle form (I scratch your back, you scratch mine). In both cases, voters are not voting in Polkadot’s best interest, or in your case, the Polkadot subdao’s best interest. The only party that benefits is the voter itself, through the financial gains from collusion and corruption.
  • They are easy to hide during voting, and are usually only discovered after the fact through external audits.

In the subdao scenario, if we apply the thesis, it means that voters in a subdao will have the incentive to vote Aye on every possible proposal. There will be no disagreements. Parties will have incentives to hide any misbehaviour by other parties. A subdao’s treasury will be depleted more quickly.
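As a deliberately crude illustration of the depletion point, here is a toy simulation. The model and every number in it are invented for this sketch; nothing is drawn from on-chain data.

```python
# Toy illustration only: parameters and model are invented for this sketch, not drawn from
# any on-chain data. It just shows how reciprocal "I vote Aye on yours, you vote Aye on mine"
# drains a fixed treasury faster than independent merit-based voting.

import random

random.seed(0)

TREASURY = 1_000_000          # hypothetical starting balance
MEMBERS = 9                   # hypothetical subdao size
ASK = 20_000                  # each member proposes this amount per round
MERIT_PASS_RATE = 0.3         # under independent voting, only ~30% of asks are judged worth funding


def rounds_until_depleted(collusive: bool) -> int:
    """Count funding rounds until the treasury hits zero under a given voting regime."""
    balance = TREASURY
    rounds = 0
    while balance > 0:
        rounds += 1
        for _ in range(MEMBERS):
            # Log-rolling: every proposal passes. Independent voting: only meritorious ones do.
            passed = True if collusive else (random.random() < MERIT_PASS_RATE)
            if passed:
                balance -= ASK
                if balance <= 0:
                    return rounds
    return rounds


print("independent voting   :", rounds_until_depleted(collusive=False), "rounds")
print("reciprocal Aye-voting:", rounds_until_depleted(collusive=True), "rounds")
```

The exact numbers are meaningless; the point is only that once every ask passes by reciprocity, the runway is bounded by members × ask per round instead of being gated by merit.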

This has probably already been happening in the bounty system and also in the Fellowship. Lack of external accountability will always be the issue. We will not see the marketing bounty voluntarily reduce its payout amount because the marketing metrics in a quarter don’t look good, and we’ll never see the Fellowship reduce its bonus because a project is delayed on the roadmap. Technically, those existing subdaos have external accountability to the upper Polkadot governance, but because those subdaos also hold controlling stakes there, the external accountability is effectively void.

This will become even more damaging with the second assumption we think is untrue: that it’s in coin voters’ best interest to participate in on-chain governance if they hold coins.

It is untrue because voters can vote by feet. They don’t need to vote. They can just leave. There may be good parties who occasionally point out a subdao’s corruption issues. However, it’s not in their interest to actively engage further to fix them; it’s in their interest to move on to other coins.

2 Likes

Would it be fair to reduce your argument to “there must be off-chain governance”?

My conclusion is only that the premises of on-chain plutocracy governance are wrong:

  • Coin voters don’t automatically vote based on the best interest of Polkadot. Majority holders have incentives to squeeze out minority holders, and what after-the-fact audits uncover cannot be properly punished.
  • It’s not in coin voters’ best interest to participate in on-chain governance, even if they hold significant coins. They can always vote by feet. Being vigilant and ready to immediately “give up” Polkadot is better than locking coins to participate in voting.

I unfortunately can’t provide any solutions to you. Maybe there must be off-chain governance. Maybe you can do some on-chain constitution system coupled with slashing to discourage corruption. Maybe some other solution is possible.

But one thing is clear – Polkadot’s current on-chain governance system does not work from a game-theoretic perspective. The incentives for coin voters are simply wrong.

I would of course be interested in what solutions you’ll come up with.

2 Likes

The base of your argument is that voters should be rational. The basis of PoS is that nominators will be rational. Do you think it’s rational for a nominator to nominate a 100% commission validator they don’t own? If it’s not rational to lose all your staking rewards to a 100% commission validator – why are so many nominators doing exactly that?
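To make the commission point concrete, here is a simplified sketch of how an era reward gets split. It ignores era points, oversubscription, payout caps and the validator’s own stake; the function and all the numbers are hypothetical, just enough to show where a 100% commission leaves the nominator.

```python
# Simplified sketch of how an era reward is split between a validator and its nominators.
# This ignores era points, payout caps, oversubscription, etc. -- it is only meant to show
# why nominating a 100%-commission validator you don't control earns you nothing.

def nominator_payout(era_reward: float, commission: float,
                     nominator_stake: float, total_stake: float) -> float:
    """Reward left for one nominator after the validator takes its commission."""
    after_commission = era_reward * (1.0 - commission)
    return after_commission * (nominator_stake / total_stake)


# Hypothetical numbers: 100 DOT era reward, nominator holds 1/10 of the backing stake.
for commission in (0.05, 0.20, 1.00):
    payout = nominator_payout(era_reward=100.0, commission=commission,
                              nominator_stake=1_000.0, total_stake=10_000.0)
    print(f"commission {commission:>4.0%}: nominator earns {payout:.2f} DOT")
```

With these made-up numbers the nominator earns 9.50, 8.00 and 0.00 DOT respectively; at 100% commission the nominator’s share is zero by construction.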

Can you show me anywhere in society or history where that (humans being rational) has ever been the case?

Question: Can you comb through science journals and look for research involving humans and rationality? I want to know if it is scientifically established that humans are more prone to rationalization or to being rational.

Short answer — the scientific consensus is: **humans are neither reliably purely rational nor reliably pure rationalizers;** they are *boundedly rational* and *context-dependent*, but there is strong, replicated evidence that people routinely engage in motivated reasoning and confirmation bias (i.e., rationalization) unless conditions push them toward reflective, analytic thinking. ([PMC][1])

Key findings (evidence-backed, high level)

1. Humans show systematic, robust biases that look like rationalization.
   Large bodies of empirical work label predictable errors (confirmation bias, motivated reasoning, belief polarization, selective evidence-search) that make people distort incoming information to fit prior beliefs or goals. These effects are reproducible across domains (politics, health, technology). ([ScienceDirect][2])

2. Dual-process architecture explains *how* this happens.
   Cognitive science describes fast, intuitive processes (System 1) that are prone to heuristics and biases, and slower, analytic processes (System 2) that can correct them — but System 2 is costly and not always engaged. As a result, intuitive rationalization is common; deliberative rationality is possible but effortful and context-sensitive. ([PMC][1])

3. Motivational and identity factors powerfully drive rationalization.
   When beliefs are tied to identity, values, or incentives, people selectively accept or reject evidence in ways that protect their goals. Neurocognitive and behavioral studies show motivated reasoning is measurable in brain activity and decision patterns. ([PMC][3])

4. People *can* be made more rational, but debiasing is uneven.
   Interventions (awareness training, accountability, incentives, structured analytic techniques) sometimes reduce biased processing, but effects are often partial, task-dependent, and short-lived. Blanket claims that “education fixes it” are not supported uniformly. ([Frontiers][4])

5. Practical framing: “more prone to rationalization” is accurate *on average and in many real-world settings*, but not deterministically so.
   In low-stakes, low-incentive or time-pressure contexts people default to heuristics and rationalizing narratives. In high-stakes, trained, or accountable contexts (expert decision-making, science, some markets) deliberative rationality increases. ([PMC][1])

What this means for your question (actionable conclusion)

* If you ask “are humans *more* prone to rationalization or rationality?” — **science says humans are more prone to rationalization by default**, because fast automatic processes and motivational pressures favor biased processing. But that propensity is malleable: incentives, training, accountability, and decision architecture can shift people toward more genuinely rational behavior. ([ScienceDirect][2])

Recommended short reading (start here)

* Overviews on motivated reasoning and confirmation bias (reviews/encyclopedia chapters). ([ScienceDirect][2])
* Dual-process reviews (why System 1 dominates unless System 2 is engaged). ([PMC][1])
* Empirical neuro/behavioral work on politically motivated reasoning and belief updating. ([PMC][3])
* Recent studies on debiasing/awareness interventions and their limits. ([Frontiers][4])

If you want, I’ll compile a concise annotated bibliography (1–2 paragraph summary per paper) of the most influential meta-analyses and experimental papers on motivated reasoning, confirmation bias, dual-process theory, and debiasing interventions — or extract quoted findings and effect sizes from the top 6–8 papers. Which deliverable do you prefer?

[1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC8979207/?utm_source=chatgpt.com "Dual Process Theory: Embodied and Predictive; Symbolic and ..."
[2]: https://www.sciencedirect.com/topics/psychology/motivated-reasoning?utm_source=chatgpt.com "Motivated Reasoning - an overview"
[3]: https://pmc.ncbi.nlm.nih.gov/articles/PMC11412250/?utm_source=chatgpt.com "Tracking politically motivated reasoning in the brain"
[4]: https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2024.1414864/full?utm_source=chatgpt.com "The impact of confirmation bias awareness on mitigating ..."

If you instead assume that humans by default are irrational and prone to rationalization – the outcomes from your analysis will look very different.

With information asymmetry alone it’s not possible for people to be 100% rational 100% of the time even if they want to. What I have been suggesting is that you’re making a common mistake that many researchers make. You model the world based on assumptions that aren’t applicable to reality.

I remember back when all of the ivy league mathematics PhDs thought they “solved” the stock market. That they could remove the ability to profit from the stock market’s short-term fluctuations. That the stock market should be nothing more than random perturbations on a given macro trend. They all believed they had solved the problem and proudly proclaimed that it wouldn’t be possible for anyone to make money in the stock market anymore. Me? I giggle every time I think about Black-Scholes option pricing. If you model things without taking the actual reality of humans (not the narrative-driven or bias-driven reality of humans) into account, you’re doomed to failure.

What you are doing is allowing your own personal bias to influence your assumptions. You are assuming that voters will be rational even though everything from academia and personal experience will tell you that’s not reality. What you are doing in this whole post is what I’m suggesting is normal human behavior and it’s what all voters are doing – implicit bias.

If my neuroticism levels were high enough my brain would be thinking along the lines of – “Why is he saying such an obviously untruthful thing?”, “He has an alternate motivation because clearly he isn’t grounded in reality”, “Who is he working with? What are they trying to do?”, “He’s not being rational” – yada yada – And this is how otherism starts and why it will never end.

What you see as corruption is just a narrative spun off your own bias.

Waggle Dancing.

To remove human irrationality you either need to get rid of the humans or build a common culture. Building a culture is very much a waggle dance, and it’s what you see playing out in opengov every day. The losing culture will hurl accusations, insults, and narratives based on their perspective and bias. If manipulation of the crowd doesn’t succeed in building traction, they will be forced to acquiesce, leave or regroup. If you want multiple cultures and multiple sets of shared values / vision, the system needs to be designed with that specifically in mind; otherwise one culture will always win.

This is why I say: silo human activity. You can silo the humans and allow for multiple cultures as long as you take a meritocratic approach. Voters can then, instead of deciding who to fund for what and why, decide what metrics they want to incentivize and by how much.

1 Like

I’m not an expert here but I’m very curious about this question.

Isn’t our “on-chain plutocracy governance” about the same as the governance of any for-profit company?

As I explained above, the crucial difference is that company shareholding is subject to government laws and enforcement. Collusion and corruption under company law have real consequences – jail sentences and heavy fines can be threatened. On the other hand, in Polkadot, the consequence is at most that one won’t continue to get funded.

5 Likes

Government laws are only utilized by the in-group (winning culture) to punish the out-group (losing culture), or in extreme cases. Everything is selectively enforced. Government is the pinnacle of the processes that are going on in opengov. In the real world, the government harasses, imprisons, and kills all or some of the people from the out-group while ignoring obvious rule violations from the “in-group”.

If that’s what you want all you have to do is wait.

You are not being rational and your arguments are not grounded in reality. You have almost reached the correct answer, but your own bias and incorrect assumptions prevent you from fully seeing it.

Yeah, I agree it’s also imperfect, but at least there’s the possibility to enforce rules. Coupled with democracy, you get a somewhat reasonable system that for the most part works.

This is not possible in Polkadot. The ultimate punishment is just that a person no longer gets funding.

| System | External Accountability | Possibility to Vote by Feet |
| --- | --- | --- |
| Polkadot | No | Yes, quite easily by moving to another coin |
| Company Shareholding | Yes | Yes, quite easily by selling shares |
| Government | No | Generally difficult |

You need either external accountability, or difficulty in voting by feet. Polkadot has neither.

External accountability would only be implemented for a company in extreme cases, or in cases where the company is owned or managed by people that identify, or have been identified, as belonging to the out-group.

If external accountability was real, Hindenburg Research wouldn’t exist, because the government would have taken care of all the problems. But there’s money to be made in discovering criminal activity before anyone else, even though the government has financial reports and statements handed to it on a quarterly basis.

They called out financial fraud at Supermicro in August 2024. Surely that’s been taken care of already, right? Right??? Well, we need them for the “AI revolution” so we don’t care. Are these the people you expect to apply accountability in polkadot?

We’re supposed to do better, not call on them for help. Instead of using a stick to disincentivize behavior, we need to use the carrot and properly incentivize based on hard metrics. Token holders should only be voting on what metrics they care about and how much they want to incentivize a particular metric. Sub-daos should do all the activity, siloed away from each other, giving each the freedom to create, explore, and fail. Currently, we do not have a culture that tolerates failure, but failure goes hand in hand with innovation. “It’s weird if it doesn’t explode, frankly” – Elon Musk

Relevant (I’m not taking sides here):

Worth noting that Naval is wrong, by definition, when he says token holders voting is democracy:

Is this AI? It was a bit difficult for me to parse the language – I felt like I was listening to a Bret Weinstein podcast.

Can you rephrase this? I don’t understand how it can be anything but a competition where one group, idea, or outcome wins, one loses, or a compromise is reached. Keeping with your example, the organism in this case is humans, who in their natural environment have conflicts and look to create systems that resolve those conflicts.

In my opinion, on plutocracy vs democracy, I’m not sure I quite see the difference. Mobs are dumb, whales are dumb and the resulting systems are structurally similar with the only difference being who you need to get to waggle your waggle dance. The actors behave very similarly. It’s primarily just humans doing human things the way humans evolved to do them. In our case though we have an opportunity to take a different approach because we don’t really need to solve complex human issues like who might be the best coder for a given task.

Why should any token holder care at all who gets funded for what? The reality is, they don’t; they only care that the network is successful. With that in mind, all we need to do is implement incentives for success. People will go after the incentives and bring the success with their own unique ideas and style. But we can only do this because of the business-esque aspect of polkadot. It still doesn’t help us resolve all issues.

From this point it kind of vibes with what you’re talking about with homeostasis and complexity reduction. But I’m not sure that thinking about these things in such terms is helpful to anyone. The fundamental problem is humans doing what humans do, and figuring out a system that allows humans to do what humans do without causing macro-scale dysfunction.

Take this political experiment as an example of “homeostasis applied to political economics”, implemented by Stafford Beer in Chile in 1971 (I am not advocating for this project, but there’s great benefit in understanding the context’s needs and possible outcomes):

Fascinating, thanks for sharing.

My initial thoughts are…

  • I don’t understand how it would be possible to implement without control of the state (threat of force)
  • Assumes rational actors giving inputs
  • Policy decisions are still a “war of ideas”
  • Policy decisions (or decisions on representation) are still made externally to the system

To me it seems like the ultimate problem would be the same as Marx’s original faulty assumptions on where dehumanization would happen and why. If anything, it feels like this is some weird hybrid that would accelerate dehumanization – not slow it down or prevent it. Treating people as cogs under threat of force while ultimately succumbing to control by corrupt / criminal elements, as these things usually do. It feels almost like some sort of 1970s country-scale management and control facility. It’s unfortunate that the experiment did not complete, as it would have been very beneficial to see the outcome and contributing factors. I’ll need more time to play with this in my blockchain mental model :sweat_smile:

1 Like