ChatGPT, Brigading and Anon accounts on ecosystem forums

An interesting recent development in the ecosystem is perhaps unsurprising: the emerging use of anon accounts, along with ChatGPT, to create thoughtful-sounding responses to proposals.

It has not been hard to spot - here, here, here and here - but it is an emerging trend that will keep being refined until it is impossible to tell which comments we should consider ‘valid’.

We have also seen brigading (mass comments from associated organisations/communities) in attempts to influence voting outcomes, both for and against proposals.

This again is a pattern that is easy to spot, simply from the number and style of comments on a proposal. In general it’s great to have people on your side (not so much if they’re not on your side), but if we want to optimise for intelligent adoption, rather than just dumb governance engagement, we need to design differently.
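
To make this concrete, here is a minimal sketch of the kind of heuristic a forum could run. All field names and thresholds below are hypothetical assumptions for illustration, not a description of any existing moderation tool - it simply looks for a burst of comments from newly created accounts:

```typescript
// Hypothetical shape of a forum comment; field names are illustrative only.
interface Comment {
  author: string;
  accountAgeDays: number; // age of the posting account when it commented
  postedAt: number;       // unix timestamp (seconds)
}

// Naive brigading heuristic: many comments from young accounts in a short window.
function looksBrigaded(
  comments: Comment[],
  windowSecs = 6 * 3600,  // size of the burst window to scan
  maxAccountAgeDays = 30, // what counts as a "new" account
  burstThreshold = 10     // how many such comments look suspicious
): boolean {
  // Timestamps of comments from new accounts, in chronological order.
  const young = comments
    .filter(c => c.accountAgeDays <= maxAccountAgeDays)
    .map(c => c.postedAt)
    .sort((a, b) => a - b);

  // Slide a window across the timestamps looking for a dense burst.
  let start = 0;
  for (let end = 0; end < young.length; end++) {
    while (young[end] - young[start] > windowSecs) start++;
    if (end - start + 1 >= burstThreshold) return true;
  }
  return false;
}
```

A heuristic like this only catches the crudest cases, of course - it is the underlying incentive design that matters.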

The good news is, all of this plays to the weaknesses of existing social media, and to the (advertised) strengths of the technologies we are all building on.

Going forward we need some assurances of on-chain identity, reputation, voting history and indeed conflicts of interest when it comes to those supporting proposals - since we should weight the opinions of those with a vested interest in seeing a proposal either succeed or fail.

Currently, conflicts of interest abound and there is no way to gauge the credibility of statements, as all comments and accounts are effectively in the same class. As a result it is increasingly difficult to sift through the noise, and to make governance a collaborative rather than an adversarial affair.
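
As a sketch of what that weighting might look like - the profile fields and multipliers below are purely illustrative assumptions, not a worked-out design - comments could be scored on verified identity and voting history, with declared conflicts of interest discounted rather than silenced:

```typescript
// Illustrative on-chain profile; in practice this would be assembled from
// identity registrars, referendum voting history and self-declared interests.
interface OnChainProfile {
  identityVerified: boolean;
  pastVotes: number;         // referenda this account has voted in
  declaredConflict: boolean; // vested interest in this proposal's outcome
}

// One possible weighting: reward verified identity and a voting track record,
// and discount (but do not zero out) declared conflicts of interest.
function commentWeight(p: OnChainProfile): number {
  let w = p.identityVerified ? 1.0 : 0.2;   // anon comments still count, just less
  w *= 1 + Math.min(p.pastVotes, 50) / 50;  // history bonus, capped at 2x
  if (p.declaredConflict) w *= 0.5;         // visible, but weighted accordingly
  return w;
}
```

The exact numbers are beside the point; what matters is that credibility becomes legible, rather than every account sitting in the same class.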

If we cannot fix our own problems, then what hope do we have of engineering far more complex solutions?

We’re working on ideas in this space with a range of partners from across the ecosystem, but wanted to highlight once again that there are very basic problems here - problems that might not seem sexy or novel, and that don’t make for marketing announcements, but that are very real and will only get worse with the likes of ChatGPT.

Let’s dogfood our own problems.

While I would personally never use a chat app to write my posts, I don’t think it’s a problem at all. After all, that’s what they’re for: to make writing easy. And if a community member posts something that appears to have been written by ChatGPT, I think it’s safe to say that it represents that community member’s views - after all, the community member prompted it and then posted it.

In fact, I think it’s actually a positive, since it makes it easier for us to hear from people who may not be fluent in English, or who may have other reasons not to feel comfortable or confident expressing themselves in writing.

And please don’t use ChatGPT to make posts in this forum - I consider them very inappropriate. The only exception is if you can make the output genuinely indistinguishable from a human-written post.

ChatGPT is a tool. How it is used, and to what ends, is what matters… using it alongside multiple anon accounts to present many ‘alternative’ perspectives is not an acceptable use.

The meta-topic truly is about which criterion should be taken as the measure of power in governance. And it’s fascinating to see how perspectives differ from person to person, probably due to our own biases.

I do apologise for oversimplifying what are necessarily complex views here for the sake of argument, but Rich seems to think power allocation should be based on some combination of rhetoric and reputation, so that what matters is having a system that ensures personhood, and the best talker gets more power. This is not so different from our existing democratic systems.
MrCole also ties power to personhood, but for him the personal qualities needed to win an argument do not matter; what counts is the best use of technology. People with less rhetorical ability should not be at a disadvantage, and can use technology to compensate. Fair enough! It’s a premium for the smarty-cool techie.
“Alice”, of course, is about whatever method works, provided it serves the end outcome she pursues. It’s probably loosely related to coin voting, asymptotically, since the number of anons she can create is limited by the number of tokens she controls. But it is not quite equivalent either.
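
To illustrate that asymptotic link with entirely made-up numbers: if each credible anon needs some minimum on-chain stake behind it (a deposit, an identity bond, or similar - the parameter names here are hypothetical), then the number of anons is bounded by the balance:

```typescript
// Sketch of the coin-voting bound: anons are cheap, but anons with
// on-chain stake behind them are not. Values are invented for illustration.
function maxCredibleAnons(tokenBalance: number, minStakePerAnon: number): number {
  return Math.floor(tokenBalance / minStakePerAnon);
}

// With 1,000 tokens and a 50-token minimum stake, at most 20 credible anons.
console.log(maxCredibleAnons(1_000, 50)); // 20
```

Which is perhaps why it is not quite equivalent: the same stake buys plural-seeming voices, not just vote weight.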

In the absolute, none of these choices of power measure is necessarily better than any other one could think of. Eventually every participant will be biased towards whatever metric serves them best, and will complain about others not following the “fair” rules.
About the idea of having it “sound like a human”: it’s already possible to do much better at text generation, and layer retraining can fool any automated detector, even though it currently requires some infrastructure to do so easily.

So there is no fair rule that would favour any one quality in governance, for deciding on one is instantly and intrinsically unfair. And we have to be aware that this medium will soon be a theatre where bots are at each other’s throats all the time.

I am leaving this for people cleverer than me to fix, and I wish them the best of luck, especially if the goal is to fix it before it gets out of control.
On my side, the question is more about becoming aware of these traps, and attempting to avoid falling into them during my own decision-making, since this trend is here to stay and we should learn to deal with it as best we can.

> The meta-topic truly is about which criterion should be taken as the measure of power in governance. And it’s fascinating to see how perspectives differ from person to person, probably due to our own biases.

Great point. You’ve also helped me clarify my own thinking a little more with regard to the challenges we face now and going forward.

I was aiming to make a point about (existing) ecosystem forums - e.g. here, Polkassembly etc. - since these spaces are currently the primary off-chain → on-chain interfaces, where necessarily subjective ideas transition via binary yes/no referendums into objective on-chain events.

Pressing a little more on your insight around the three different ‘perspectives’: right now there is an adversarial nature to the positions - people will use whatever skills, tools and resources are at their disposal to get the outcome they want.

The danger of current designs is that all sides are pitted against each other - as you note, every participant ends up biased towards whatever metric serves them best.

What becomes interesting is considering how the current drift of this relational dynamic towards zero-sum outcomes can be reoriented into positive-sum games for all involved.

Stepping through:

  1. Ideas/rhetoric are important, but uninterrogated ideas will never be as potent as those that benefit from (friendly) critique. At their best, we should be designing public spaces that allow us to test out and refine concepts, positions and arguments. Identifying and unravelling contentious subjects is healthy - vital for the culture, even - and ultimately where we’ll discover valuable insights.

  2. Technology - ChatGPT in this case - can enable all the things @mister_cole identifies and more. Ultimately it can level the playing field in some ways and remove admin, but like rhetoric it is a skillset that can be used for good and bad with respect to the goals of the system - which leads us to…

  3. Outcomes - per your point about Alice, this is essentially leveraging all available resources to get the win. It is analogous to the sports team that wins at all costs: it might not be pretty, but in the end they lift the trophy. However we can push on this ‘absolute’, since short-term wins may well not optimise for the long-term health of a (political) system.

Which brings me back to perhaps the main point:

> if we want to optimise for intelligent adoption, rather than just dumb governance engagement, we need to design differently.

In this case, we aim for positive-sum games - where the relational dynamics and incentives of the three perspectives you identify still operate under tension, mediating the power of each approach, but ultimately making us all smarter, with better long-term decision-making capabilities…

This recent podcast between Lex Fridman and Tim Urban of Wait But Why brings the point alive really well when they discuss echo chambers vs idea labs.

We’re also developing Root - a new decision-making protocol that takes some of this forward - and we’d actually begun discussing how ChatGPT can become part of the process.

There’s some talk of this in the recent Attempts at Governance (AAG) call, starting here. It was suggested that users flag posts that appear to be chatbot-generated, and that seems reasonable to me.
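
As a rough sketch of how such flagging might work - the threshold, weights and field names below are invented for illustration, not actual forum policy - a post could be queued for moderator review once enough distinct flaggers, weighted by their own standing, agree:

```typescript
// Hypothetical flag record; fields are illustrative only.
interface Flag {
  flagger: string;       // account doing the flagging
  flaggerWeight: number; // e.g. derived from the flagger's own reputation
}

// Queue a post for human review once weighted flags cross a threshold.
function shouldQueueForReview(flags: Flag[], threshold = 3): boolean {
  // Count each flagger once, keeping their weight.
  const distinct = new Map<string, number>();
  for (const f of flags) distinct.set(f.flagger, f.flaggerWeight);

  let total = 0;
  for (const w of distinct.values()) total += w;
  return total >= threshold;
}
```

Weighting the flaggers matters for the same reason as weighting commenters: otherwise flagging itself becomes one more thing anon armies can brigade.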