Ethical rating DAO to rate Web2 services

I have an idea for a new DAO and I’d be happy to receive feedback (technical and from a user perspective).

The idea is to have an ethical rating for online services (websites/apps), where every user can submit their own rating and see the aggregated rating of all users. It works similarly to existing rating systems for hotels or restaurants. The rating consists of 4 categories (which are subject to change):

  • Privacy

    Does the service preserve my privacy? For example, is my data shared with other services?

  • Clarity / User friendliness

    Is the user interface clear? Is it easy to find the relevant information? For example, placing advertisements in locations where users click by accident can be viewed as bad clarity.

  • Child safety

    Is it safe for children of any age to use it?

  • Bubbliness ↔ Relevance

    Does it put me in a filter bubble, only showing content based on my previous behaviour? Or does it offer a good mix of personalized content and general content that is not related to my previous online behaviour?

Some characteristics:

  • Democracy: Anyone who has submitted more than 10 ratings can vote on:

    • the number of categories
    • the name and textual description of each category
    • the number of ratings needed to participate in these democratic votes

    This also means that, after such a vote, users might want to change their ratings.

  • Wallet/User-Setup: The current idea is to have yearly subscriptions for $1.

    I'm not sure yet what the best way to set this up is. But once a user has paid, they should receive the necessary tokens to participate and remain a valid user for a year.

  • The application would be financed by the incoming subscription fees.


  • The rating system will only unlock its full potential if an account can be linked to a unique natural person. As long as this is not possible, it can be manipulated.
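To make the membership rules above concrete, here is a minimal sketch of the eligibility logic: a $1 yearly subscription keeps an account active, and only active accounts with more than 10 submitted ratings can take part in governance votes. All names and thresholds are illustrative placeholders from the proposal, not a final design; on-chain this would live in a smart contract rather than off-chain Python.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical parameters from the proposal; both are themselves
# subject to change by a governance vote.
MIN_RATINGS_TO_VOTE = 10
SUBSCRIPTION_PERIOD = timedelta(days=365)

@dataclass
class Member:
    paid_on: date                 # date of the last $1 subscription payment
    ratings_submitted: int = 0

    def is_active(self, today: date) -> bool:
        """A member stays valid for one year after paying the fee."""
        return today - self.paid_on < SUBSCRIPTION_PERIOD

    def can_vote(self, today: date) -> bool:
        """Only active members with more than 10 ratings may vote."""
        return self.is_active(today) and self.ratings_submitted > MIN_RATINGS_TO_VOTE

# Usage
alice = Member(paid_on=date(2024, 1, 1), ratings_submitted=12)
print(alice.can_vote(date(2024, 6, 1)))   # True: active and >10 ratings
print(alice.can_vote(date(2025, 6, 1)))   # False: subscription lapsed
```

Note that without the person-to-account link mentioned above, nothing stops one person from funding many such accounts, which is exactly the manipulation risk described.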

Why blockchain?

  • A rating of online services should be organized by an independent organization that can be trusted. This could also be a classical trusted organization like an NGO. A DAO seems better suited, as it can adapt faster to rapid changes in technology. Maybe tomorrow a new ethical problem is found in today’s systems; then a new rating category can be added democratically, by the users themselves.

Why would this be needed at all?

  • Regulation of Web2 services is always lagging behind technology. As a society we need to react faster. Web3 offers great opportunities to build democratic processes that can set boundaries. We give control back to the people. As a first step, it’s only a rating system. At a later step these ratings could, e.g., be used to block certain services. A simple use case could be a child-safe browser blocking websites that are not child-safe. Approaches like this already exist, but they are currently governed by big companies and not decided on democratically.

I think such a service would be very valuable and could potentially reach users way beyond the Dotsama and even crypto bubble.

The neutrality and trustworthiness of such a DAO, however, would largely depend on the Sybil resilience of such ratings.

But irrespective of Sybil resilience, I believe we’d need a way to make arguments. How should I rate the privacy of the Signal messenger on a scale from 1 to 10? It’s certainly pretty good for p2p communication, but if you join group chats you’ll share your phone number with all members of that channel.
We need the possibility to add nuance from competent users, not simply a scalar rating.

Also, we should make sure that minority opinions aren’t simply drowned out by taking the average rating. The polarization of ratings should somehow be displayed.
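One simple way to avoid hiding polarization behind a single average: report the spread and the full histogram alongside the mean. The sketch below is only an illustration of that idea, not a proposed final aggregation scheme; the function name and output format are made up for this example.

```python
from statistics import mean, pstdev
from collections import Counter

def summarize(ratings: list[int]) -> dict:
    """Summarize 1-10 ratings without hiding polarization.

    Besides the mean, report the population standard deviation
    and the histogram, so a consensus rating and a polarized one
    with a similar mean no longer look alike.
    """
    return {
        "mean": round(mean(ratings), 2),
        "spread": round(pstdev(ratings), 2),   # high spread = polarized
        "histogram": dict(sorted(Counter(ratings).items())),
    }

consensus = [5, 5, 6, 5, 6, 5]        # everyone roughly agrees
polarized = [1, 10, 1, 10, 1, 10]     # similar mean, split opinions

print(summarize(consensus))   # mean 5.33, spread 0.47
print(summarize(polarized))   # mean 5.5, spread 4.5
```

Both lists average out to roughly the same score, but the spread and histogram make the split in the second one visible, which is exactly the information a plain average throws away.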

Not sure if it can be applied here, but it’s an interesting read about how (not?) to approach polarization anyway: Vitalik on X’s community notes