What about a "Validator State" weekly report?

NPoS is a very innovative yet challenging process for most nominators. On the other hand, the best validators, who invest heavily in the network, can struggle to gain proper attention from nominators. What if we could help create such a social layer by providing a weekly top list of the best and most decentralized validators? This list would highlight those with the most consistent connectivity quality and those who update their node software promptly.

We propose creating a Polkadot validator state report with the following information published weekly:

  1. Top 10 validators (from Decentralized Nodes) of the week.
  2. Overall DN performance change from the previous week (based on nominator inflows/outflows), highlighting some leaders and laggards.
  3. Attention points: validators (in DN) with risks, radical fee changes, etc.

After verification, these reports could be shared on existing community channels such as the Polkadot or Web3 Foundation X accounts. This can help create a social layer between nominators and validators and promote the best validators to attract nominator attention (and fulfill DN sustainability goals after the DN wave). It could be especially useful for large nominators who have known the validators from the start and could use this data to gauge performance.

The main expected result is to stimulate the core NPoS activity: helping nominators choose the best validators and spurring validators to compete for attention with the best hardware/software/operations.

How this list can be created: I personally believe we should use a combination of telemetry data from the last era plus logs to check consistency, current node hardware, software update timeliness (statistical delays), and geographical distribution, together with on-chain data (points, fees, slashing, etc.). Community input on this would be appreciated!
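For the on-chain part, here is a minimal sketch of what the data pull could look like, using py-substrate-interface. The endpoint and the assumption that the queried chain hosts the Staking pallet are illustrative, and value layouts vary across runtime versions:

```python
# Sketch: per-era validator points and commission from chain state.
# Assumes py-substrate-interface and that the queried chain hosts the
# Staking pallet; exact value layouts vary across runtime versions.
from substrateinterface import SubstrateInterface

substrate = SubstrateInterface(url="wss://rpc.polkadot.io")

# Index of the current active era.
era = substrate.query("Staking", "ActiveEra").value["index"]

# Era points per validator: {"total": ..., "individual": [(account, points), ...]}
reward_points = substrate.query("Staking", "ErasRewardPoints", [era]).value

for account, points in reward_points["individual"]:
    # ValidatorPrefs.commission is a Perbill (parts per billion).
    prefs = substrate.query("Staking", "ErasValidatorPrefs", [era, account]).value
    commission_pct = prefs["commission"] / 10_000_000  # Perbill -> percent
    print(f"{account}: {points} era points, {commission_pct:.2f}% commission")
```

Telemetry, logs, and geographical data would need separate sources on top of this.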

Additionally, the report can highlight anomalies to monitor, such as slashing events and radical fee changes (see, for example, the important post about stake.su on X).

I would love to receive community feedback on this topic!

PS. To comply with regulations, this report will not include a call to action to stake. Instead, it aims to draw the attention of nominators and validators to help them perform their NPoS tasks effectively.

PPS. This data could also cover the entire validator set (not just the Decentralized Nodes) to promote and compare across the whole set.

PPPS. Thanks to Paradox and Michalis for helping refine this idea’s roots and focus on the goals.


If I understand correctly, Decentralized Nodes rotates nominations slightly slower than 1kv, making weekly DN performance metrics even less meaningful than 1kv performance.

A top 10 of validators does not make much sense. It’ll make some sense once “Correct validator rewards” (polkadot-fellows/RFCs PR #119 on GitHub, by burdges) happens, but really that’s just hardware specs plus bandwidth.

That said…

As part of “Correct validator rewards” (polkadot-fellows/RFCs PR #119) we really should talk publicly about validator performance metrics, even ones that do not impact rewards, like brief no-shows. This also applies if we implement availability data collection but not the rewards or the tit-for-tat game.

This data could be displayed somewhere for all validators, with some subset list box for DN. A subset list box for DN may be useful elsewhere too.

Also…

DOTs are a tool with which nominators do the work of choosing the validators. It’s dangerous if nominators think recent rewards represent quality. Anyone can run an ultra-low commission for a year, then hike the commission and hope nobody notices, or worse, attack the network.

As a network, Polkadot wants validator operators to be honest, competent, and independent, aka small.

As a nominator, you obviously want honest and competent validators. Although not everyone realizes this, you want independent ones too, because this reduces slashing when mistakes get made and increases the odds of governance refunds.

It’s obviously wonderful if a validator operator speaks publicly, à la blogs, Twitter, etc., about operational security, but that’s pretty rarefied. I do not specifically mean Polkadot opsec here either; merely ranting online about Telegram being insecure is better than nothing. lol. Again though, not realistic.

I’ve always thought physical meetups would make the most sense, because then nominators could meet local validator operators in person. I suppose our pre-covid general meetups were kind of a money pit with little benefit, and then covid killed them. We want validator operators in unusual places too, which prevents or complicates meetups.

Tor does this, but Tor has more operator involvement tiers. Afaik, Tor exit node operators were the main attendees at Tor relay operator meetups, because exit node operators needed to plan things like “how to explain Tor to cops.”

Yet still, there may be something one could do around physical meetups. Assuming the treasury does not fund meetups now, then if Polkadot ever restarts meetup funding, presumably using the treasury, we could ask that meetup-funding applicants obtain statements from a few DN operators that they’re nearby and interested in attending.


Yep, we plan to provide the validator feature as one of the main ones for Motif. We’ll gather the data, collect nominators’ diffs, and so on. So far, this info will be available through SubVT, Polkassembly, and Web3Alert. It will also be accessible via API or within BigQuery for any ecosystem app. So yeah, not only a top 10 in general.

Regarding supervising validators’ quality: from my perspective, nominators should be the supervisors of P2P network quality within the NPoS framework. They can help fulfill that goal without centralization or any restrictions. This supervising is quite hard to do (and boring) for most nominators, so the idea is to provide a digest that makes it easy and fun (for most). As long as there are no calls to action or incentives inside that digest that might centralize stake, it can help keep decentralization in place.


Great idea, I think!

I have been planning a similar report for a (Substrate-based) chain where I participate, and the problems I see (in “my” environment) are slightly different from those on Polkadot/Kusama, so I plan to focus on different things.

Most of what follows may not be applicable here, but maybe you or someone will like some of my sharing.

The need I see (and I know there are maybe 5-10 people like me in my community, so it’s meaningless in the grand scheme of things) is for weekly updates that focus on highlighting bad and good validators.

I plan to do this for free and not seek any tips/funds from my chain’s DAO because that allows me to write biased reviews and at least get some kick out of doing that. (I also happen to think the DAO shouldn’t spend on subsidizing such work - if it’s valuable enough, it will get enough tips or find subscribers, or at least more free contributors).

Thoughts:

  • It’s easy to report “best” validators, but that’s already on-chain
  • “Top N” contributes to crowding, as everyone who nominates tends to sort descending by ROI and picks some from the top 30, leaving the rest less well-nominated & contributing to centralization
  • It’s hard to identify & report on truly good validators. One has to analyze semi-manually and look across a range of measures that the report author thinks matter (as it should be, since I’m not even going to try & be “independent”).

After months of on-and-off observing and thinking about these things, I’ve identified some bad and good patterns, and I’m going to completely avoid Top N, and only report on under-nominated but (IMO) deserving “mid-range” validators: not the worst, not the best, but “good”.

What’s good? IMO that’s validators who respond from the contact IDs in their on-chain identity, maybe have some additional attestations, are active and positive on social media, have a good voting record (not just participation, but also how they voted), contribute to the community, and so on.
Some of it can be automated, most of it could be automated but isn’t viable to automate, and some has to be manual.
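For what it’s worth, the identity part of that automation could look roughly like this, again with py-substrate-interface. The Registration layout differs between runtimes (and some networks host pallet_identity on a separate chain), so the field access is illustrative:

```python
# Sketch: flag validators whose on-chain identity includes contact fields.
# Assumes pallet_identity is on the queried chain; Registration layout
# varies by runtime, so treat the dict access below as illustrative.
from substrateinterface import SubstrateInterface

substrate = SubstrateInterface(url="wss://rpc.polkadot.io")

def has_contact_info(account: str) -> bool:
    reg = substrate.query("Identity", "IdentityOf", [account]).value
    if reg is None:
        return False  # no identity set at all
    info = reg["info"]
    # Fields decode as None/"None" when unset, or e.g. {"Raw": "..."} when set.
    return any(info.get(field) not in (None, "None")
               for field in ("email", "twitter", "web", "riot"))
```

Voting records and social activity would still need manual review on top of a check like this.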

My main idea is to highlight and promote validators who I think are deserving, but not near the top, and appeal to nominators who don’t mind earning a “non-top” ROI if they know why they get 2 or 3% less.

Nominators who care only about ROI don’t need to read anything but perf indicators, so there’s no point in my trying to do what they already do, possibly much better than I can.
As a validator-nominator, do I care if I make 3% less? If I cared I wouldn’t even be here, let alone work for free on a newsletter like that… I care about countering centralization, and I think there may be others like me.

I also plan to have a “wall of shame” section, because I’d like everyone to know who “Bottom N” are (in my subjective opinion). I have several ideas about this, but it’s mostly the opposite of the “good” ones I describe above, plus some behavior patterns that I find unhelpful for my community.

I also plan to add some “gamification” for validators and nominators who want to participate, to get contributions from other community members and motivate them to engage in valuable activities unrelated to validation.
The reality is most “professional validators” don’t care; many run several chains and have no clue about much beyond getting their rewards out to dump them. That’s what “focused validator” means, so I don’t judge, but I’d like to sometimes nominate less-efficient validators who participate in the community (I already do that with my stash).

In my community, for-profit nominators don’t care about the best h/w. A steady, high ROI is the only thing they care about, and that info is easy to find and access. It can be (and is, though it’s not always working perfectly well) completely automated and hands-off - there’s no need to read any reports as far as I can tell. [1]

This tendency to focus on ROI also happens to promote centralization (both geographical and validator-related, i.e. pools), which I think works against my community.
NPoS can’t counter that; only education and persuasion - showing that the “best” validators aren’t necessarily the top-ranked validators by ROI - can do it.

[1] I used to have my own validator selection tool that was just a data-gathering script with a simple SQL query, but one that worked better than the “official” staking tool because it dropped certain types of validators I disliked, while the official tool did not. But later I realized I should simply nominate validators I knew from online community activities, so I dropped that automation drive. (I don’t tell them I do it; that way it’s easier to drop them if they do funny stuff, but so far I’m much happier than when I was nominating best-yielding nodes.)
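For illustration, the kind of filter described in [1] might look like the following. The `validators` table, its columns, and the thresholds are all made up, not the actual tool:

```python
# Sketch: a "simple SQL query" validator filter over locally gathered data.
# The database, schema, and thresholds are hypothetical.
import sqlite3

conn = sqlite3.connect("validators.db")
rows = conn.execute("""
    SELECT address, commission, avg_points
    FROM validators
    WHERE commission <= 5.0            -- drop high-commission validators
      AND slash_count = 0              -- drop anyone ever slashed
      AND commission_changes_90d < 2   -- drop frequent commission changers
    ORDER BY avg_points DESC
    LIMIT 16                           -- nomination slots vary by chain
""").fetchall()

for address, commission, avg_points in rows:
    print(address, commission, avg_points)
```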


I shouldn’t comment on this off-topic part, but it’s already in the comments, so…
My community runs a mixnet that uses a Substrate-based chain for its consensus mechanism and governance, so it has similarities with both Polkadot and Tor.

I’ve been validating on my network for 4 years now, never met a single (other) validator. Maybe I’m anti-social in real life, but I don’t particularly want to meet fellow validators. I’m active online and share info almost every day and that works well enough for my needs. I know some people prefer in-person meet-ups, but some don’t.

Related to Tor and OpSec, knowing other operators isn’t necessarily a good thing. If I visited a mixnet operator event and realized a bunch of people knew each other in real life, I’d find that disturbing. I also wouldn’t want to make it easy for someone to come to one place and ID 60% of all validators.

OK, that was off-topic, let’s move on.

Yep, I support that view very much, and that is the main reason why I’ve been going in this direction myself - simple stats, or a mix of stats (with some penalty points for pulling the rug, changing commission by > 5% in a week, etc., sketched below), still can’t tell the whole picture.
I want to appeal to biased nominators with shared values.
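A minimal sketch of such a penalty mix, with made-up weights, thresholds, and field names:

```python
# Sketch: penalty points per validator; lower is better. All weights and
# field names are illustrative, not a real scoring model.
def penalty_score(v: dict) -> float:
    score = 0.0
    if v["commission_change_7d"] > 5.0:      # rug-pull-style commission hike
        score += 10.0
    score += 25.0 * v["slash_count"]         # slashes weigh heavily
    score += 5.0 * v["missed_sessions_30d"]  # connectivity problems
    if not v["has_identity"]:
        score += 2.0                         # anonymous operators rank lower
    return score

validators = [
    {"name": "A", "commission_change_7d": 0.0, "slash_count": 0,
     "missed_sessions_30d": 1, "has_identity": True},
    {"name": "B", "commission_change_7d": 9.0, "slash_count": 0,
     "missed_sessions_30d": 0, "has_identity": False},
]

# Sort ascending by penalty before the manual review pass.
for v in sorted(validators, key=penalty_score):
    print(v["name"], penalty_score(v))
```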

That may or may not be a big deal, depending on how often one checks.
If someone charges 1% for 1 year and then suddenly hikes to 100% - as long as you skim through a validator newsletter just once a week, it would take you some 4 eras to find out and de-nominate them, which works out to a small, roughly 1% relative drop in ARR for the year. Still a better deal than someone who is perfectly consistent and runs at 2.5% :slight_smile:
But you’re right in the sense that people do pull off long cons and sometimes victims don’t realize for weeks.
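To make the arithmetic explicit (daily eras and a 15% nominal ARR are assumptions, not figures from my chain):

```python
# Back-of-the-envelope for the hike scenario above. Assumes daily eras
# and a 15% ARR at the advertised 1% commission; numbers are illustrative.
arr = 0.15
delay_eras = 4                       # detection delay when skimming weekly
lost_fraction = delay_eras / 365     # share of the year spent at 100% commission
print(f"relative ARR drop: {lost_fraction:.1%}")           # ~1.1%
print(f"effective ARR: {arr * (1 - lost_fraction):.2%}")   # ~14.84%

# For comparison, a perfectly consistent validator at 2.5% commission:
gross = arr / (1 - 0.01)             # back out the pre-commission rate
print(f"steady 2.5% validator: {gross * (1 - 0.025):.2%}") # ~14.77%, slightly worse
```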

In my community I campaigned for a change in our staking tool, to drop any validator who has had a 100% commission at any point. Of course, now they can do 99.99% or move to a new wallet. We know that doesn’t solve anything, but it will slow them down for a while. I’m now considering modifying the staking tool to be able to load blacklists (e.g. a CSV, like adblock filters) or whitelists, which - if implemented - would ultimately enable multiple authors to promote their own versions of the “best” and “worst” validators; a sketch follows below.
IMO that would be a big improvement over the common simple analysis that assumes nominators only care about ARR.
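A sketch of how such adblock-style lists could be loaded; the one-address-per-line CSV format (with an optional reason column and `#` comments) and the file names are assumptions:

```python
# Sketch: load community-maintained block/allow lists, adblock-filter style.
# File names and the CSV layout (address, optional reason) are assumptions.
import csv

def load_filter_list(path: str) -> set[str]:
    with open(path, newline="") as f:
        return {row[0].strip()
                for row in csv.reader(f)
                if row and not row[0].startswith("#")}

blacklist = load_filter_list("blacklist.csv")
whitelist = load_filter_list("whitelist.csv")

def allowed(address: str) -> bool:
    # If a whitelist is provided it takes precedence; the blacklist always wins.
    if whitelist:
        return address in whitelist and address not in blacklist
    return address not in blacklist
```

Multiple authors could then publish competing lists, and nominators would subscribe to the ones whose values they share.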

We intentionally allow validators to hike commissions because we prefer that dishonest validators “attack” their nominators instead of attacking the network. It simply proves those nominators have not done their job correctly.

Also, nominators losing rewards winds up being only a minor wrist slap. If the validator instead chooses to get itself slashed in an attack, then the nominators could lose much more.
