Canonical Name Resolution on Polkadot; Reviving PNS

Hi all,

Long-time lurker, first post.

Background: I’m a systems engineer who’s been building in the Polkadot ecosystem and decided it was time to start shipping the ideas I’ve had sitting on the backburner.

I’ve been working on canonical name resolution for Polkadot. While researching the space I came across PNS and the pns.link frontend… solid foundation, but the repo has been untouched for ~4 years.

I forked it, brought it current on Polkadot SDK 2603, and have been extending it.
Current additions beyond the base fork:

- Name marketplace: buy, sell, and transfer registered names
- Custom attributes for other projects

Amongst other things.

The core use case is forward lookup / canonical name resolution, basically DDNS for the ecosystem.

My question for the community: what are you currently building that would benefit from decentralized name resolution?

I’m in early design on several dependent projects and want to understand what integration points, if any, matter most to active builders before I finalize the architecture. Really more or less looking to align with current standards instead of creating custom attributes no one needs except for me. Happy to share more technical detail on the pallet design if there’s interest.

Thanks for looking


There’s no point in doing DNS if it’s not global (ICANN-supported). Fortunately, we have options.

While most other networks partner with a centralized corporate entity for ICANN compliance, we don’t have to. Through the Polkadot Community Foundation (PCF) it’s possible to create an owner-less company, with management appointed by governance, to act as the point of contact for ICANN communication and integration: a single point for domain transfers (moving web2 names on/off chain) that would also let us purchase and register custom ICANN gTLDs ( .dot .ksm etc ). The on-chain system could then receive incoming transfers, perform outgoing transfers, register gTLDs, and more.

You could think of it as Web 2.5, a bridge between the web2 and web3 worlds. The system doesn’t technically need ICANN to exist or be used, but without it, only those who operate custom resolution systems will be able to use it, because ICANN does not like its authority challenged and has forbidden support for non-ICANN domains.

Over at the ibp-network repositories on GitHub we have built a system that provides DNS resolution and health checks in a decentralized way. It’s a bit hacky and coded to our specific needs, but it should give you the idea:

  1. ibp-geodns – pdns remote plugin for custom resolution – reads status via pubsub
  2. ibp-geodns-monitor – monitoring setup, akin to a GCP health check (ensures a system is up and logs the outage if not) – writes via pubsub
  3. ibp-geodns-collator – monitors pubsub and records events for display via the API – reads via pubsub
  4. ibp-geodns-dashboard – dashboard that connects to the collator API
  5. ibp-geodns-agent – self-attested monitoring application, the equivalent of a Zabbix active agent – writes via pubsub (still a work in progress; it needs more time)
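The pubsub flow connecting those components can be sketched as follows. This is a minimal in-memory toy, written in Python for brevity; the topic name, event fields, and class names are hypothetical and do not reflect the actual ibp-geodns wire format.

```python
# Toy sketch of the monitor -> pubsub -> collator flow described above.
# Topic names and event fields are hypothetical, not the real format.
import json
import time
from collections import defaultdict

class PubSub:
    """In-memory pub/sub bus standing in for the real transport."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

class Monitor:
    """Plays the ibp-geodns-monitor role: probes a member and
    publishes the result as a health event."""
    def __init__(self, bus):
        self.bus = bus

    def check(self, member, healthy):
        event = {"member": member, "healthy": healthy, "ts": time.time()}
        self.bus.publish("health", json.dumps(event))

class Collator:
    """Plays the ibp-geodns-collator role: records events so a
    dashboard API can display the current status."""
    def __init__(self, bus):
        self.status = {}
        bus.subscribe("health", self.on_event)

    def on_event(self, raw):
        event = json.loads(raw)
        self.status[event["member"]] = event["healthy"]

bus = PubSub()
monitor = Monitor(bus)
collator = Collator(bus)
monitor.check("rpc-eu-1", healthy=True)
monitor.check("rpc-us-2", healthy=False)
print(collator.status)  # {'rpc-eu-1': True, 'rpc-us-2': False}
```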

So – What do we need?

We need to make sure that the entire stack needed to launch a project on chain is decentralized, that it works on existing systems for backward compatibility without tinkering, and that it is independently available via web3 directly. This means:

  1. Domains
  2. SSL generation and a certificate database
  3. Hosting (apps / filestore)
  4. Communication (email, IM, and Discord equivalents)
  5. RPC accessibility (light clients)

Eventually the goal should be to rebuild networking itself around an onion-style, client-determined-route packet-forwarding system with innate layered encryption.

Where is DNS lacking? We can already register non-ICANN-compliant names on various non-DOT chains, and no one uses them because they’re not ICANN-compliant or integrated. We need a marketplace for DNS hosting, so it’s possible to register or bring a domain on chain and then choose from a marketplace of available DNS hosts to provide the hosting. This is where reputation becomes key, and an on-chain reputation system is needed: as a DNS host you can return whatever results you want, but without DNS hosts it’s not possible to have a system that is usable without a custom client. The other option is to have the PCF subsidiary perform all DNS resolution.

Storing all these health states on chain is a bit of a mess. There are a couple of other ways to do it: write them to a filestore; keep using pubsub, record only confirmed offline states on-chain, and have pubsub sign its messages so their authenticity can be verified; or store everything on chain and prune it every other era (or whatever) so we only keep as much state as needed.
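The signed-pubsub option can be sketched like this. A real deployment would sign with each monitor’s node key (e.g. ed25519); an HMAC over a shared key stands in here purely to keep the example stdlib-only, and the field names are made up for illustration.

```python
# Sketch of authenticated pubsub messages: sign on publish, verify on
# receive, so a forged or tampered offline report is rejected.
# HMAC with a shared demo key stands in for real per-node signatures.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # hypothetical; per-node keys in practice

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify(message: dict) -> bool:
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["sig"])

msg = sign({"member": "rpc-eu-1", "state": "offline", "era": 412})
assert verify(msg)          # authentic message passes
msg["payload"]["state"] = "online"
assert not verify(msg)      # tampering is detected
```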

The zone data itself should be kept in a filestore and encrypted such that DNS hosts can decrypt it, the client can encrypt and decrypt it, and no one else can decrypt it. Basically, RBAC turning an on-chain filestore into a permissioned, encrypted S3 store.
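One common pattern for that access model is envelope encryption: a single data key encrypts the zone file, and that data key is wrapped once per authorized reader (the client, each chosen DNS host). The sketch below uses a toy SHA-256-counter stream cipher purely so it runs with the standard library; a real system would use AES-GCM or a NaCl box with keys anchored on-chain, and all names here are hypothetical.

```python
# Sketch of envelope encryption for zone data: one data key encrypts
# the zone; the data key is wrapped separately for each reader.
# The keystream cipher below is a TOY for illustration only.
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256 counter-mode keystream (toy cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_zone(zone: bytes, readers: dict) -> dict:
    """`readers` maps a reader id to that reader's (hypothetical) key."""
    data_key = os.urandom(32)
    return {
        "ciphertext": keystream_xor(data_key, zone),
        # wrap the data key once per authorized reader
        "wrapped_keys": {rid: keystream_xor(rk, data_key)
                         for rid, rk in readers.items()},
    }

def decrypt_zone(blob: dict, reader_id: str, reader_key: bytes) -> bytes:
    data_key = keystream_xor(reader_key, blob["wrapped_keys"][reader_id])
    return keystream_xor(data_key, blob["ciphertext"])

client_key = os.urandom(32)
host_key = os.urandom(32)
blob = encrypt_zone(b"www 300 IN A 192.0.2.10",
                    {"client": client_key, "dns-host-1": host_key})
assert decrypt_zone(blob, "dns-host-1", host_key) == b"www 300 IN A 192.0.2.10"
```

Revoking a DNS host then just means re-encrypting with a fresh data key and omitting that host from the wrapped-key set.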

There is security through obscurity, and it should not be completely discounted – I believe most system admins would choose NOT to leak data that may help an attacker.


We don’t need ICANN, and we don’t need the PCF. We need browser support, like Brave’s for ENS.

Here’s our Kusama project for commit-reveal voting on IPFS. Today we have to use ENS. PNS might be preferable:

Browser support would work, yes. It would also create a support nightmare. You have to build things for the lowest common denominator possible.

I agree with the direction here. If DNS is meant to be globally usable it ultimately needs to interface with ICANN and the existing DNS root. Otherwise it becomes a parallel naming system that only works for users running custom resolvers.

Currently, gateways like eth.limo provide an important intermediary step. They translate decentralized names into something browsers and existing infrastructure can resolve, making systems like ENS usable today. Operating eth.limo alone already costs around a million dollars per year, including substantial legal and compliance costs.

The next step is creating the first decentrally owned TLDs, governed through a DAO, with infrastructure that integrates DNS resolution directly with the onchain registry. This goes beyond eth.limo by making ownership, governance, and resolution fully onchain, while still providing a global DNS layer so domains resolve normally on the internet. The cost of building and operating this fully integrated system would be substantially higher than a single gateway.

Even with fully onchain governance, shared infrastructure is still needed to make the decentralized registry usable for everyone on the existing internet. Running a reliable global DNS and gateway layer is expensive. While the system could be partially funded by domain sales or other onchain incentives it is unrealistic to expect a large number of buyers. Because it is essential shared infrastructure it still needs a sustainable funding model and a clear governance system.

ICANN requires WHOIS entries. That contradicts unpermissioned name registries.
We need light clients in browsers to resolve PNS. Everything else is make-believe decentralization IMO

DID required for purchase

And who attests these DID? KYC/KYB intermediaries? Self-signed won’t do for ICANN

Who attests the KYC for the existing entries?

In theory the registrar, right?

To my knowledge, the only hard requirement is an email address that works; the name, address, etc. are already self-attested. I have never done physical address verification, and I have many domains where the address on the WHOIS doesn’t match the address on the credit card. You also have services like Domains By Proxy that anonymize the WHOIS but pass through emails. Clearly that’s permitted by ICANN.

For DID, there are multiple ways to go about it, and none of them require a contract with Deloitte or whoever. The PCF subsidiary could also operate automated verification for the points that need it.

If a browser natively ran a light client it could in theory provide fully decentralized resolution out of the box. In practice this is extremely hard. Distributing updates for multiple chains, syncing headers, and handling proofs would bloat the browser, and it is unlikely that browsers will natively run light clients for many chains. A plugin can handle some of the heavy lifting but adds friction, requires installation, and does not work on mobile. Loading a client from a web app served through a gateway avoids those distribution and performance issues but shifts trust to the gateway. IMO, eth.limo is a good balance. It is a gateway, so not fully trustless, but it is seamless because it integrates with existing internet infrastructure and provides fast ENS resolution without requiring users to run heavy clients locally.

It is relevant to note that eth.limo, since it’s a centralized gateway service, faced legal challenges regarding privacy:

Operating a name server, since you don’t distribute content, just providing name resolution has a much narrower legal scope regarding privacy.

Thanks for the detailed breakdown. This is exactly the kind of critical analysis that sharpens a project’s direction.

On DNS encryption: public naming is public by design. DNS has always been unencrypted and transparent… that’s a feature. The trust model for a public naming layer in this context is consensus, not encryption. Any node serving resolution is serving consensus-verified chain state, so the data integrity question is already answered by the protocol itself.

On the resolver trust problem: rather than a host reputation marketplace, the more robust pattern for end-user apps is querying multiple independent RPC endpoints and requiring agreement before accepting a result. No single compromised node can poison resolution. Reputation can still exist and be informed partly by end-user feedback, but the primary trust mechanism is redundancy and consensus verification, not a centralized score.
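The multi-endpoint agreement pattern can be sketched in a few lines. The endpoint callables below are stand-ins for real RPC clients; the function name and threshold semantics are my own, not an existing API.

```python
# Sketch of quorum-based resolution: query several independent RPC
# endpoints and accept a record only if a threshold of them agree,
# so no single compromised node can poison the answer.
from collections import Counter

def resolve_with_quorum(name, endpoints, threshold):
    """`endpoints`: callables name -> record (may raise on failure)."""
    answers = Counter()
    for query in endpoints:
        try:
            record = query(name)
        except Exception:
            continue  # unreachable endpoint just loses its vote
        if record is not None:
            answers[record] += 1
    if answers:
        record, votes = answers.most_common(1)[0]
        if votes >= threshold:
            return record
    raise RuntimeError(f"no quorum reached for {name!r}")

def honest(name):       # two honest endpoints agree on the record
    return "addr-alice"

def poisoned(name):     # one compromised endpoint returns a lie
    return "addr-mallory"

assert resolve_with_quorum("alice.dot", [honest, honest, poisoned],
                           threshold=2) == "addr-alice"
```

With three endpoints and a threshold of two, the single poisoned answer is simply outvoted; the caller only sees an error when too few endpoints agree.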

On scope: the Web2.5 bridge is a valid and important goal, and the PCF path you’ve outlined makes sense for that. But there’s a real and immediate need for a Web3-native naming layer that works within the Polkadot ecosystem today: wallets, explorers, RPC discovery, app routing. One compromised address in a single-point resolution system takes everything down. A decentralized, consensus-backed naming layer is the fix for that, and it needs to exist independently of ICANN.

The bridge and the native layer aren’t in competition; they serve different timelines and different users.

Really interested in making that switch happen. A few questions to make sure I build exactly what you need if you don’t mind…

What are you currently using ENS for specifically? content hash records pointing to IPFS, wallet resolution, or both? And is the commit-reveal process itself touching ENS or just the frontend routing?

On the browser side: is Brave’s ENS support sufficient for your users today, or are you hitting limitations there? That helps me understand whether native browser integration is a hard requirement or if a wallet/extension-based resolution path would work for your use case.

PNS already has content-addressing hooks, and the record types are extensible. The ones I build out are for other projects I’m working on that depend on name resolution.

I want to make sure whatever I build for IPFS integration matches your actual workflow rather than my assumptions about it.

Thanks for the comment on this thread

Just looked at the link, CoReVo is a great example of exactly what PNS is built for. Step 3: “announcing your public encryption key on-chain” maps directly to dedicated key slots in PNS records. Rather than stuffing keys into text records like ENS, PNS has typed key slots designed for that purpose.
Group identity and membership discovery could also be handled through PNS names rather than raw addresses. I’m interested in what your current pain points with ENS are for this specific workflow.
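To make the typed-slot idea concrete, here is a rough shape of such a record, sketched in Python for brevity (the pallet itself would be Rust). The enum variants and field names are hypothetical illustrations of the design, not the actual PNS pallet types.

```python
# Sketch of a name record with typed key slots, as opposed to
# stuffing public keys into free-form text records ENS-style.
# All names here are hypothetical, not the real PNS types.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class KeySlot(Enum):
    ENCRYPTION = auto()   # e.g. the public key CoReVo announces on-chain
    SIGNING = auto()
    SESSION = auto()

@dataclass
class NameRecord:
    owner: str
    content_hash: Optional[bytes] = None          # e.g. an IPFS CID
    keys: dict = field(default_factory=dict)      # KeySlot -> key bytes
    text: dict = field(default_factory=dict)      # free-form fallback

record = NameRecord(owner="alice.dot")
record.keys[KeySlot.ENCRYPTION] = bytes.fromhex("aa" * 32)

# A resolver can fetch exactly the key type it needs, with no
# string-parsing convention layered on top of a text record:
assert record.keys[KeySlot.ENCRYPTION] == b"\xaa" * 32
assert KeySlot.SIGNING not in record.keys
```

The point of the typed slot is that consumers agree on semantics at the schema level, rather than each app inventing its own text-record key-naming convention.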

Naming is not public by design. You have to know the name to see if it exists. The only way to get every subdomain under a domain is to do a zone transfer request to the authoritative name servers. If those are properly configured they would refuse the request. With AXFR zone transfer disabled – brute force is required which can be easily mitigated. Security through obscurity is a thing and you should never help your attacker.

Yes, but then you can’t make it 2.0-compatible, because a 2.0 client can’t look up a domain hosted on-chain. Authoritative DNS, and the reputation of the hosts that domain owners select for it, is a requirement for 2.0 compatibility. The whole point of 2.0 compatibility is to make it work with everything out of the box, without modification; the 2.5 bridge along with the DNS hosting marketplace provides that. Yes, 2.0’s trust-based model sucks and 3.0’s trustless model is good. I’m not sure how you square transparency with privacy in the case of domains: on 3.0, I should be able to create a subdomain without everyone automatically knowing it exists. Personally, I liked the idea of having some sort of Filecoin-style K/V store where the storage node handles DNS lookups (with some work these could handle 2.0 traffic).