Smoldot updates thread

Hello everyone,

As many of you know, I’m the main developer behind smoldot, a light client and alternative implementation of Polkadot/Substrate that can run within the browser.

If you are a developer, smoldot is available on NPM and is rather easy to use.
If you are an end-user, you can try smoldot by choosing the “Light client” option for example on PolkadotJS, the staking dashboard, and others.

I make a lot of changes to the project, but I don’t have a good way of keeping people up to date with them. There is of course a CHANGELOG, but it doesn’t cover everything, and I assume that not many people read it. For this reason, I thought I’d open a forum topic where I announce noteworthy changes in smoldot.


The first change that I’m going to mention (which is why I’ve opened the topic) is that smoldot will no longer use a constant network identity (also known as a PeerId, which looks like 12D3KooWQz2q2UWVCiy9cFX1hHYEmhSKQB2hjEZCccScHLGUPjcc for example) like full nodes do, but instead will use a different identity for every single connection.

Before this change, if you connected to Polkadot (through smoldot) in, say, Paris and then flew to New York, you would keep the same network identity, and the full nodes you were connected to could tell that you had flown from Paris to New York. After this change, you get a different identity: in the same scenario, the full nodes simply see someone connect from Paris, then disconnect, then someone else connect from New York.

The objective is, as you might have guessed, to increase privacy.

(link to PR)


Love this work @tomaka – I would like to chart a course for switching our indexing from RPC-centric past into a (hopefully more robust, private) smoldot future in 2024.

Would it be too much to ask for the simplest browser demonstration that shows the latest block/hash from as many parachains of Polkadot + Kusama as possible?

It wouldn’t have to show anything more than what this does:

Correct me if I’m wrong, but you need 30-40 x 2 specs to pull this off. The specs don’t change after genesis, but if bootnodes are unknown or have changed, you’re not going to be able to make it work. My expectation would be that any parachain missing this key information would submit a small PR updating its spec + bootnodes?

If you did the first 20 out of like 80 parachains (skipping over those that died or are about to die), we would be happy to do the remaining ones going into 2024.

If the viability of the approach is blocked until BEEFY, ok, but we can do the menial data collection and community shepherding in the meantime as a prereq.

It’s unfortunately not possible to write a “simple” demonstration of that.

Smoldot acts as a replacement for JSON-RPC nodes.
Instead of doing something like const jsonRpcServer = new WebSocket('wss://json-rpc-server'), you instead do const chain = smoldot.start().addChain({ chainSpec: ... }).
Then instead of doing jsonRpcServer.send(request) you do chain.sendJsonRpc(request), and instead of doing jsonRpcServer.onmessage = ..., you do while(true) { const msg = await chain.nextJsonRpcResponse() }.

You can see an example and more documentation here:

Replacing a JSON-RPC server with smoldot is rather easy. However, querying anything from the chain (such as the latest block of parachains) is done using JSON-RPC requests, which is not easy, as it notably requires parsing the metadata.
Simplifying this is out of scope for smoldot, as smoldot is just there to provide low-level connectivity to chains and implement JSON-RPC functions. Instead, you are meant to use a higher-level library for such things. All such high-level libraries are unfortunately either non-light-client-friendly (PolkadotJS) or still work in progress. You could also write a high-level library specialized in indexing yourself; it’s not that difficult, but it’s clearly not in the realm of “simple”.

The PolkadotJS page that you have linked only pulls information from the relay chain. The relay chain contains the list of all parachains and their latest block, and that’s what is shown on the page. For this reason, there’s no need for parachain chain specs.

However, if you want more information than just the latest block of a parachain, then you need to provide its chain specification, yes.

That’s true. However we’ve recently merged RFC 8 which would solve the problem by storing the bootnodes on the relay chain. It’s currently waiting for Substrate to implement this, which might unfortunately take a long time.

RFC 8 would also allow connecting to a parachain without knowing its chain spec, but in a semi-insecure way. It’s not clear to me yet whether this is something that will actually be possible in the future, but it’s a path being explored.

BEEFY would unlock the possibility to prove that an old block indeed belongs to a certain blockchain, as otherwise you could be lied to.
However it’s just the first step, and many things need to be designed and implemented on top of this.
It also comes with caveats: it’s not retroactive, so all past blocks are unprovable, and some chains might not have BEEFY enabled at all.

If one wants to write an alternative implementation of smoldot in Golang, where should they start?

The light client spec gives a high-level picture.

There’s no such thing as an “alternative implementation of smoldot”. Smoldot is an alternative implementation of a Polkadot client.

It took me 4 years of effort and several hundred thousand lines of code to write smoldot, while being already very familiar with how Polkadot works. So I guess the first prerequisite would be a lot of motivation. The second prerequisite would be to become deeply familiar with the spec if you don’t want to waste a ton of time refactoring because you have encountered an obstacle. You probably need to learn a bit of Rust anyway, since a lot of the work consists of reading Substrate’s source code. If, like smoldot, you want to make it work in a browser, then you need to architect your code specifically for this objective.

But if what you want instead is to use a light client from Golang, then one could probably write Golang bindings to smoldot in 2 to 3 hours (assuming familiarity with writing bindings).


This might not be 100% possible right now.

I’m very very close to making the Rust smoldot light client no_std, meaning that it wouldn’t need the Rust standard library. It’s something that I had in mind to announce in this topic as well, once done.

I’m not sure whether you can link Rust code that uses Rust’s stdlib with Golang, but if the Rust code is no_std then you definitely can.

Making the light client no_std also means that it could be embedded in very light devices, such as SoCs. At least in theory, because in practice you still need around 30 MiB to connect to a chain, mostly due to the chain’s Wasm runtime.


@tomaka Thank you for a thoroughly clear explanation! Sure, all indexers like ours parse metadata, and it’s in the realm of “simple” for us; I understand smoldot wouldn’t do that.

Most of the tedious cases concern getting the latest bootnodes out of newly launched parachains, and RFC #8 makes total sense! Could you not take the lead on implementing RFC #8 in Substrate? Who would be better placed — can you nominate someone?

At the end of the day, in the absence of the above implementation, there are like O(30-40 x 2) pieces of spec/bootnode information sitting around various github repos. We could stick all this data in another repo while waiting for the above, but executing on #1825 will obviously be better. This is important for privacy, and if CoreXYZ brings on many more paraIds, then it’s more than 80 pieces of information, and having humans manage root-level specs/bootnodes on github is silly.

I’ve explained in the issue everything I know. There’s literally no other piece of information that I possess and that could be helpful. Anyone who implements this needs to know how (in the source code) the parachain node communicates with the relay chain, which, given how much stress working on Substrate’s source code induces in me, is not something I’m interested in learning.

This has just been finished, and the smoldot light client is now no_std-compatible. :tada:

As is common in the Rust ecosystem, the smoldot-light library has a std feature.
In order to use the light client, you need to provide an implementation of the PlatformRef trait, which is what gives access to time, randomness, sockets, etc. to the client.
Enabling the std feature enables the DefaultPlatform implementation of this trait (based on the std and smol libraries), while not enabling the std feature means that you have to implement this trait yourself.
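For illustration, a Cargo.toml fragment along these lines selects between the two modes (the version number is omitted, and whether std is among the default features is an assumption of this sketch):

```toml
[dependencies]
# With the std feature enabled, the DefaultPlatform implementation of
# PlatformRef (based on std and smol) is available out of the box.
smoldot-light = { version = "..." }

# For no_std-style usage, disable default features and provide your own
# PlatformRef implementation:
# smoldot-light = { version = "...", default-features = false }
```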

This guarantees that smoldot can be embedded virtually everywhere:

  • In any embedded device (however: while CPU consumption should be relatively low on average, you still need around 30 MiB of memory to be connected to a chain; this memory consumption could be improved, but we can never formally guarantee a maximum because the runtime of the chain could in theory allocate up to 128 MiB)

  • In any other program. The fact that every I/O interaction of smoldot goes through PlatformRef means that you can limit its CPU consumption or bandwidth usage if desired.

  • In any other programming language (that supports a C-style FFI, so basically every single language).


Congrats on the added support for no_std! I’ve followed the issue since day one, as we are prototyping embedded solutions. One of our projects is related to running a light node in a small, as-cheap-as-it-can-get point-of-sale device that also acts as a LoRaWAN gateway, enabling even smaller devices (e.g. a keyholder) in remote towns with bad internet infrastructure to “broadcast payments” without internet.

Looking forward to the memory consumption improvements. Maybe a parachain (or whatever coreX abstraction) builder, having control of their runtime, could optimize for a lower memory footprint?


Being connected to the relay chain currently requires 30 MiB, so even if the parachain has some magic trick (which smoldot might or might not be compatible with) to use very little memory, you still need the 30 MiB for the relay chain.

These 30 MiB mostly come from wasmi having compiled the runtime. Add lazy Wasm compilation · Issue #732 · paritytech/wasmi · GitHub would improve this, but it looks like a substantial change.

As I’ve mentioned, the runtime might allocate memory up to 128 MiB when you do a runtime call. In practice it’s way less than that, but even if your device has enough memory to do specific calls, maybe a runtime upgrade accidentally changes the profile of allocations and breaks it. Like most programs, the runtime doesn’t allocate precisely the number of bytes that it needs, but allocates big buffers then fills them partially.

A way out of this problem might be to simply not do runtime calls at all (and thus not even download the runtime). During warp syncing, we use the runtime in order to obtain some information that can also be found in storage, but at an unspecified key. This could maybe be fixed by specifying which key holds this information. The runtime is also used in order to validate transactions and give user feedback, but we could simply not validate transactions and assume that whatever has generated the transaction is up-to-date and bug-free.

Even assuming that the runtime doesn’t use memory, it gets tricky to reduce the memory usage. One fundamental problem is that some pieces of data (storage values and transactions) aren’t merklized. What this means is that you can’t verify them in a streaming way: you can only know if the value/tx matches a certain hash once you have finished downloading it, which implies buffering the entire value/tx.

Overall, whatever the way forward to reducing memory is, it’s complicated.


Hey tomaka,

Would it make sense to expose a hook that gets called once a chain’s warp sync has finished? This would allow hosts to pick a reasonable start time for chainHead_..._follow subscriptions.
If we subscribe before the warp sync has finished, we receive a stop event immediately after, rendering the initial subscription basically useless (this is expected behaviour, as stated in chainHead_unstable_follow).

I’ve answered in client: Is there a way to know when GrandpaWarmSync finished? · Issue #1305 · smol-dot/smoldot · GitHub because the question is similar.

When it comes to chainHead_follow, I do realize that it’s annoying. However, it’s important to understand that smoldot has no robust way to know whether it’s at the head of the chain.

As explained in the issue, doing a warp sync doesn’t guarantee that you’re at the actual head of the chain.

There are basically three ways to know whether you’re at the head of the chain:

  • Assume that the full nodes you’re connected to are at the head of the chain, and thus that their best block is the current head of the chain. That’s bad because the other peers could simply be lying, or might be stuck, or might still be syncing, or there might be a netsplit, etc.
  • Using the Aura or Babe slot number found in the block header, one can calculate the timestamp when the block was authored. Unfortunately this is not a robust solution because it assumes that slots always have the same duration (6 seconds). This is true in practice, but maybe in the future we could change this duration, in which case this method will stop working.
  • By reading the block’s storage, there’s an item containing the timestamp when the block was authored. Be aware that in the case of a light client, reading the storage of a non-finalized block could give invalid data if a validator is faulty or malicious.

Only the third way is robust, but it requires parsing the metadata and all, and thus belongs on the client side of the JSON-RPC interface.

To me, the way to do it is to just try to display whatever block smoldot gives you, where “displaying” involves fetching the block’s timestamp and showing some kind of warning if it is too old.
I realize that it’s maybe not a great solution, but I don’t have a better one.

I just wrote a small proof-of-concept of C bindings for the smoldot light client:

The branch: Comparing smol-dot:main...tomaka:c-bindings · smol-dot/smoldot · GitHub
The header file:
Example usage:
As you can see, it’s relatively straightforward to use (two thirds of the example consists of reading the chain spec file).

It’s however not totally clear to me whether there is a real interest in such bindings.


For around a year now, smoldot has supported the libp2p WebRTC protocol, which we want to use to replace WebSocket, as it doesn’t require node providers to buy TLS certificates, set up reverse proxies, and all that complicated centralized stuff.

However, I’ve since then done several refactorings of the low-level networking code, and since I don’t really have the ability to easily test whether WebRTC support works, the code has basically stopped working over time due to accidental breakages during modifications.

A week ago, Kubo (the Go implementation of IPFS) released a version with experimental WebRTC support. I tried it out and spent the last two days cleaning up the WebRTC code of smoldot, which now works again and can connect to Kubo.
Special mention to these two erroneous bytes which subtly made their way into cryptographic code; it took me around 8 hours of debugging to find the problem.

If we now implement Kademlia properly in smoldot, and assuming that Kubo enables WebRTC by default in the future (I have no idea of the time frame for this, however), and with a few other protocol implementations (Bitswap), then it should be possible to access IPFS from the browser.


After Use only one networking service for all chains by tomaka · Pull Request #1398 · smol-dot/smoldot · GitHub, smoldot will now use a single networking service for all chains.

What this means is that multiple block announce substreams of multiple different chains can now be opened using the same WebSocket or WebRTC connection.
So for example, if a full node were to be connected to Polkadot, Polkadot AssetHub, and Polkadot Collectives at the same time, we could use this full node to sync the three chains using only a single connection.

This is something that I’ve had in mind to do since July 2021, and it has necessitated a lot of refactoring of the networking code. This was done rather slowly, as this is a low-priority issue.

In practice, this is not super useful, because the official client only supports one chain at a time (connecting to a parachain is basically two clients in one), but fundamentally it is the right thing to do. And in the future, if we make light clients connect to each other, this will definitely be useful.

One obvious pitfall is that smoldot doesn’t try to optimize for this feature (at least yet). When smoldot chooses whom to connect to, it doesn’t take existing connections into account. For example, when you’re syncing Polkadot from Alice, and Alice is also able to serve Kusama, smoldot might prefer to sync Kusama from Bob rather than Alice, even though using Alice would save resources.
It is however unclear whether it is a good idea to optimize this, as it might facilitate eclipse attacks.

Thanks to @RobinF (the maintainer of wasmi), smoldot will soon compile the runtime of the chain lazily. Only functions that are executed are actually validated and compiled, which is very beneficial for light clients as they typically only execute small getters rather than expensive functions such as executing a block.

This reduces by around 200ms the time it takes for smoldot to reach the head of the chain at initialization.
Since, during that time, the end user is generally waiting in front of a blank UI, every millisecond counts!

Link to PR.


One of the milestones of my Q4 treasury proposal is adding support for warp syncing to the full node.

When I added this milestone, what I had in mind is to implement it the same way as Substrate does: the full node would download a warp sync proof from the network, which proves that the latest finalized block is a certain block, then download the state of the chain at this particular block, and then actually start syncing.

This approach, however, has a drawback. Other nodes prune the state of blocks after a while, and thus if downloading the state of a block is not fast enough, it might be that the state of that block gets pruned and is no longer downloadable.

While it doesn’t seem to be a problem right now, if we imagine a state of 10 GiB (which isn’t that much), and that the state of a block is pruned after 1024 blocks, you would need an average download speed of 1.67 MiB/sec in order to download everything in time. It is therefore not an imaginary problem, but something that can really happen.

Consequently, I’ve decided to take a different approach in the smoldot implementation.

Just like Substrate, smoldot will download a warp sync proof and download the state of the block it has warp synced to, but contrary to Substrate, it will also immediately start syncing more recent blocks in parallel with the download.

If, when verifying a block, the runtime accesses a storage item that hasn’t been downloaded yet, smoldot will prioritize the download of this specific item, so that the verification can be performed as soon as possible.
(I’m simplifying a bit, as in practice we’d ask for a call proof and not just a single item)

Verifying a block yields the diff in state between that block and its parent. Any item that is not in the diff is therefore identical between the block and its parent.
This means that once a block has been verified, we can continue downloading the state but this time of the child rather than the parent.

Using this method, the full node is therefore able to reach the head of the chain much quicker, and in a reliable way.
It is not 100% robust, because you still need enough bandwidth to keep up with the chain, but this is currently only around 3 to 5 kiB/second.

The drawback is that this is more complicated to implement, but at least I know that I won’t need to revisit it later.


Two of the three milestones of my 2023 Q4 treasury proposal (which covers until end of January) have been implemented, and I’m going to be working on the third one (the new JSON-RPC API for networking) soon.

Given that it takes a long time between the opening of a proposal and its acceptance, I usually open a new treasury proposal during the last month of the previous one.

This time, I will not be doing that. Over time, I’ve realized that setting milestones with precise timelines is very constraining when you’re a lone developer. It regularly happens that the need for some preliminary changes or some high priority fix arises, and I have to implement things in a not-so-clean state or delay these fixes in order to meet the deadline.

Therefore, as hinted in the previous treasury proposal, I will be switching to a model where I do the changes first then ask for retroactive payment, similar to what a few other projects are already doing.
Additionally, once the fellowship (of which I am a member) salaries start getting paid, I will considerably reduce the amount asked in my treasury proposals, or even stop them completely.