In this post, we’ll review the latest updates to the litep2p network backend and compare its performance with libp2p. Feel free to navigate to any section of interest.
Section 1. Updates
We are pleased to announce the release of litep2p version 0.7, which brings significant new features, improvements, and fixes to the litep2p library. Highlights include enhanced error handling, configurable connection limits, and a new API for managing public addresses. For a comprehensive breakdown, please see the full litep2p release notes. This update is also integrated into Substrate via PR #5609.
Public Addresses API
A new PublicAddresses API has been introduced, enabling developers to manage the node’s public addresses. This API allows for adding, removing, and retrieving public addresses shared with peers through the Identify protocol. It aims to address or reduce long-standing connectivity issues in litep2p.
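As a rough illustration, here is a minimal sketch of how the API might be used from a running litep2p instance. The handle and method names (public_addresses, add_address, get_addresses) are assumptions based on the description above rather than confirmed signatures; consult the release notes for the exact API.

// Sketch only: method names are assumed, see the note above.
let public_addresses = litep2p.public_addresses();

// Advertise an additional public address to peers via the Identify protocol.
public_addresses.add_address("/ip4/198.51.100.7/tcp/30333".parse()?)?;

// Inspect the addresses currently shared with peers.
for address in public_addresses.get_addresses() {
    println!("shared with peers: {address}");
}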
Enhanced Error Handling
The DialFailure event now includes a DialError enum for more granular error reporting when a dial attempt fails. Additionally, a ListDialFailures event has been added, which lists all dialed addresses and their corresponding errors when multiple dials fail.
We’ve also focused on providing better error reporting for immediate dial failures and rejection reasons for request-response protocols. This marks a shift away from the general litep2p::error::Error enum, improving overall error management. For more details, see PR #206 and PR #227.
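As a hedged sketch of how this surfaces to applications, the loop below matches on dial-failure events from the main event stream. The event and field names (Litep2pEvent::DialFailure, ListDialFailures, address, error) are assumptions inferred from the description above; refer to PR #206 and PR #227 for the actual definitions.

// Sketch only: event and field names are assumed, see the note above.
while let Some(event) = litep2p.next_event().await {
    match event {
        // A single dial attempt failed; the error is now a dedicated DialError.
        Litep2pEvent::DialFailure { address, error } => {
            eprintln!("dial to {address} failed: {error:?}");
        }
        // Multiple addresses were dialed and all failed; report each error.
        Litep2pEvent::ListDialFailures { errors } => {
            for (address, error) in errors {
                eprintln!("dial to {address} failed: {error:?}");
            }
        }
        _ => {}
    }
}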
Configurable Connection Limits
The Connection Limits feature now lets developers control the number of inbound and outbound connections, helping optimize resource management and performance.
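A minimal sketch of how this could look, mirroring the configuration style of the keep-alive example below; the ConnectionLimitsConfig type and the max_incoming_connections, max_outgoing_connections, and with_connection_limits names are assumptions based on the feature description, not confirmed signatures.

// Sketch only: type and method names are assumed, see the note above.
let limits = ConnectionLimitsConfig::default()
    // Cap inbound and outbound connections to match the node's resources.
    .max_incoming_connections(Some(50))
    .max_outgoing_connections(Some(50));

let litep2p_config = Config::default()
    .with_connection_limits(limits);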
Feature Flags for Optional Transports
With Feature Flags, developers can now selectively enable or disable transport protocols. By default, only TCP is enabled, with the following optional transports available (see the Cargo.toml sketch after this list):
- quic - Enables QUIC transport
- websocket - Enables WebSocket transport
- webrtc - Enables WebRTC transport
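As a sketch, the optional transports are enabled from the dependency declaration; the snippet below assumes the crate is pulled straight from crates.io and that the feature names match the list above.

# Sketch only: enable optional transports via Cargo features (TCP stays on by default).
[dependencies]
litep2p = { version = "0.7", features = ["quic", "websocket", "webrtc"] }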
Configurable Keep-Alive Timeout
Developers can now configure the keep-alive timeout for connections, allowing more control over connection lifecycles. Example usage:
use std::time::Duration;

let litep2p_config = Config::default()
    .with_keep_alive_timeout(Duration::from_secs(30));
Section 2. Performance Comparison
To gauge performance, we ran a side-by-side test with two Polkadot nodes, one using the litep2p backend and the other using libp2p, on the Kusama network. Both nodes were configured with the following CLI parameters: --chain kusama --pruning=1000 --in-peers 50 --out-peers 50 --sync=warp --detailed-log-output.
While network fluctuations and peer dynamics introduce some variability, this experiment offers an approximation of how the two network backends perform in real-world scenarios.
CPU Usage
One of litep2p’s key advantages is its lower CPU consumption, using 0.203 CPU time compared to libp2p’s 0.568, making it 2.78 times more resource-efficient.
Network Throughput
Litep2p handled 761 GiB of inbound traffic, while libp2p processed 828 GiB, giving libp2p an 8% edge in this category. However, litep2p outperformed libp2p in outbound traffic, handling 76.9 GiB versus libp2p’s 71.5 GiB, providing litep2p a 7% advantage for outbound requests.
Sync Peers
The chart below shows the number of peers each node connected with for sync purposes. Litep2p maintained more stable sync connections, whereas libp2p exhibited periodic disconnection spikes that took longer to recover from. Litep2p’s greater stability may be due to its increased network discovery via Kademlia queries.
Request Responses
Both backends achieved a similar number of successful request-responses, with libp2p holding a slight edge in this area.
Litep2p encountered more outbound request errors, primarily due to substreams being closed before the request could be executed.
Preliminary CPU-constrained parachain testing showed worse performance for litep2p; for more details, see Issue #5035.
With recent improvements in error handling, we expect to address these issues in future releases.
Other Performance Metrics
- Warp Sync Time
Litep2p completed warp sync in 526 seconds, compared to libp2p’s 803 seconds, a significant performance gain for litep2p. The warp sync time was measured using the sub-triage-logs tool; more details are available in PR #5609.
- Kademlia Query Performance
The Kademlia component facilitates network discoverability. In a benchmark of peer discovery, litep2p located 500 peers (about 25% of the Kusama network) in 12-14 seconds, while libp2p completed the same task in 3-6 seconds.
This experiment still produces quite a lot of noise, and we’ll take a closer look once we have a better benchmarking system. In the meantime, the subp2p-explorer tool was used for the measurement. The bench-cli tool can also spawn a local litep2p network to reproduce this experiment, providing additional opportunities for optimization.
A special thanks to Dmitry for his exceptional work on litep2p, to @alexggh for testing litep2p from the parachain perspective, and to @AndreiEres for his efforts in improving benchmarking systems to help drive further network optimizations.