Polkadot Release Analysis v0.9.33

:warning: The following report is not an exhaustive list of changes included in the release, but a set of changes that we felt deserved to be highlighted due to their impact on the Polkadot ecosystem builders.

Please continue to read the full release notes for v0.9.33; this report complements the changelogs, it does not replace them.

Highlights are categorized into High Impact, Medium Impact, and Low Impact for ease of navigation.

Summary

This release mostly adds fixes and enhancements to several FRAME pallets and improves the developer experience; the medium and low impact sections below highlight the most noteworthy PRs.

The Polkadot Release Analysis team has found no PRs in this release that are considered high impact for builders on Polkadot.


Runtimes built on release v0.9.33

  • Rococo v9330
  • Westend v9330
  • Kusama v9330
  • Polkadot v9330


:warning: Medium Impact

Remove sp_tasks::spawn API and related code + host functions

PR https://github.com/paritytech/substrate/pull/12639

This PR is part of Issue #11227.

Why is this important?

At the moment, the sp_tasks::spawn runtime API has many issues and is not safe to use. Neither Polkadot, Cumulus, nor any Polkadot host implementation uses it.

This PR completely removes the sp_tasks library and all related code. The PR description has a more detailed explanation.

How does this impact Polkadot builders?

You only need to update Cargo.lock, which happens automatically once you compile the code.

Make Multisig Pallet Bounded

PR https://github.com/paritytech/substrate/pull/12457

Polkadot Companion PR: PR #6172
Cumulus Companion PR: PR #1793

This PR is part of Issue #8629.

Why is this important?

As part of this PR, the following changes have been made in the Multisig pallet:

  • Vec is turned into a BoundedVec
  • without_storage_info has been removed from the Multisig pallet
  • The type of the MaxSignatories config parameter has been changed from u16 to u32

How does this impact Polkadot builders?

If you are using the Multisig pallet in your runtime, it is recommended to update the type of MaxSignatories from u16 to u32, although nothing will break if you don't.

An example of the pallet_multisig pallet runtime configuration:

impl pallet_multisig::Config for Runtime {
	type RuntimeEvent = RuntimeEvent;
	type RuntimeCall = RuntimeCall;
	type Currency = Balances;
	type DepositBase = DepositBase;
	type DepositFactor = DepositFactor;
	type MaxSignatories = ConstU32<100>; // Update this from u16 to u32
	type WeightInfo = pallet_multisig::weights::SubstrateWeight<Runtime>;
}

Add CreateOrigin to Assets Pallet

PR https://github.com/paritytech/substrate/pull/12586

Cumulus Companion PR: PR #1808

This PR is part of Issue #6126.

Why is this important?

A new origin, CreateOrigin, has been introduced in the Assets pallet's Config trait. Only this origin is allowed to create a new class of fungible assets.

fn create has also been altered accordingly to use this origin.

How does this impact Polkadot builders?

If you are using the Assets pallet in your runtime, you have to add this new config type, otherwise runtime compilation will fail with a missing type error.

How to use

impl pallet_assets::Config for Runtime {
	// Other Config types
	type CreateOrigin = AsEnsureOriginWithArg<EnsureSigned<AccountId>>;
}

Fix fungible unbalanced trait

PR https://github.com/paritytech/substrate/pull/12569

As the PR description is very clear and concise, we will share it as it is:

The fungibles::Inspect::balance() include free and reserved, but fungibles::Unbalanced::set_balance didn't check reserved in fungibles::Unbalanced::decrease_balance(), fungibles::Unbalanced::decrease_balance_at_most(), fungibles::Unbalanced::increase_balance_at_most(), fungibles::Unbalanced::increase_balance().

e.g. Alice's balance is 100 (free: 50, reserved: 50); after decrease_balance(20), the final balance is 130 (free: 80, reserved: 50), but the expected balance should be 80 (free: 30, reserved: 50)

This is solved by subtracting the reserved amount from the new amount set in fn set_balance (free = new_balance - reserved).
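
A minimal sketch of the corrected accounting, using plain integers for illustration rather than the actual fungibles::Unbalanced implementation: the amount passed to set_balance is treated as the total balance, and the free part is derived by subtracting the reserved portion.

fn set_balance(new_total: u64, reserved: u64) -> (u64, u64) {
	// free = new_balance - reserved (saturating at zero)
	let free = new_total.saturating_sub(reserved);
	(free, reserved)
}

fn main() {
	// Alice's example from the PR: total 100 (free 50, reserved 50).
	// decrease_balance(20) brings the new total to 80, so the free part
	// becomes 80 - 50 = 30 and the full balance is 30 free + 50 reserved = 80.
	assert_eq!(set_balance(80, 50), (30, 50));
}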


Fix a few migration issues with 2D weights

PR https://github.com/paritytech/cumulus/pull/1755

This PR introduces 2D Weights as the default weight struct used in both the dmp-queue and xcmp-queue pallets. The fields using the new weights are:

  • xcmp_max_individual_weight of the QueueConfigData struct in the xcmp-queue module.
  • max_individual of the ConfigData struct in the dmp-queue module.

These two values above define the maximum amount of weight any individual message may consume. Messages above this weight go into the overweight queue and may only be serviced explicitly.

It also introduces a migration that translates the ReservedXcmpWeightOverride and ReservedDmpWeightOverride values into the new multidimensional Weight struct. These two define the weight reserved at the beginning of the block for processing XCMP and DMP messages, respectively, overriding the amount set in the Config trait.
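
Conceptually, the migration keeps each previously stored one-dimensional weight value as the ref_time component of the new struct. A rough sketch of the value translation (not the exact migration code from the PR):

use frame_support::weights::Weight;

// Rough sketch of how a pre-2D reserved weight override (a plain u64) maps
// into the new multidimensional struct; the real migration in the PR also
// handles the QueueConfigData and ConfigData structs.
fn translate_reserved_weight(old: u64) -> Weight {
	// Only the computational (ref_time) part can be recovered from the old
	// value; the proof_size component starts at zero.
	Weight::from_ref_time(old)
}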

Make ValidateUnsigned available on all chains for paras module.

PR https://github.com/paritytech/polkadot/pull/6214

Why is this important?

The paras module tracks the current status of all parachains and parathreads. Every parachain/parathread has to be registered in this module in order to be considered live.

This pallet has a dispatchable function include_pvf_check_statement, which includes a statement for a PVF pre-checking vote. More info on PVF pre-checking can be found here.

Before this change, the ValidateUnsigned trait was not enabled on all chains to validate these unsigned transactions.

How does this impact Polkadot builders?

As part of this PR, the ValidateUnsigned trait has been enabled for the paras module on all chains (Polkadot, Kusama, Rococo, and Westend) to make PVF pre-checking work, as sketched below.
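
For illustration, enabling the ValidateUnsigned part for a pallet inside construct_runtime! looks roughly like this (a sketch of the mechanism; the exact relay chain runtime declaration may differ):

construct_runtime!(
	pub enum Runtime where
		Block = Block,
		NodeBlock = Block,
		UncheckedExtrinsic = UncheckedExtrinsic
	{
		// -- snip --
		// Adding the ValidateUnsigned part lets the runtime validate the unsigned
		// include_pvf_check_statement extrinsics submitted by validators.
		Paras: parachains_paras::{Pallet, Call, Storage, Event, Config, ValidateUnsigned},
		// -- snip --
	}
);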

If you are not familiar with what the Parachain Validation Function (PVF) is, there was a nice post about it recently. You can find it here.

fix: construct_runtime multiple features

PR https://github.com/paritytech/substrate/pull/12594

PR #11818 introduced support for the #[cfg(feature = <...>)] attribute in the construct_runtime! macro. This enabled feature-gating pallets in the runtime.

However, compilation issues arose when trying to add more than one feature to the construct_runtime! macro: users were met with multiple definition errors when compiling. This PR fixes this and allows users to use more than one feature in this macro without any problems.

How to use

# definition of features in Cargo file
[features]

# -- snip --
frame-feature-testing = []
frame-feature-testing-2 = []

// definition of feature-gated pallets in runtime file
#[cfg(feature = "frame-feature-testing")]
impl pallet1::Config for Runtime {
	type RuntimeOrigin = RuntimeOrigin;
}


#[cfg(feature = "frame-feature-testing-2")]
impl pallet2::Config for Runtime {
	type RuntimeOrigin = RuntimeOrigin;
}

// -- snip --
frame_support::construct_runtime!(
	pub enum Runtime where
		Block = Block,
		NodeBlock = Block,
		UncheckedExtrinsic = UncheckedExtrinsic
	{
		// -- snip --
		#[cfg(feature = "frame-feature-testing")]
		Example1: pallet1,
		#[cfg(feature = "frame-feature-testing-2")]
		Example2: pallet2,
		// -- snip --
	}
);

To enable the example features defined above, compile with the --features flag:

--features=frame-feature-testing,frame-feature-testing-2

and both pallets will be present in the resulting runtime!

:information_source: Low Impact

Add Dev Mode (#[pallet(dev_mode)])

PR https://github.com/paritytech/substrate/pull/12536

Why is this important?

This PR allows developers to implement pallets in Dev Mode without worrying about a number of constraints (e.g. weights), provides defaults for a number of things, and disables accidental production compilation of the pallet.

How does this impact Polkadot builders?

If a pallet is marked with #[pallet(dev_mode)], any call without an explicit weight gets a default weight of 0, and all storage items are automatically marked with #[pallet::unbounded], meaning you do not need to implement MaxEncodedLen.

How to use

Developers can enable dev_mode by marking the pallet with:

#[frame_support::pallet(dev_mode)]
pub mod pallet {

// ---- Your pallet codebase ----

}
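
For example, inside a dev-mode pallet a dispatchable can omit its weight annotation entirely; a minimal sketch (the call name and body are made up for illustration):

#[pallet::call]
impl<T: Config> Pallet<T> {
	// No #[pallet::weight(...)] attribute is needed: dev mode defaults it to 0.
	pub fn do_something(origin: OriginFor<T>) -> DispatchResult {
		let _who = ensure_signed(origin)?;
		Ok(())
	}
}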

For more details, please see issue #8965

pallet-sudo: add CheckOnlySudoAccount signed extension

PR https://github.com/paritytech/substrate/pull/12496

This PR adds a signed extension, CheckOnlySudoAccount, which prevents any transaction from entering the pool unless it is signed by the sudo account.

How to use

You can add this in the runtime:

/// The SignedExtension to the basic transaction logic.
pub type SignedExtra = (
    // --- Other Checks ----
    pallet_sudo::CheckOnlySudoAccount<Runtime>,
);

Treat near-zero intercept values as zero when calculating weights

PR https://github.com/paritytech/substrate/pull/12573

How does this impact Polkadot builders?

When running benchmarks to calculate weights, the linear regression can produce a slightly negative, near-zero intercept value. As part of this PR, such values are now treated as zero.

An example of this scenario, taken from the PR's tests:

#[test]
fn intercept_of_a_little_under_zero_is_rounded_up_to_zero() {
	// Analytically this should result in an intercept of 0, but
	// due to numerical imprecision this will generate an intercept
	// equal to roughly -0.0000000000000004440892098500626
	let data = vec![
		benchmark_result(vec![(BenchmarkParameter::n, 1)], 2, 0, 0, 0),
		benchmark_result(vec![(BenchmarkParameter::n, 2)], 4, 0, 0, 0),
		benchmark_result(vec![(BenchmarkParameter::n, 3)], 6, 0, 0, 0),
	];

	let extrinsic_time =
		Analysis::min_squares_iqr(&data, BenchmarkSelector::ExtrinsicTime).unwrap();
	assert_eq!(extrinsic_time.base, 0);
	assert_eq!(extrinsic_time.slopes, vec![2000]);
}

Related discussion can be found in #12482.

Defensively calculates the minimum and maximum of two values

PR https://github.com/paritytech/substrate/pull/12554

Why is this important?

Rust's default min and max functions compare two numbers and return the minimum or maximum, regardless of whether it is self or the other value.

For example:

assert_eq!(10, 10_u32.max(9_u32)); // This and..
assert_eq!(10, 9_u32.max(10_u32)); // ..this are the same

// Similarly

assert_eq!(9, 9_u32.min(10_u32)); // This and..
assert_eq!(9, 10_u32.min(9_u32)); // ..this are the same

In some cases, we want these operations to behave defensively. As part of this PR, two traits, DefensiveMin and DefensiveMax (with the functions defensive_min, defensive_strict_min, defensive_max, and defensive_strict_max), are introduced to execute min/max operations defensively.

How does this impact Polkadot builders?

Using DefensiveMin and DefensiveMax, we can additionally assert that self is the smaller/larger (strictly, for the strict variants) of the two values.

How to use

assert_eq!(10, 10_u32.defensive_max(9_u32)); // This will work
assert_eq!(10, 9_u32.defensive_max(10_u32)); // This will trigger a defensive failure (a panic in debug/test builds)


assert_eq!(9, 9_u32.defensive_min(10_u32)); // This will work
assert_eq!(9, 10_u32.defensive_min(9_u32)); // This will trigger a defensive failure (a panic in debug/test builds)

For more details, please see Issue #12550.

Defensively truncate an input if the TryFrom conversion fails.

PR https://github.com/paritytech/substrate/pull/12515

Why is this important?

As part of this PR, a trait DefensiveTruncateFrom with the function defensive_truncate_from has been introduced. It first tries a non-truncating conversion using TryFrom and falls back to truncating the input (raising a defensive error) if that fails.

How to use

let unbound = vec![1u32, 2, 3, 4];
let bound = BoundedVec::<u32, ConstU32<4>>::defensive_truncate_from(unbound.clone());
assert_eq!(bound, unbound);

Check if approval voting db is empty on startup

PR https://github.com/paritytech/polkadot/pull/6219

Why is this important?

Currently, it is very challenging to debug parachain consensus-related finality issues at startup.

A parachains DB sanity check has been added at node startup. It checks whether the parachains DB is empty at startup and, if so, emits a proper info! log.

For more details, please see Issue #6177.

use associated iterator types for InspectEnumerable

PR https://github.com/paritytech/substrate/pull/12389

Why is this important?

The InspectEnumerable interface enumerates the items of a single NFT collection or of many collections. Until now, it forced implementations to return Box<dyn Iterator> from its methods.
This PR changes the return types of the InspectEnumerable trait's methods to associated iterator types. Implementers who do not want to name a concrete type can still fall back to boxed iterators, as sketched below.
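
A minimal, simplified sketch of the pattern (not the actual nonfungibles trait, which is generic over accounts, collections and items): the iterator is now an associated type, and an implementation that prefers not to name a concrete type can still use a boxed iterator.

trait InspectEnumerable {
	type ItemsIterator: Iterator<Item = u32>;

	/// Returns an iterator over all items.
	fn items(&self) -> Self::ItemsIterator;
}

struct Collection {
	items: Vec<u32>,
}

impl InspectEnumerable for Collection {
	// Falling back to a boxed iterator instead of naming a concrete type.
	type ItemsIterator = Box<dyn Iterator<Item = u32>>;

	fn items(&self) -> Self::ItemsIterator {
		Box::new(self.items.clone().into_iter())
	}
}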

For more details, please see Issue #12231.

New Weights for All Pallets

PR https://github.com/paritytech/substrate/pull/12325

Based on the fix added in the previous release (Improve base weights consistency and make sure they're never zero), the weights of all FRAME pallets were recalculated.

It is recommended that, once this fix reaches your chain, you recalculate (re-benchmark) all of your weights as well.

Add map and try_map methods to BoundedBTreeMap

PR https://github.com/paritytech/substrate/pull/12581

This PR implements map and try_map for the BoundedBTreeMap struct. These two new functions allow applying a function f to each value of the map. The difference between the two is that try_map short-circuits when an error is encountered.
Neither operation affects the length of the map.

let b1: BoundedBTreeMap<u8, u8, ConstU32<7>> = [1, 2, 3, 4].into_iter().map(|k| (k, k)).try_collect().unwrap();
let b2: BoundedBTreeMap<u8, u16, ConstU32<7>> = [1, 2, 3, 4].into_iter().map(|k| (k, (k as u16) * 100)).try_collect().unwrap();

assert_eq!(Ok(b2), b1.try_map(|(_, v)| (v as u16).checked_mul(100_u16).ok_or("overflow")));
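
For completeness, a small sketch of the infallible map variant, assuming the same imports and helper types as the snippet above:

let m: BoundedBTreeMap<u8, u8, ConstU32<7>> =
	[1, 2, 3].into_iter().map(|k| (k, k)).try_collect().unwrap();

// map applies the function to every value; keys and the bound stay the same.
let doubled: BoundedBTreeMap<u8, u16, ConstU32<7>> = m.map(|(_, v)| (v as u16) * 2);

let expected: BoundedBTreeMap<u8, u16, ConstU32<7>> =
	[1u8, 2, 3].into_iter().map(|k| (k, (k as u16) * 2)).try_collect().unwrap();
assert_eq!(doubled, expected);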

Update Polkadot inflation to take into account auctions

PR: https://github.com/paritytech/polkadot/pull/5872

Polkadot moves away from a more static definition of inflation by taking into account the number of auctioned slots that host active parachains:

let auctioned_slots = Paras::parachains()
	.into_iter()
	// all active para-ids that do not belong to a system or common good chain is the number
	// of parachains that we should take into account for inflation.
	.filter(|i| *i >= LOWEST_PUBLIC_ID)
	.count() as u64;
// --snip

// 30% reserved for up to 60 slots.
let auction_proportion = Perquintill::from_rational(auctioned_slots.min(60), 200u64);

This affects the ideal_stake in the following way:

// Therefore the ideal amount at stake (as a percentage of total issuance) is 75% less the
// amount that we expect to be taken up with auctions.
let ideal_stake = Perquintill::from_percent(75).saturating_sub(auction_proportion);
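
For example, with 40 auctioned public (non-system) parachain slots, auction_proportion = min(40, 60) / 200 = 20%, so the ideal_stake becomes 75% - 20% = 55% of total issuance.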

Increase max rewardable nominators

PR: https://github.com/paritytech/polkadot/pull/6230

Following the rest of the improvements that staking is receiving across the board, this PR doubles the number of nominators rewarded per validator:

- pub const MaxNominatorRewardedPerValidator: u32 = 256;
+ pub const MaxNominatorRewardedPerValidator: u32 = 512;

This change should help reduce the number of oversubscribed validators.

Your friendly neighborhood Polkadot Release Analysis team,
@hectorb @alejandro @bruno @ayush.mishra

Help us improve the release analysis by filling out this 6 question survey.


:warning: Important update about Polkadot and Kusama Runtime upgrades:

As mentioned in the release notes for the v0.9.34 release:

Polkadot runtimes v9320 and v9330 are replaced by v9340.
Kusama runtime v9330 is replaced by v9340.

In case you missed it, this is the report for the previous release (0.9.31 + 0.9.32):
