Polkadot Release Analysis v0.9.37

:warning: The following report is not an exhaustive list of changes included in the release, but a set of changes that we felt deserved to be highlighted due to their impact on the Polkadot ecosystem builders.

Please keep reading the release notes for v0.9.37. This report is complementary to the changelogs, not a replacement.

Highlights are categorized into High Impact, Medium Impact, and Low Impact for ease of navigation.

Help us improve the release analysis by completing this 6 question survey.

Summary

The first Polkadot release of 2023 has been rolled out with numerous new features and enhancements. One of the most significant additions is the new NFTs 2.0 pallet, which has been added to the FRAME code base. This pallet includes several new features that allow developers to extend their use cases around NFTs, a major improvement in overall functionality.

Another important fix in this release resolves the missing block number issue on forced canonicalization, a problem that was being experienced on the node side.

In addition to these major updates, the release includes a new StorageStreamIter iterator, which is very efficient from a memory perspective and improves overall performance. A set of new ensure-ops family methods has also been added, giving developers useful helpers for safe arithmetic operations. Finally, the chainHead RPC API has been implemented as part of the new RPC specification, extending its functionality.

As with any release, there have also been several additional enhancements made to the FRAME pallets.


Runtimes built on release v0.9.37

  • Rococo v9370
  • Westend v9370
  • Kusama v9370
  • Polkadot v9370


:exclamation: High Impact

NFTs 2.0

PR: https://github.com/paritytech/substrate/pull/12765

This new pallet can be considered the next generation of the well-known pallet_uniques. Some of its new features are:

  • Atomic swaps:

It is now possible to swap NFTs without requiring a third party. To perform a swap, the offered and desired NFTs must first be made eligible by calling the create_swap() extrinsic (a minimum swap price can be defined in this step). Once that is executed, the swap can be confirmed by calling the claim_swap() extrinsic, or cancelled via cancel_swap() if the swap will not happen.

  • New approvals structure

Permits assigning more than one approver for transfers and certain other actions. A BlockNumber can also be set to limit the duration of this “approval delegation”.

  • Auto-incremental collection ids

New collection ids are now incremented by one automatically. As simple as this looks, there is some complexity in the setup: for XCM v3, CollectionId can be configured as an XCM MultiLocation. For this reason, the Incrementable trait needs to be implemented to provide the right increment function.
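As a sketch of what implementing such a trait could look like, consider the following. The trait shape follows the PR, but `MyCollectionId` and its inner `u32` representation are illustrative assumptions, not the actual XCM MultiLocation-based type:

```rust
// Hypothetical custom CollectionId; in a real runtime this could wrap an
// XCM MultiLocation instead of a plain integer.
#[derive(Debug, Clone, PartialEq)]
struct MyCollectionId(u32);

trait Incrementable {
    /// The first id to use when no collection exists yet.
    fn initial_value() -> Self;
    /// The id that follows `self`.
    fn increment(&self) -> Self;
}

impl Incrementable for MyCollectionId {
    fn initial_value() -> Self {
        MyCollectionId(0)
    }
    fn increment(&self) -> Self {
        // Saturate rather than wrap so ids never collide on overflow.
        MyCollectionId(self.0.saturating_add(1))
    }
}

fn main() {
    let first = MyCollectionId::initial_value();
    assert_eq!(first.increment(), MyCollectionId(1));
    println!("ok");
}
```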

  • Usage of bitflags for roles and collection/item settings

    • Pallet-level feature flags can be disabled from the pallet configuration in the runtime.
    • For collections and items, some fields like is_frozen are now set into a new ItemConfigOf/CollectionConfigOf storage. This enables extension of collections/items functionality without the need to migrate them.
    • Separated metadata from attributes locking. Attributes can be changed even if the metadata is locked.
  • Smart attributes

The main idea of this feature is to let the owner of an NFT authorize other entities (e.g. another account, an application, a parachain, a custom origin, etc.) to update certain attributes of the NFT.

On a technical level, all attributes are now prefixed with a namespace to achieve this:

- Pallet(PalletId) - the attribute was set by, and can only be changed by, the pallet itself;
- CollectionOwner - the attribute can be set and modified by the collection's owner only;
- ItemOwner - a namespace for NFT owners. This is an ideal place to store custom values like dapp preferences, DIDs, an NFT's custom title, etc.;
- Account(AccountId) - attributes within this namespace can be modified only by an origin that was given permission to do so.
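The namespace rules above can be illustrated with a simplified permission check. This is not the pallet's actual logic; account ids are modeled as plain integers and the enum is reduced to its shape:

```rust
// Simplified model of the attribute namespaces described above.
#[derive(Debug, PartialEq)]
enum AttributeNamespace {
    Pallet,
    CollectionOwner,
    ItemOwner,
    Account(u64),
}

// Illustrative check: may `who` modify an attribute under this namespace?
fn may_modify(ns: &AttributeNamespace, who: u64, collection_owner: u64, item_owner: u64) -> bool {
    match ns {
        // Only the pallet itself may touch these; no signed account qualifies.
        AttributeNamespace::Pallet => false,
        AttributeNamespace::CollectionOwner => who == collection_owner,
        AttributeNamespace::ItemOwner => who == item_owner,
        // Only the explicitly approved account may modify.
        AttributeNamespace::Account(approved) => who == *approved,
    }
}

fn main() {
    assert!(may_modify(&AttributeNamespace::CollectionOwner, 1, 1, 2));
    assert!(!may_modify(&AttributeNamespace::ItemOwner, 1, 1, 2));
    assert!(may_modify(&AttributeNamespace::Account(9), 9, 1, 2));
    println!("ok");
}
```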

With the different minting options we can now have collections where anyone can mint an NFT (free of charge or for a price), or where only holders of NFTs from another collection can mint new items.

Please refer to set_attribute() for how to set the namespace accordingly:

		/// - `collection`: The identifier of the collection whose item's metadata to set.
		/// - `maybe_item`: The identifier of the item whose metadata to set.
		/// - `namespace`: Attribute's namespace.
		/// - `key`: The key of the attribute.
		/// - `value`: The value to which to set the attribute.
		///
		/// Emits `AttributeSet`.
		///
		/// Weight: `O(1)`
		#[pallet::call_index(19)]
		#[pallet::weight(T::WeightInfo::set_attribute())]
		pub fn set_attribute(
			origin: OriginFor<T>,
			collection: T::CollectionId,
			maybe_item: Option<T::ItemId>,
			namespace: AttributeNamespace<T::AccountId>,
			key: BoundedVec<u8, T::KeyLimit>,
			value: BoundedVec<u8, T::ValueLimit>,
		) -> DispatchResult {
			// ... (implementation elided)
		}

New methods have been added to control whether an external account can modify attributes within a namespace: approve_item_attributes and cancel_item_attributes_approval.

Related Issues: Uniques V2 Roadmap 2022 (Tracking Issue) · Issue #11783 · paritytech/substrate · GitHub


:warning: Medium Impact

Fix missing block number issue on forced canonicalization

PR: https://github.com/paritytech/substrate/pull/12949

Why is this change interesting for builders?

A canonical blockchain refers to the main chain, which contains the blocks that continue to accumulate over time. To understand more about how the block finalization process works and determines the canonical chain, please read here.

The function force_delayed_canonicalize performs forced canonicalization. Previously, block numbers could go missing during forced canonicalization; this PR fixes that issue.

Related issue: Block import error: Backend error: Can't canonicalize missing block number #2482889 when importing {BLOCK_HASH} (#2486985) · Issue #12613 · paritytech/substrate · GitHub


frame_support::storage: Add StorageStreamIter

PR: https://github.com/paritytech/substrate/pull/12721

Why is this change interesting for builders?

A new streaming iterator trait, StorageStreamIter, has been introduced for SCALE (Simple Concatenated Aggregate Little-Endian) container types. Stream iterators access an existing input or output stream using iterator operations. As explained in the PR description, this is very efficient from a memory perspective. At the moment it is implemented for StorageValue only.

How to use

pub type Something = StorageValue<_, Vec<u32>>;

let data: Vec<u32> = vec![1, 2, 3, 4, 5];
Something::put(&data);

assert_eq!(data, Something::stream_iter().collect::<Vec<_>>());

Add ensure-ops family methods

PR #1: Add ensure-ops family methods by lemunozm · Pull Request #12967 · paritytech/substrate · GitHub
PR #2: https://github.com/paritytech/substrate/pull/13042

This pull request adds some very useful methods for safe arithmetic operations. The interesting aspect is that these methods are exposed through traits that can be used on the pallet Config types. An exhaustive list of operations can be found in the PR description.

Code Snippet (over EnsureAdd)


	/// Performs addition that returns [`ArithmeticError`] instead of wrapping around on overflow.
	pub trait EnsureAdd: CheckedAdd + PartialOrd + Zero + Copy {
		/// Adds two numbers, returning `Err(ArithmeticError::Overflow)` or
		/// `Err(ArithmeticError::Underflow)` instead of `None` on failure.
		fn ensure_add(self, v: Self) -> Result<Self, ArithmeticError> {
			self.checked_add(&v).ok_or_else(|| error::equivalent(v))
		}
	}

How to use

#[pallet::config]
pub trait Config<I: 'static = ()>: frame_system::Config {
    type IncrementalValue: Parameter + AtLeast32BitUnsigned + EnsureAdd;

...

#[pallet::call_index(6)]
#[pallet::weight(T::WeightInfo::my_call())]
pub fn my_call(
    origin: OriginFor<T>,
    value: T::IncrementalValue,
...

let new_value = value.ensure_add(100u32.into())?;
...
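The checked-to-ensure pattern behind these helpers can also be sketched in plain Rust, without any FRAME dependencies. Here `ArithmeticError` is a stand-in for `sp_arithmetic::ArithmeticError`, and the trait is reduced to a single method:

```rust
// Stand-in for sp_arithmetic::ArithmeticError (simplified).
#[derive(Debug, PartialEq)]
enum ArithmeticError {
    Overflow,
}

trait EnsureAdd: Sized {
    /// Like `checked_add`, but returns an error instead of `None`, so the
    /// result can be propagated with `?` inside an extrinsic.
    fn ensure_add(self, v: Self) -> Result<Self, ArithmeticError>;
}

impl EnsureAdd for u32 {
    fn ensure_add(self, v: Self) -> Result<Self, ArithmeticError> {
        self.checked_add(v).ok_or(ArithmeticError::Overflow)
    }
}

fn main() {
    assert_eq!(5u32.ensure_add(7), Ok(12));
    assert_eq!(u32::MAX.ensure_add(1), Err(ArithmeticError::Overflow));
    println!("ok");
}
```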

Related Issue: Add ensure_ops methods (as checked_ops but returning ArithmeticError instead) · Issue #12754 · paritytech/substrate · GitHub


Improve inactive fund tracking

PR: https://github.com/paritytech/substrate/pull/13009

Why is this change interesting for builders?

Storage versions declared in the Balances pallet can be understood as follows:

  • 0: “not tracking InactiveIssuance”
  • 1: “tracking InactiveIssuance”

Using these storage version declarations, this PR includes migration logic that re-executes MigrateToTrackInactive (v0 → v1). Effectively, the migration operates as v1 → v0 → v1, but this time correct tracking of the inactive issuance is added. There are no new alterations to storage formats or state transition semantics.


Kusama origins as xcm multi_location

PR: Kusama origins as xcm multi_location by muharem · Pull Request #6273 · paritytech/polkadot · GitHub

Why is this change interesting for builders?

Drawing from the OpenGov concept of operating with a more granular set of origins, this PR ensures that the granularity is not lost for inter-chain communications.

To make this happen, the included changes map these origins into a Plurality body, adding the following type to the Kusama XCM config:

/// Type to convert a pallet `Origin` type value into a `MultiLocation` value which represents an interior location
/// of this chain for a destination chain.
pub type LocalPalletOriginToLocation = (
	// We allow an origin from the Collective pallet to be used in XCM as a corresponding Plurality of the
	// `Unit` body.
	CouncilToPlurality,
	// StakingAdmin origin to be used in XCM as a corresponding Plurality `MultiLocation` value.
	StakingAdminToPlurality,
	// Fellows origin to be used in XCM as a corresponding Plurality `MultiLocation` value.
	FellowsToPlurality,
);

Along with OriginToPluralityVoice struct and implementation into xcm-builder.

If you are curious about the exact implementation, please check the relevant details in the above linked PR.

It also adds more identifiers for pluralistic bodies to the BodyId enum defined within the available junctions. Each new BodyId maps to the origin/track needed on Polkadot to voice that plurality:

  • Defense → staking_admin
  • Administration → general_admin
  • Treasury → treasurer

Cumulus companion: Companion: Accept Kusama StakingAdmin origin by muharem · Pull Request #1865 · paritytech/cumulus · GitHub


rpc: Implement chainHead RPC API

PR: https://github.com/paritytech/substrate/pull/12544

Why is this change interesting for builders?

As part of the new RPC API spec (#12071), this PR implements the following chainHead methods:

  • chainHead_unstable_follow: Subscription for new, new best and finalized blocks.
  • chainHead_unstable_body: Get the extrinsics of a reported block by the follow method
  • chainHead_unstable_call: Runtime API call at a reported block
  • chainHead_unstable_header: Get the header of a reported block
  • chainHead_unstable_storage: Query storage for a reported block
  • chainHead_unstable_unpin: Unpin a reported block

:information_source: Low Impact

Rename SlashCancelOrigin to AdminOrigin

PR: Allow StakingAdmin to set `min_commission` by Ank4n · Pull Request #13018 · paritytech/substrate · GitHub

Why is this change interesting for builders?

Presently, all extrinsics of the Staking pallet are either signed by an origin using ensure_signed or by the root user using ensure_root, except one extrinsic, cancel_deferred_slash, which has a custom origin SlashCancelOrigin.

The root origin has the highest privilege and is considered the superuser of the runtime. Not all root-level Staking pallet extrinsics need to be called by root; some can be called by a custom origin, like cancel_deferred_slash.

As a part of this PR, this origin has been renamed to AdminOrigin so that it can also be used for other appropriate extrinsics, rather than creating a separate origin for each specific extrinsic.

This PR also introduces a new extrinsic, set_min_commission, which also uses AdminOrigin.

How does this impact Polkadot builders?

If you are using the Staking pallet in your runtime, you have to rename the config type SlashCancelOrigin to AdminOrigin; otherwise runtime compilation will fail with a missing type error.

How to use

impl pallet_staking::Config for Runtime {
	// Other Config types
-	type SlashCancelOrigin = frame_system::EnsureRoot<Self::AccountId>;
+	type AdminOrigin = frame_system::EnsureRoot<Self::AccountId>;
	
}

Does this change have a related companion PR?

Polkadot companion PR: #6444

Related issue: Revise staking origins · Issue #12930 · paritytech/substrate · GitHub


Make CLI state pruning optional again

PR: Make CLI state pruning optional again by bkchr · Pull Request #13017 · paritytech/substrate · GitHub

Why is this change interesting for builders?

If we want our node to maintain a full state of the blockchain, we can use the --pruning flag:

./target/release/template --name "My node's name" --pruning archive

An archive node keeps records of all blocks but requires a significant amount of disk space; you can change the pruning mode depending on your requirements. Previously, the state_pruning CLI parameter was non-optional and defaulted to 256, maintaining the state of the past 256 blocks, and an error was raised when a matching state-pruning value was not supplied on the CLI.

This PR fixes the issue by making the state_pruning CLI parameter optional again.


Support custom genesis block

PR: https://github.com/paritytech/substrate/pull/12291

Why is this change interesting for builders?

In the genesis configuration we set initial values. Previously, the header of the genesis block could not be customized, because the genesis build picks its values from the genesis storage.

This PR provides more flexibility and control over building the genesis block.



Add CallbackHandle to pallet-assets

PR: https://github.com/paritytech/substrate/pull/12307

Why is this change interesting for builders?

There might be scenarios where you want to perform some operations after creating or destroying assets with the Assets pallet. Prior to this PR, this had to be done manually.

This PR provides an option to register a callback after asset creation or deletion, where you can implement necessary actions, which will be triggered automatically once an asset has been created or destroyed. The original discussion can be found here.

How does this impact Polkadot builders?

As a part of this PR, a new trait AssetsCallback and a new config type CallbackHandle have been added to the Assets pallet. If you are using the Assets pallet in your runtime, you have to add this new config type; otherwise runtime compilation will fail with a missing type error.

How to use

impl pallet_assets::Config for Runtime {
	// Other Config types
	type CallbackHandle = (); // use the unit type if you do not want to register a callback
	
}
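As a sketch of what a custom handler could look like, consider the following. The trait shape is based on the PR; treat the exact method names and signatures as assumptions, and note the counting logic is purely illustrative:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Illustrative event counter standing in for whatever side effect a real
// handler would perform (registering the asset elsewhere, emitting logs, ...).
static CREATED_EVENTS: AtomicU32 = AtomicU32::new(0);

// Sketch of the AssetsCallback trait shape (default no-op implementations).
trait AssetsCallback<AssetId, AccountId> {
    /// Called after an asset has been created.
    fn created(_id: &AssetId, _owner: &AccountId) {}
    /// Called after an asset has been destroyed.
    fn destroyed(_id: &AssetId) {}
}

/// Hypothetical handler that counts creation events.
struct CountingCallback;

impl AssetsCallback<u32, u64> for CountingCallback {
    fn created(_id: &u32, _owner: &u64) {
        CREATED_EVENTS.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    // The pallet would invoke this automatically after asset creation.
    <CountingCallback as AssetsCallback<u32, u64>>::created(&7, &42);
    assert_eq!(CREATED_EVENTS.load(Ordering::SeqCst), 1);
    println!("ok");
}
```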

Does this change have a related companion PR?

Cumulus companion PR: #1947

Related issue: Callbacks for pallet-assets · Issue #12279 · paritytech/substrate · GitHub


Scheduler: remove empty agenda on cancel

PR: Scheduler: remove empty agenda on cancel by muharem · Pull Request #12989 · paritytech/substrate · GitHub

Why is this change interesting for builders?

The Scheduler pallet is used to schedule tasks to execute either at a specified block number or periodically. Inside the pallet, these tasks are stored in agendas, indexed by block number.

Previously, when items were removed from an agenda, they were stored as None rather than being removed completely. As a result, agendas could accumulate trailing None items like [None, None, …]. An agenda containing only None items was removed only once it was serviced.

As part of this PR, a small change has been made to the scheduler pallet's design: a function cleanup_agenda has been introduced to remove trailing None items from an agenda. If all items are None, the entire agenda is removed.
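The cleanup behaviour can be sketched as follows. This is a simplified model, not the pallet's actual storage code: agenda items are represented as plain `Option<u32>` values:

```rust
// Drop trailing `None` entries from an agenda; an all-`None` agenda becomes
// empty and can then be removed from storage entirely.
fn cleanup_agenda(agenda: &mut Vec<Option<u32>>) {
    while let Some(None) = agenda.last() {
        agenda.pop();
    }
}

fn main() {
    // Trailing `None`s are stripped, interior `None`s are kept.
    let mut agenda = vec![Some(1), None, Some(3), None, None];
    cleanup_agenda(&mut agenda);
    assert_eq!(agenda, vec![Some(1), None, Some(3)]);

    // An agenda of only `None`s ends up empty.
    let mut all_none: Vec<Option<u32>> = vec![None, None];
    cleanup_agenda(&mut all_none);
    assert!(all_none.is_empty());
    println!("ok");
}
```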



babe: allow skipping over empty epochs

PR: https://github.com/paritytech/substrate/pull/11727

Why is this change interesting for builders?

BABE's model assumes that at least one block will be produced per epoch. Until now, failing to meet this requirement would halt block production with an "unexpected epoch change" error.

This PR enables skipping over empty epochs, which makes block production possible even if there are no blocks for multiple epochs.

This is particularly useful for those running test/dev-nets where this scenario can occur.

Your friendly neighbourhood Polkadot Release Analysis team,
@hectorb @alejandro @ayush.mishra @bruno


In case you missed the report from the previous release (v0.9.36)