The Roadmap for the Dynamic Allocation Pool (DAP)

Introduction

This post outlines a potential roadmap for implementing the Dynamic Allocation Pool (DAP) as an issuance buffer, together with the complementary changes required in the staking system. Given the scope of this initiative, we aim to structure the work into smaller, self-contained, and well-defined phases. The purpose of this post is to give the community a view of the broad picture while already presenting concrete details for the first phase, for which a dedicated WFC will be published shortly.

Note that the details presented here, particularly those beyond Phase 1, remain subject to change as the design continues to evolve.

A short recap

It is advisable to read the previous post on the Dynamic Allocation Pool (DAP) for more detailed information.

The overarching objective of this initiative is to prepare the Polkadot protocol for the upcoming reduction in issuance and the resulting declining issuance curve. In addition, all proposed changes are designed to lay the groundwork for a future integration of Proof-of-Personhood at the protocol level.

The DAP

The Dynamic Allocation Pool (DAP) is an issuance buffer that soaks up all newly minted DOT as well as protocol revenue (fees, coretime revenue, slashes). This mechanism also makes it possible to separate the budgets for the different payout destinations and to adjust them dynamically.

The DAP will be configured as a multi-asset account (or set of accounts) that allows outflows to be specified both in DOT and in other assets. Most notably, we expect the DOT-native stablecoin to be integrated tightly into the process. While treated as a mechanism distinct from the DAP, the plan is to allow the DAP to use the stablecoin protocol to issue stablecoin payments instead of DOT payments. The outflows of the DAP can be lower than the inflows, effectively creating a strategic reserve of savings that can be used for various purposes in the future. For example, the liquid funds could be used to secure the minted pUSD vault, or to defer consumption and increase outflows beyond inflows in the future.

The following graph illustrates the design:

Note: In contrast to the original post, this graph removes the explicit notion of a strategic reserve and uses the DAP as savings account.

Changes to the staking system

By introducing the DAP as an issuance buffer, we gain the ability to steer outflows more dynamically. This effectively decouples the budget for nominator rewards from validator compensation, allowing interest payments to stakers to be separated from the security and operational costs paid to validators.


Roadmap

The first milestone is naturally defined by the enactment of WFC-1710 on 14 March 2026, which substantially reduces Polkadot’s issuance. Implementing the full set of adjustments outlined in the previous post is not feasible within the limited time remaining. We can, however, deliver a set of important complementary updates aligned with the enactment of WFC-1710. These updates collectively define Phase 1 of the roadmap.

Phase 1 Setup (at or before 14 March 2026)

Note: The content of Phase 1 will be part of the upcoming WFC.

  • General
    • Implement a basic version of the DAP pallet with a permanent account that can hold DOT.
    • Stop the burning of all DOT in the system.
      • DOT from transaction fees (on the Relay Chain and all system chains) and from coretime sales will be collected in their respective system parachains and, in a later phase, transferred to the DAP main account.
      • Treasury burns will be stopped, and the corresponding DOT will instead remain in the Treasury.
    • Redirect DOT acquired from slashes to the DAP.
  • Validators
    • Minimum self-stake: 10’000 DOT
    • Minimum commission: 10%
    • Staking operator proxies: allow the stash account to be separated from staking operations (more details to follow).
  • Nominators
    • Nominator’s stake is exempt from slashes.
    • Drastic reduction of the unbonding time. A nominator’s stake can unbond as soon as it no longer backs any active validator. Depending on when the unbonding extrinsic is cast, this takes between a minimum of 24 and a maximum of 48 hours.
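The 24–48 hour window can be sketched as simple arithmetic, assuming 24-hour eras and that unbonding completes at the end of the first full era after the extrinsic is cast (an illustrative model, not the actual runtime logic):

```python
ERA_HOURS = 24  # assumed era length on Polkadot

def unbond_delay_hours(hours_into_era: float) -> float:
    """Hours until funds unlock, assuming unbonding completes at the end
    of the first full era after the unbond extrinsic (hypothetical model
    of the 24-48h window described above)."""
    remainder = ERA_HOURS - hours_into_era  # rest of the current era
    return remainder + ERA_HOURS            # plus one full era

# Cast right at an era boundary -> full 48h; cast just before one -> ~24h
print(unbond_delay_hours(0.0))   # 48.0
print(unbond_delay_hours(23.9))  # ~24.1
```

Under this model the delay never drops below one full era (24h) and never exceeds two (48h), matching the stated bounds.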

Further changes, such as separating nominator and validator budgets, or enabling payments in vested DOT and stablecoins, require broader technical modifications, along with extensive testing and auditing, and therefore fall outside the scope of Phase 1. As an interim measure, the minimum self-stake requirement is complemented by a temporary minimum commission. The rationale for this approach is outlined here. With the revised validator incentive scheme planned for Phase 2, we expect commission to be removed entirely.


Beyond Phase 1

Note: The next sections describe the roadmap beyond Phase 1 and will be complemented with their own WFCs that enshrine the specifics (such as parameters). Although, in contrast to Phase 1, there is no natural milestone and therefore no specific deadline, the implementation of a fully functional DAP has priority.

Phase 2

  • General
    • Issuance is directed into the DAP.
    • DAP becomes multi-asset and can, for example, acquire, hold, and distribute stablecoins.
    • Outflows become individually configurable, governed by dedicated OpenGov track(s).
  • Separation of outflow streams for Validators and Nominators.
    • Validators will receive two types of payments: a fixed payment denominated in stablecoins (potentially Hollar or other stablecoins until pUSD is launched), and a second payment in vested DOT to align validators with the long-term success of Polkadot and to incentivize accumulating self-stake. The latter is calculated through a new reward curve with diminishing rates of return. For more details, see here.
    • Nominators will be rewarded from the DAP based on the configuration of the algorithm.
  • Integration of automatic and periodic payments (likely in stablecoins) for collators of Polkadot’s system chains.
  • (Potentially) An extension to the staking operator proxies that allows for trustless deposits of self-stake between a validator and another party.
  • Dynamic inflow to Treasury.
    • The Treasury should receive periodic income from the DAP to fund expenses. While newly minted DOT are released to the DAP on a daily schedule, it may be preferable to replace daily transfers to the Treasury with a one-off payment covering anticipated expenses for the next, for example, 6–12 months. The size of such a transfer would be justified through a dedicated proposal that clearly outlines and budgets upcoming expenditures, and would be subject to an OpenGov referendum on its own dedicated track.

      Under this setup, excess funds remain in the DAP, while the Treasury is positioned as an active funding mechanism for ecosystem initiatives operating on a budget-based model. This encourages more rigorous discussion among community members about the protocol’s needs and reinforces the narrative of DOT as a scarce resource that requires sufficient justification to be spent. Note that this does not imply that specific proposals must be submitted in advance. The intention is not to pre-commit to concrete expenditures, but to agree on high-level budgets (for example, “marketing budgets”, “ecosystem incentive budgets”, or “fellowship salary budgets”).

      As we expect the Treasury to hold significantly more funds than required for the initial spending period, it may be sensible to transfer excess funds to the DAP once it is fully functional. Front-loading liquidity for the DAP in this way also has the benefit of reducing its reliance on the daily inflow schedule defined by the issuance curve.
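The budget-based, one-off transfer described above can be sketched as follows; all line items, amounts, and the Treasury balance are illustrative placeholders, not actual Polkadot figures:

```python
# Hypothetical sizing of a one-off DAP -> Treasury transfer covering a
# 6-12 month budget period. Every number here is illustrative.
budget_items_dot = {
    "marketing": 1_500_000,
    "ecosystem incentives": 3_000_000,
    "fellowship salaries": 2_000_000,
}
treasury_balance_dot = 4_000_000  # assumed current Treasury holdings

total_budget = sum(budget_items_dot.values())
# Only top up what the Treasury cannot already cover from its balance.
transfer = max(0, total_budget - treasury_balance_dot)
print(transfer)  # 2500000
```

The point of the sketch is that the proposal justifies a single number (the top-up) from itemized high-level budgets, rather than committing to individual expenditures in advance.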

Next Steps

As a next step, we’ll post a Wish-for-Change referendum to obtain legitimacy on the changes proposed for Phase 1.


The WFC has been submitted here: https://polkadot.subsquare.io/referenda/1827


Thanks for the outline, @jonas. This looks like a feasible path toward the DAP goals that I support.

As a validator from the soon-to-be former DN program, I have questions regarding two parameters: minimum self-stake and minimum commission.

Historically, self-stake requirements in the 1KV and DN programs have inversely correlated with DOT price. The proposed 10,000 DOT self-stake (mirroring the cancelled Cohort 4) and the 10% commission seem designed to ensure validator profitability at current prices while ensuring skin-in-the-game.

While these figures seem reasonable now, could you make the reasoning behind them more explicit? It would be helpful to know if there is a framework for adjusting these parameters as market conditions change, as this would help us anticipate future updates.

(I understand that this might only be transitory to make sense in the current environment, until validator compensation is moved to the new model).

Hi @deigenvektor.io,

Thanks for your comment. You can find more information about the rationale for these two parameters here. Both serve to ensure that validators have proper skin in the game, which underpins their resilience. Additionally, the 10% commission ensures profitability (a necessary condition for resilience). Interestingly, the ELVES protocol denominates the value-at-stake for validators in DOT, so these numbers automatically scale with market conditions. Having said that, Phase 2 will likely restructure the validator incentives and make them much easier to adjust if the need arises. This will again be done through OpenGov.


This is the critical analysis I shared previously; it was treated as spam, so I am resharing it, and I want to insist on what I see as strong rationales. Basically, the DAP model conflates “allocation” with “wealth” in a core way that breaks the Constant Relative Risk Aversion framing, which would sound compelling for stakers’ power to decide how much of future allocation to spend, but is not anchored on wealth.

This is basically an economic trap, because allocation IS NOT wealth except under speculation, which brings the economic security discussion into price analysis of possible outcomes; that is what I try to correct with the Bellman equation.

Stakers need to understand how risky it is to push further into centralized governance of a competitive asset (the token), in this case to capture the expenditure of the hypothetical future value of tokens not yet issued. The main idea of reducing issuance is to reduce the cost of the competitive asset in its real-world usage, outcompeting the alternatives. This proposal adds costs in the form of risk for its token holders, which, I have argued, skirts the fine line of antitrust compliance, raising the risk of this centralization maneuver.

On the Role of Purchasing Power and the CRRA Assumption in the DAP Model

Once again, the proposed economic framework appears detached from purchasing power—a foundational component in any tokenomics model. I’d like to re-emphasize this point, as previously discussed in Polkadot’s Economic Resilience and the Role of Inflation.


1. Purchasing Power: The Missing Anchor

The DAP model is built upon deterministic issuance, consumption smoothing, and Constant Relative Risk Aversion (CRRA) assumptions. But it never grounds these flows in the real value of DOT, i.e., its purchasing power.

What constitutes wealth here? The proposal blurs lines without probabilistic weighting. Consider this breakdown:

| Wealth Type | Description | Relation to Purchasing Power | DAP Handling |
| --- | --- | --- | --- |
| Coretime/Transaction Revenue | Protocol income from blockspace and fees | Scales with adoption; volatile in fiat terms | Inflows to DAP, but not dynamically modeled |
| Minted DOT Supply | Seigniorage from issuance (e.g., 55M DOT/year post-2026) | Dilutive if unbacked; erodes value in bear markets | Primary funding, smoothed via reserves |
| Productive Value | Security/blockspace utility | Emergent from network use; ties to validator ROI | Implicit in incentives, but static (r = 0) assumption |

Treating “allocation” as “wealth” (with 1:1 equivalence over time) implicitly assumes stable DOT value—yet that assumption itself needs modeling, especially with exogenous shocks (e.g., 50% price drops halving attack costs). Without anchoring to purchasing power, CRRA loses validity for real resilience.

To stress-test: Assign rough probabilities to scenarios—what’s the expected utility under a 30% crash likelihood?

| Scenario (Prob.) | Purchasing Power Δ | Reserve Erosion | Risk Mitigation? |
| --- | --- | --- | --- |
| Bull (40%) | +20% | Low | Revenue boost |
| Stagnant (40%) | 0% | Medium | Baseline smoothing |
| Crash (20%) | -50% | High | Dynamic triggers needed |
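As a rough illustration of the stress test suggested above, here is a minimal expected-utility calculation over the three scenarios, using log utility on purchasing power purely for illustration (the baseline reserve value of 1.0 is an arbitrary unit):

```python
import math

# Expected log utility of reserves under the scenario table's probabilities.
# Deltas are purchasing-power changes relative to a baseline of 1.0.
scenarios = {
    "bull":     (0.40, +0.20),
    "stagnant": (0.40,  0.00),
    "crash":    (0.20, -0.50),
}

expected_utility = sum(p * math.log(1.0 + delta)
                       for p, delta in scenarios.values())
print(round(expected_utility, 4))  # -0.0657
```

Even with only a 20% crash probability, the concavity of log utility makes the downside tail dominate: the expected utility is negative, which is exactly the kind of signal a purchasing-power-aware model would surface.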

2. CRRA and Log Utility: Is It Justified?

Under CRRA, the model assumes:

[equation image: CRRA utility specification]

with optimal consumption declining ~9% annually. Yet CRRA presumes a behavioral agent optimizing real utility over time. In this case, there is no “agent”—only a deterministic protocol distributing funds to validators, Treasury, and reserves.

Thus, the log form becomes a formal convenience for tractable solutions—not an economic truth grounded in network behaviors like validator exits or capital flight.
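For reference, the ~9% annual decline follows mechanically from the standard cake-eating solution under log utility with β = 0.91 and r = 0, where optimal consumption is c_t = (1 − β)·β^t·W0 and thus shrinks by a factor of β each period. A quick numerical check (W0 is an arbitrary illustrative value):

```python
# Sketch of the claimed ~9% annual decline: with log utility, beta = 0.91
# and r = 0, the cake-eating solution is c_t = (1 - beta) * beta**t * W0,
# so consumption declines by (1 - beta) = 9% per year.
beta = 0.91
W0 = 100.0  # illustrative initial wealth (allocation pool)

consumption = [(1 - beta) * beta**t * W0 for t in range(3)]
decline = 1 - consumption[1] / consumption[0]
print(round(decline, 2))  # 0.09
```

This confirms the ~9% figure is a direct artifact of the chosen β and functional form, not an output of network behavior.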

If the goal is resilience or utility, why not consider a convex utility aggregator over dimensions like {security, liquidity, adoption}, derived from actual system metrics? A Bellman equation approach—

[equation image: Bellman equation]

—with state variables like staking rate and DOT price could better capture feedback loops than static math does.


3. The Discount Factor β ∈ (0,1)

The introduction of a discount factor (e.g., β = 0.91) creates a manual time lever in protocol design, balancing early ecosystem bets with long-term reserves.

In macroeconomics, β reflects human impatience. In Polkadot, it becomes a governance dial—tunable to shift funds across eras, potentially via OpenGov tracks or advisory committees.

This raises centralization risk. Why not derive β from on-chain observables—like staking velocity, validator cost deviation, or adoption metrics? Endogenizing it via dynamic programming would align with actual network signals, avoiding arbitrary tuning.
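To make the suggestion concrete, here is one hypothetical way to endogenize β from on-chain observables; the weights, bounds, and choice of signals are all assumptions for illustration, not a proposal for the runtime:

```python
# Hypothetical endogenous discount factor: map on-chain observables to a
# bounded beta via a weighted score. Healthier signals -> more patience
# (higher beta, i.e., more weight on future allocation).
def endogenous_beta(staking_rate: float,
                    cost_coverage: float,
                    adoption_growth: float) -> float:
    """Return beta in [0.85, 0.97] from three illustrative signals:
    staking_rate (share of DOT staked, 0..1), cost_coverage (validator
    income / costs, capped at 1), adoption_growth (clamped to 0..1)."""
    score = (0.4 * staking_rate
             + 0.4 * min(cost_coverage, 1.0)
             + 0.2 * max(min(adoption_growth, 1.0), 0.0))
    return 0.85 + 0.12 * score

print(round(endogenous_beta(0.55, 1.2, 0.3), 4))  # 0.9316
```

The exact mapping matters less than the principle: β moves with measurable network state rather than being tuned by a committee.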


4. Governance and Allocation Efficiency

If decisions around the incentive shape, β, or stablecoin collateralization remain in the hands of a small advisory committee, how can stakers verify that allocations optimize network utility?

This setup invites governance capture unless it’s anchored in measurable, on-chain metrics, such as:

  • Validator cost coverage ratios (e.g., vs. $2k baseline)
  • Treasury execution efficiency
  • DOT purchasing power over time (e.g., fiat-adjusted reserves)

Without transparent dashboards or oracle-derived indicators, efficiency remains speculative. Proposal: Embed KPI oracles for real-time audits?


5. Stablecoin Linkage: Closing the Loop

If DOT-backed stablecoins will be minted by DAP flows (e.g., overcollateralized at 150% for validator and Treasury fiat needs), purchasing power becomes endogenous to protocol solvency.
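A minimal sketch of the 150% overcollateralization constraint mentioned above; the prices and amounts are illustrative and the function names are hypothetical:

```python
# Overcollateralized stablecoin constraint: locked DOT value must be at
# least MIN_COLLATERAL_RATIO times the outstanding stablecoin debt.
MIN_COLLATERAL_RATIO = 1.5

def max_mintable_pusd(dot_locked: float, dot_price_usd: float) -> float:
    """Maximum stablecoin debt the locked DOT can back at the minimum ratio."""
    return dot_locked * dot_price_usd / MIN_COLLATERAL_RATIO

def is_solvent(debt_pusd: float, dot_locked: float, dot_price_usd: float) -> bool:
    return dot_locked * dot_price_usd >= MIN_COLLATERAL_RATIO * debt_pusd

print(max_mintable_pusd(300.0, 5.0))        # 1000.0
# A 50% price drop breaks the ratio on debt minted at the old price:
print(is_solvent(1000.0, 300.0, 2.5))       # False
```

The second call illustrates the endogeneity problem: a position minted at the maximum under one price becomes undercollateralized after a 50% drop, which is exactly where liquidation buffers or reserve backstops would have to step in.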

Key open questions:

  • Who ensures reserve adequacy amid volatility?
  • How are liquidation risks modeled (e.g., collateral buffers, backstops from strategic reserves)?
  • Where is the feedback loop between inflation, volatility, and real obligations?

Leaving the stablecoin outside the optimization loop creates a disconnect between issuance and real-world economic costs—precisely what purchasing power analysis resolves.


Key Questions

  1. What justifies treating allocation as “wealth” for applying CRRA (especially with r = 0)?
  2. Why log utility? Is there a principled rationale, or is it for mathematical simplicity?
  3. Can time preference (β) emerge endogenously from on-chain behavior instead of being fixed?
  4. How can stakers evaluate the efficiency of economic choices made by an unelected committee?
  5. Isn’t purchasing power a required variable in the model’s objective function?
  6. If backed by a stablecoin, how will liquidity and resilience be maintained—e.g., via overcollateralization, reserve buffers, or automated rebalancing?

The DAP is a mathematically elegant construct, but its optimization remains nominal unless grounded in real value. Without integrating purchasing power and liquidity feedbacks, the model risks optimizing abstractions rather than building true economic resilience.

Let’s iterate—@GehrleinJonas, any thoughts on Bellman simulations or Kusama testbeds to further stress-test these dynamics? Community, what’s your take on endogenous β?

Cheers,
Diego (@DiegoTristain)

Diego raises a point that deserves more direct engagement from DAP proponents: the model optimizes nominal token flows, but network resilience ultimately depends on real purchasing power.

To put it concretely: if DOT drops 50%, the protocol still issues the same number of tokens, but validators still have fiat-denominated costs (hardware, bandwidth, labor). The “smoothed consumption” the model achieves exists only in DOT-denominated terms—it doesn’t smooth the actual economic capacity of the network to pay for security.

That said, I’d push back slightly on the proposed remedies. Embedding DOT price as a state variable in a Bellman framework sounds theoretically sound, but it creates oracle dependencies and potential feedback loops that could introduce new instabilities. If issuance responds to price, you risk pro-cyclical dynamics where falling prices trigger reduced issuance, which could further depress staking yields, which could accelerate sell pressure.

A middle path worth exploring: rather than making the core model price-aware, could the reserve mechanism include explicit drawdown triggers based on purchasing-power thresholds? Something like “if validator cost coverage falls below X% of baseline, release Y from strategic reserves”—rules-based, transparent, but anchored in real costs rather than nominal allocations.

The cost baseline itself could be set via governance vote on a quarterly cadence rather than pulled from a price oracle. Validator operating costs (hardware, electricity, bandwidth) move slowly compared to token prices, so a low-frequency governance update is sufficient. This avoids the real-time oracle dependency while still grounding the model in economic reality. And validators themselves become a natural check—if governance sets costs too low, they’ll push back (or exit); if someone tries to inflate costs artificially, token holders resist. Adversarial balance rather than a single point of failure.
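The rules-based trigger described above might look like the following sketch; the threshold, release fraction, and cost baseline are placeholders for values governance would set, not proposed parameters:

```python
# Drawdown trigger: if validator cost coverage falls below a threshold of
# the governance-set baseline, release a fixed tranche from reserves.
COVERAGE_THRESHOLD = 0.8   # X: trigger below 80% of baseline coverage
RELEASE_FRACTION = 0.05    # Y: release 5% of reserves per period

def reserve_release(validator_income_usd: float,
                    baseline_cost_usd: float,
                    reserve_dot: float) -> float:
    """DOT to release this period under the hypothetical trigger rule."""
    coverage = validator_income_usd / baseline_cost_usd
    if coverage < COVERAGE_THRESHOLD:
        return RELEASE_FRACTION * reserve_dot
    return 0.0

print(reserve_release(1_400.0, 2_000.0, 1_000_000.0))  # 50000.0 (coverage 0.7)
print(reserve_release(1_900.0, 2_000.0, 1_000_000.0))  # 0.0 (coverage 0.95)
```

Because both inputs and rule are transparent, anyone can verify when a release should fire, which is the point of preferring a rule over discretionary reserve management.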

It’s also worth noting that this isn’t the only approach being discussed in the ecosystem. The burn-based tokenomics RFC for Kusama takes a different tack entirely—rather than optimizing allocation via complex modeling, it proposes simple burn mechanisms that create demand-responsive scarcity automatically. High usage leads to more burns, low usage doesn’t. The market determines scarcity rather than a preset formula. Whether or not you prefer that model, it highlights that there are simpler alternatives that achieve purchasing-power feedback without the CRRA assumptions Diego is questioning.

Curious whether any simulations have stress-tested the DAP under historical volatility scenarios (e.g., the 2022 drawdown).


Just one angle of the argument I am trying to expose here (I see many):

By hard-coding a 10% minimum commission, the protocol effectively acts as a cartel manager, automating the price-fixing that antitrust laws usually try to prevent.

Antitrust regulators (like the DOJ or EU Commission) often use the “Hub-and-Spoke” theory to prosecute algorithmic price-fixing.

• The Hub (The Protocol/DAP): The central entity or algorithm that sets the terms. In this case, the updated Polkadot runtime code.

• The Spokes (The Validators): The competitors who, instead of competing to offer the lowest price, essentially “agree” to the Hub’s terms by running the software.

• The Rim (The Agreement): The fact that validators are aware that all other validators are bound by the same 10% floor eliminates the risk of being undercut.

Example case:

In United States v. Apple (the e-books case), Apple was the “hub” that coordinated publishers to raise prices. Here, the DAP code plays the role of Apple, coordinating validators to maintain a 10% commission floor.

In the e-books case, Apple was found guilty of orchestrating this illegal price-fixing scheme, ultimately settling for hundreds of millions in consumer refunds.

I think it is important to expose this argument drop by drop, if necessary, so that all interested audiences understand the details of this financial risk.

If, on the other hand, we begin a real discussion about tokenomics, it is more than welcome; that was my original intention when I approached you, @jonas, at the in-person master class you gave us at UC Berkeley, where I asked questions to first understand the protocol’s operational costs. I learned there that the main cost of the protocol is security, and my thesis since then is that the competitive price of a protocol is where the marginal cost of security equals its marginal income. I remember asking a question about this in California.

@hantoniu-codeberg I appreciate your interest in mediating around these rationales. I’ll come back to your mediation afterwards.

As mentioned here already, and considering the above, we believe we should drop RFC-97 and not continue with the related implementation.

Fast unbonding is arguably less critical for validators. In the future, we might want to consider enabling fast unbonding for a validator that has never entered the active set.

A related PR implementing the nominator-related changes for Phase 1 is currently under review; it is aimed to be merged ASAP and deployed together with the other relevant changes for DAP / Phase 1 by March, if the related WFC is approved.


A second angle on the DAP’s antitrust liability, under the new CLARITY Act (I see many):

By trying to centrally manage the economy (fixing prices and wages with a high token concentration) via the DAP, Polkadot risks failing the CLARITY Act’s “Maturity” test.

The most “antitrust-compliant” path under the CLARITY Act would be to remove the 10% floor and let the market determine commissions, thereby proving that no “group of persons” controls the network’s economics.