Proposal: Dynamic Allocation Pool (DAP)

Just a small addition: as well as serving as additional collateral, a big advantage of eagerly minting the remainder of the issuance is that it provides an extra source of liquidity between DOT and network stables in extremis.

If some black swan event were to cause mass liquidation of a stable backed by DOT, the positions would be liquidated and the corresponding value of stables bought and burnt, potentially making the stable illiquid and having a knock-on effect on its valuation (depegging). This reserve could be deployed as collateral to mint the stable and provide liquidity, avoiding a death spiral.

With overcollateralised stables you are always just choosing a point on the guarantees/risk trade-off. Eagerly minting could mean that the collateral required for the stable is lower with the same peg guarantees, or that the guarantees are stronger at a given collateral requirement.
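As a toy illustration of that trade-off (all numbers are invented for the example, not taken from any proposal), the eagerly minted reserve acts like a backstop that lowers the collateral ratio needed to survive a given price drawdown:

```python
# Toy model: smallest collateral ratio that keeps a DOT-backed stable
# fully backed through a given price drawdown, with and without a
# reserve backstop. All numbers are hypothetical.

def min_collateral_ratio(max_drawdown: float, reserve_fraction: float = 0.0) -> float:
    """Solve ratio * (1 - max_drawdown) + reserve_fraction >= 1.0,
    where `reserve_fraction` is the backstop's top-up capacity expressed
    as a fraction of the stable's face value."""
    return (1.0 - reserve_fraction) / (1.0 - max_drawdown)

# Survive a 50% DOT crash: no reserve vs. a reserve worth 20% of face value.
print(min_collateral_ratio(0.5))        # 2.0 -> 200% collateral needed
print(min_collateral_ratio(0.5, 0.2))   # 1.6 -> 160% suffices with the backstop
```

The point matches the comment above: the same peg guarantee (surviving the drawdown) is met at a lower collateral requirement when the reserve can be deployed.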

This obviously isn’t without cost and would need to be investigated and simulated. Equally obviously, it is not a replacement for a well-designed stable and a healthy DeFi ecosystem with market makers/arbitrage.


Well reasoned breakdown of the pros & cons here. Definitely seeing the merits to this idea @joepetrowski. I’d be most concerned about the optics as you both are mentioning. Is “marketcap” ultimately decided by only the circulating supply? Or is that not standard?

Not sure whether issuing bonds against future issuance as collateral would work to the same end, if insta-minting doesn’t turn out to be popular?

Thanks a lot for all the explanations.

I see, so does this essentially mean that the self-stake could be “crowdfunded”? And if so, would it be up to the validator and its external stakeholders to determine arrangements between themselves regarding the distribution of the incentive rewards the validator would receive for the combined self-stake?
Alternatively, could we envision this being done in a trustless way, such as with the current nomination reward payouts? (And assuming this last scenario, from the point of view of the external stakeholder who entrusts their DOT to the validator, how would it differ from the current nominator role and slashing risk?)


The only aspect where I see the unbonding queue being important to validators is for validators that don’t make it into the active set. Contrary to now, where you can validate with just 1 DOT of self-stake, having even the minimum of 10k DOT sitting idle is a big opportunity cost and dormant capital. This becomes exaggerated for validators that bond the optimal 30k or more.

Basically, this is needed not for validators that choose to stop validating, but as a recuperation method for those who want to validate but are not selected.


The idea behind this is that any form of crowdsourcing is between people that have an arrangement between them and self-organise the management of the self-stake through a single account, e.g. a multisig, which is also why the validator whitelists the specific account that can place the self-stake.

Theoretically it could be done trustlessly, but then you’re right, it’s not that different from nominations. But as with nominations, this removes the validator’s skin-in-the-game. So, you want trust and a relationship between the validator and the person(s) placing the self-stake in order to consider it skin-in-the-game and ensure it contributes to the network’s resilience, i.e. the validator cares if it’s slashed.

So, if we consider that the self-stake is not placed by a virtual staker, I think it would be technically too complicated (if at all possible) to do the reward payouts trustlessly. And it’d be pointless too. If 2-3 people trust each other to place the self-stake, they should trust each other to split the rewards too. And if I’m not mistaken, the proposal suggests that the rewards for the self-stake go to the account that placed it, not to the validator (unless of course it’s the validator who placed it).


If we agree to move forward with the proposal, I strongly support the idea of splitting the deliverables into phases, if possible.

For example, considering the uncertainty surrounding the timeline and some specifications of the DOT-backed stablecoin, I believe we can still deliver meaningful portions of work without that information “immediately”. e.g.

  • Introduction of the DAP/issuance buffer, including transferring any (treasury and non-treasury) burning into the issuance buffer instead.
  • Eagerly minting the remainder of the issuance - if there is consensus on that
  • No slashing for nominators

Once we have more clarity and a concrete timeline regarding stablecoin and future integration with PoP, we can discuss, agree and add the missing pieces of the puzzle.

With the above in place, we can better estimate the urgency of introducing the unbonding queue, considering what @kianenigma and @michalis mentioned and all the moving pieces around staking.

And @jonas : sorry for slightly derailing the main discussion, which should focus on the core proposal rather than implementation plans/ details.

Hi @jonas, thanks for posting this article - it looks very interesting and promising.

From a validator perspective, I fully agree with the concept that validator income should be derived from two components - fixed (fiat) & variable (DOT). This model ensures validator stability while letting validators share in Polkadot’s potential growth.

Moreover, adding a 10k self-bond requirement for having skin in the game is also very logical to me - the idea is to make validators part of the ecosystem and not just infra providers.

A few things for consideration:

  1. Does the Polkadot ecosystem still target 1,000 validators? Because if it does, it significantly impacts many of the parameters you’ve mentioned. Will they be updated accordingly? For example, the self-stake, minimum commission, and validator income (both fixed and variable).

  2. Keep in mind that validators’ earnings dropped significantly in recent years due to three main reasons: price drop, set expansion, and the inflation cut. As we know, inflation is going to be halved soon, and based on my understanding the set should almost double in the future. On top of that, adding the 10k self-bond requirement for running a Polkadot validator makes it less enticing and too risky.

  3. Therefore, I think we should have boundaries for validator compensation. For example, validators’ total monthly reward (both fixed and variable) would be between $4k and $10k per month, regardless of DOT price. Meaning whether DOT is $1 or $100, the monthly compensation will be between $4k and $10k (this range is just an example and can be discussed in more detail).

  4. If validators are required to have a 10k self-bond (which is slashable), they should have better monthly protection. I wouldn’t suggest this range idea if there weren’t a 10k self-bond requirement, but if we expect validators to run their nodes on strong infra in diversified locations, spend their time and expertise securing the network, and on top of that invest 10k DOT (which I’m in favor of) per node, then at least they should have a solid floor while running a Polkadot node.
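A minimal sketch of the floor/cap idea from point 3 (the $4k–$10k band is the commenter’s example range; the fixed/variable split and all inputs are assumptions):

```python
# Clamp a validator's total monthly compensation (fixed fiat + variable DOT)
# into a proposed [floor, cap] band. All parameter values are illustrative.

def monthly_reward_usd(dot_reward: float, dot_price: float, fixed_usd: float,
                       floor: float = 4_000.0, cap: float = 10_000.0) -> float:
    total = fixed_usd + dot_reward * dot_price
    return max(floor, min(cap, total))

print(monthly_reward_usd(1_000, 1.0, 2_000))    # 4000.0  (floor kicks in at DOT = $1)
print(monthly_reward_usd(1_000, 100.0, 2_000))  # 10000.0 (cap kicks in at DOT = $100)
```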

*** Disclaimer - I’m running 6 Polkadot validators (two of them are DN nodes) with ~70k DOT self-bond, combined

Best,

Rafael


I agree that we should be thinking about how to split this into reasonable deliverables. Some of the components make only sense to integrate (and launch) with other components. Other pieces might be rather stand-alone. With regard to the stable-coin: the DAP is rather unaffected by when it launches, because the initial budget to mint could simply be 0 and we could change and update the DAP later to access the stablecoin “API”.

I’ll reach out to parity devs to discuss roadmaps.


Hi Rafael

  1. Does the Polkadot ecosystem still target 1,000 validators? Because if it does, it significantly impacts many of the parameters you’ve mentioned. Will they be updated accordingly? For example, the self-stake, minimum commission, and validator income (both fixed and variable).

Eventually we want to scale the number of validators. I don’t know what the plan is for the immediate future, but given that there are enough cores available for the demand, and that additional validators cost money to maintain and incentivize, I’d argue that it does not make sense to scale further in the short term. The parameters are scaled to work with 600 validators.

  2. Keep in mind that validators’ earnings dropped significantly in recent years due to three main reasons: price drop, set expansion, and the inflation cut. As we know, inflation is going to be halved soon, and based on my understanding the set should almost double in the future. On top of that, adding the 10k self-bond requirement for running a Polkadot validator makes it less enticing and too risky.

The example budget in the proposal is accounting for this in two ways. 1) a part of the validator payment is fiat-denominated via the stablecoin and 2) the DOT rewards for validators are recomputed.

  3. Therefore, I think we should have boundaries for validator compensation. For example, validators’ total monthly reward (both fixed and variable) would be between $4k and $10k per month, regardless of DOT price. Meaning whether DOT is $1 or $100, the monthly compensation will be between $4k and $10k (this range is just an example and can be discussed in more detail).

There is only a limited range of actions when it comes to price volatility, as it affects all stakeholders (validators, Treasury, nominators). The “best” we can do is to provide a reasonable foundation with the fixed reward (denominated in $), but ultimately validators are part of an ecosystem that is subject to volatility.

  4. If validators are required to have a 10k self-bond (which is slashable), they should have better monthly protection. I wouldn’t suggest this range idea if there weren’t a 10k self-bond requirement, but if we expect validators to run their nodes on strong infra in diversified locations, spend their time and expertise securing the network, and on top of that invest 10k DOT (which I’m in favor of) per node, then at least they should have a solid floor while running a Polkadot node.

The codebase of Polkadot is so mature that a slash on self-stake only happens if validators intentionally run malicious software or make very basic operational mistakes. In other words, a capable validator should never fear a slash on their stake if they are honest. That means we can expect them to increase their self-stake.

Hi Jonas,

Thanks for the detailed answers

I just want to clarify my comment about the 10k self-bond requirement: my point wasn’t about the risk of slashing (which, as you said, shouldn’t happen for a good operator), but about the investment risk.

If we expect validators to have skin in the game (which, again, I’m 100% for), then they should have a reasonable floor on their monthly income; otherwise it might be too risky to run a Polkadot validator.

Anyhow, it seems that we’re aligned here.


Thx Jonas for sharing the information with the community. :innocent:

Hi @joepetrowski and @seadanda

Thanks for the comments and for starting a discussion on this. Using the future issuance as collateral to back the stablecoin today is definitely an interesting idea and worth exploring, and I think the final assessment of this idea depends a lot on the details and the design of the stablecoin. While this is a pretty important decision to make, it appears to me that it has only limited impact on the design of the DAP. Whether it distributes freshly minted tokens or “unlocks” them from a preminted pool makes little difference here.

Nevertheless, I want to share some critical thoughts that I have on this. Feel free to disagree and let me know if you see things differently.

The first thing that we’d need to accept is that the future income as collateral is only meaningful if we actually implement a mechanism that could mint these DOT and liquidate them at any time. In a situation with high price volatility, where this becomes necessary, we would then potentially violate the issuance schedule and increase the supply of DOT beyond the agreed-upon threshold. This risk is something that the community must explicitly agree to, because it could permanently alter the total supply curve.

Apart from this fundamental fact, there are some more points that come to mind:

This idea of “future revenue as collateral” has precedent. For example, when you lease an apartment or take out a mortgage, usually the counterparty will ask for proof of income (e.g. an employment contract). They are allowing you access to a resource (a home) based on presumed future income.

This is true. There is, however, a slight difference in our case, because the protocol has full control over the future income and can mint any amount at little cost. In contrast, the buyer in your example has to work for all future income and expend life energy. On top of that, the collateral has real-world value that is not affected by the buyer defaulting. In our case, a default would also drastically decrease the value of the collateral. I’d argue the better example would be a nation state expanding its fiat money supply by printing in times of crisis to sustain large immediate expenses. In some cases this worked out fine and the “debt on the future” was paid back; in other cases it was not, creating huge inflation and eventual collapse.

That brings me to the second point, risk. In the prior example, there is risk that the renter/buyer will default (e.g. they lose their job and by extension the “collateral”). But this hard cap of issuance is literally guaranteed by the protocol itself. As in, there is zero risk that the protocol does not issue the future DOT up to the hard cap.

I’d disagree with treating the future DOT issuance as “zero-risk” and making no distinction between having it now or in the future. The hard cap is supposed to be reached by 2160. I don’t think the market expects the Polkadot protocol to be alive in 2160 with 100% probability. It might even be questionable whether humanity is still around 30 years from now (when, arguably, most of the future DOT would be minted).

So, if you have a zero-risk guarantee of future income, and you have utility for that income now, it seems rather ascetic to voluntarily eschew that utility.

I agree that it would be great to tap into that utility today but, again, it is not risk-free. So, ascetic could also mean conservative, a potentially desirable property of a stablecoin design.

There is also a critical oracle-related risk: a malfunctioning price oracle could trigger a liquidation of all future income. This would not only harm the stablecoin itself but would also be detrimental to DOT. While certain safeguards could mitigate this, the underlying issue remains.

If we agree that this approach is far from risk-free, the only credible way to treat future issuance as collateral today is to apply very heavy discounting. That discounting must incorporate all the risks discussed above (and likely several more). This immediately raises a fundamental question: what is the actual value of such collateral? We (or the market) must assign a value to it, otherwise we cannot determine how much stablecoin can be minted against it, nor assess the system’s health (i.e., the ratio of USD-denominated economic mass to the collateral base). Using the current DOT price would be problematic, since that price reflects the market’s assessment of the current 1.6b circulating supply, not a hypothetical future supply of ~2.1b.
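As a rough sketch of what such heavy discounting might look like, one could weight each future year’s issuance by a survival probability as well as a time discount. All parameter values below are invented for illustration:

```python
# Present value of future issuance used as collateral, discounted for
# time preference and protocol-survival risk. Hypothetical parameters.

def discounted_collateral(annual_issuance: float, years: int,
                          discount_rate: float, annual_survival: float) -> float:
    """Sum each year's issuance, weighted by the probability the protocol
    is still alive to mint it and discounted back to today."""
    return sum(
        annual_issuance * (annual_survival ** t) / ((1.0 + discount_rate) ** t)
        for t in range(1, years + 1)
    )

# ~500M DOT of remaining issuance spread evenly over 100 years,
# 5% discount rate, 97% per-year survival probability.
pv = discounted_collateral(5e6, 100, 0.05, 0.97)
print(f"{pv / 1e6:.0f}M DOT")  # a small fraction of the 500M face amount
```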

To summarize, the idea is interesting and warrants further discussion but it is also very sensitive to the implementation details and the community sentiment. The robustness of the stablecoin ultimately depends on the value of its collateral, and in a single-asset design this value is tightly coupled to confidence in that asset. In a crisis, this linkage could create a death-spiral: falling trust reduces the perceived value of the collateral, prompting further unlocking or liquidation of DOT, which in turn further erodes trust. This dynamic endangers not just the stablecoin’s peg but also DOT itself. Tying future issuance to a mechanism that allows it to be “summoned” for liquidation means significant systemic risk for DOT (compared to “only” risking pUSD).

In my current proposal, I am overcollateralizing (and further securing) the stablecoin with “real DOT”, which I think is the more robust approach. Admittedly, the daily inflow pattern means that we are liquidity-constrained at the beginning. This problem could be solved by taking a loan from the Treasury to front-load some liquidity for the stablecoin.
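For contrast, the mint capacity of the “real DOT” approach is simple arithmetic; a sketch with assumed numbers (the 150% ratio echoes the figure mentioned elsewhere in the thread):

```python
# Max USD-stable mintable against locked DOT at a given collateral ratio.
# Prices and ratio are illustrative assumptions.

def mintable_stable(dot_collateral: float, dot_price_usd: float,
                    collateral_ratio: float = 1.5) -> float:
    return dot_collateral * dot_price_usd / collateral_ratio

# 10M DOT locked at $4, 150% collateralization:
print(mintable_stable(10e6, 4.0))  # ≈ 26.7M stable units
```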

One of the core problems in the Polkadot ecosystem is the constant over-engineering—building tools and abstractions users don’t actually need. Instead of adding more complexity, maybe it’s time to focus effort where it matters: rescuing parachains like Polimec and KILT, strengthening real use cases, and investing in something the ecosystem desperately lacks—a solid privacy layer.


“real DOTs” will not exist if the buying/selling pressure is not revealed by a “real buying/selling” liquidation. It is interesting that a cryptographic primitive would signify different things for the same token, DOT, depending on the context. It is like when money is loaned, versus perpetually generated by revenue (hopefully in a distributive economy of many, not just a few holders); Money is fungible, but its utilisation gives it instrumental properties which can vary substantially depending on its context.

You can have any quantity of DOT issued, but if it doesn’t meet market demand with a revealed price from a “real transaction” between two peers, then its value is subjective, given by users’ interpretation of risk and by other real transactions, as you well mentioned.

In this sense, this architecture looks very similar to a perp dex-style vault + insurance fund + funding-rate balancing.

The similarities:

  • Both are overcollateralized vaults that lock a volatile token to issue a spendable “stable” claim, with automatic liquidation/buffer mechanisms to protect solvency.
  • Both use a continuous, algorithmic payment/flow (funding rate vs. smoothed outflow schedule) to prevent the system from drifting too far in one temporal direction and keep the “perpetual” structure sustainable.
  • Same role: a hoard of volatile tokens that absorbs shocks so the rest of the system (traders or protocol spenders) isn’t wiped out when collateral value crashes.
  • Both systems use parametric curves and on-chain signals to steer participant behavior toward a target equilibrium (balanced longs/shorts vs. uniform validator stake).
  • The core promise is identical: turn a finite/decaying resource into a sustainable perpetual stream through clever financial engineering.

It looks like my question has gone unanswered. If we are proposing paying validators with something called a stablecoin, we should make sure it is actually that. A stablecoin backed by future revenue is not a stablecoin, it is unsecured debt.

I will be posting this AGAIN because it was flagged as spam and removed. I thoroughly think it should be considered for the purposes of building a global and stable DLT. I will be waiting for a cordial explanation on why this post was considered:

“Your post was flagged as spam : the community feels it is an advertisement, something that is overly promotional in nature instead of being useful or relevant to the topic as expected.”

The post:

On the Role of Purchasing Power and the CRRA Assumption in the DAP Model

Once again, the proposed economic framework appears detached from purchasing power—a foundational component in any tokenomics model. I’d like to re-emphasize this point, as previously discussed in Polkadot’s Economic Resilience and the Role of Inflation.


1. Purchasing Power: The Missing Anchor

The DAP model is built upon deterministic issuance, consumption smoothing, and Constant Relative Risk Aversion (CRRA) assumptions. But it never grounds these flows in the real value of DOT, i.e., its purchasing power.

What constitutes wealth here? The proposal blurs lines without probabilistic weighting. Consider this breakdown:

| Wealth Type | Description | Relation to Purchasing Power | DAP Handling |
| --- | --- | --- | --- |
| Coretime/transaction revenue | Protocol income from blockspace and fees | Scales with adoption; volatile in fiat terms | Inflows to DAP, but not dynamically modeled |
| Minted DOT supply | Seigniorage from issuance (e.g., 55M DOT/year post-2026) | Dilutive if unbacked; erodes value in bear markets | Primary funding, smoothed via reserves |
| Productive value | Security/blockspace utility | Emergent from network use; ties to validator ROI | Implicit in incentives, but static (r = 0) assumption |

Treating “allocation” as “wealth” (with 1:1 equivalence over time) implicitly assumes stable DOT value—yet that assumption itself needs modeling, especially with exogenous shocks (e.g., 50% price drops halving attack costs). Without anchoring to purchasing power, CRRA loses validity for real resilience.

To stress-test: Assign rough probabilities to scenarios—what’s the expected utility under a 30% crash likelihood?

| Scenario (Prob.) | Purchasing Power Δ | Reserve Erosion | Risk Mitigation? |
| --- | --- | --- | --- |
| Bull (40%) | +20% | Low | Revenue boost |
| Stagnant (40%) | 0% | Medium | Baseline smoothing |
| Crash (20%) | −50% | High | Dynamic triggers needed |

2. CRRA and Log Utility: Is It Justified?

Under CRRA, the model assumes:

[equation image: CRRA utility specification]
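The imaged equation above cannot be recovered verbatim; a plausible reconstruction, assuming log utility, an initial wealth W_0, r = 0, and β = 0.91 (this choice reproduces the ~9% annual decline quoted below; the notation is an assumption), is:

```latex
\max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^{t} \log c_t
\quad \text{s.t.} \quad \sum_{t=0}^{\infty} c_t \le W_0
\quad \Longrightarrow \quad
c_t^{*} = (1-\beta)\,\beta^{t} W_0
```

so that \(c_{t+1}^{*}/c_{t}^{*} = \beta = 0.91\), i.e. consumption declines by roughly 9% per year.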

with optimal consumption declining ~9% annually. Yet CRRA presumes a behavioral agent optimizing real utility over time. In this case, there is no “agent”—only a deterministic protocol distributing funds to validators, Treasury, and reserves.

Thus, the log form becomes a formal convenience for tractable solutions—not an economic truth grounded in network behaviors like validator exits or capital flight.

If the goal is resilience or utility, why not consider a convex utility aggregator over dimensions like {security, liquidity, adoption}, derived from actual system metrics? A Bellman equation approach—

[equation image: Bellman equation]

—with state variables like staking rate and DOT price could better capture feedback loops over static math.
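The second imaged equation is likewise unrecoverable; a generic Bellman form with the state variables named in the text (staking rate s, DOT price p, reserve R; the notation is an assumption) would read:

```latex
V(s_t, p_t, R_t) \;=\; \max_{a_t} \Big\{ u(a_t, s_t, p_t)
\;+\; \beta\, \mathbb{E}\big[ V(s_{t+1}, p_{t+1}, R_{t+1}) \,\big|\, s_t, p_t, R_t, a_t \big] \Big\}
```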


3. The Discount Factor β ∈ (0,1)

The introduction of a discount factor (e.g., β = 0.91) creates a manual time lever in protocol design, balancing early ecosystem bets with long-term reserves.

In macroeconomics, β reflects human impatience. In Polkadot, it becomes a governance dial—tunable to shift funds across eras, potentially via OpenGov tracks or advisory committees.

This raises centralization risk. Why not derive β from on-chain observables—like staking velocity, validator cost deviation, or adoption metrics? Endogenizing it via dynamic programming would align with actual network signals, avoiding arbitrary tuning.


4. Governance and Allocation Efficiency

If decisions around the incentive shape, β, or stablecoin collateralization remain in the hands of a small advisory committee, how can stakers verify that allocations optimize network utility?

This setup invites governance capture unless it’s anchored in measurable, on-chain metrics such as:

  • Validator cost coverage ratios (e.g., vs. $2k baseline)
  • Treasury execution efficiency
  • DOT purchasing power over time (e.g., fiat-adjusted reserves)

Without transparent dashboards or oracle-derived indicators, efficiency remains speculative. Proposal: Embed KPI oracles for real-time audits?


5. Stablecoin Linkage: Closing the Loop

If DOT-backed stablecoins will be minted by DAP flows (e.g., overcollateralized at 150% for validator and Treasury fiat needs), purchasing power becomes endogenous to protocol solvency.

Key open questions:

  • Who ensures reserve adequacy amid volatility?
  • How are liquidation risks modeled (e.g., collateral buffers, backstops from strategic reserves)?
  • Where is the feedback loop between inflation, volatility, and real obligations?

Leaving the stablecoin outside the optimization loop creates a disconnect between issuance and real-world economic costs—precisely what purchasing power analysis resolves.


:red_question_mark: Key Questions

  1. What justifies treating allocation as “wealth” for applying CRRA (especially with r = 0)?
  2. Why log utility? Is there a principled rationale, or is it for mathematical simplicity?
  3. Can time preference (β) emerge endogenously from on-chain behavior instead of being fixed?
  4. How can stakers evaluate the efficiency of economic choices made by an unelected committee?
  5. Isn’t purchasing power a required variable in the model’s objective function?
  6. If backed by a stablecoin, how will liquidity and resilience be maintained—e.g., via overcollateralization, reserve buffers, or automated rebalancing?

The DAP is a mathematically elegant construct, but its optimization remains nominal unless grounded in real value. Without integrating purchasing power and liquidity feedbacks, the model risks optimizing abstractions rather than building true economic resilience.

Let’s iterate—@GehrleinJonas, any thoughts on Bellman simulations or Kusama testbeds to further stress-test these dynamics? Community, what’s your take on endogenous β?

Cheers,
Diego (@DiegoTristain)

  1. DAP has no external revenue; it only redistributes inflation. All DAP funds still come from Polkadot’s inflation, with no real cashflow entering the system. It merely reallocates a portion of validator rewards in a different formula, without improving the network’s financial sustainability.
  2. DAP addresses the supply side, while Polkadot’s problem is on the demand side. Currently, blockspace has almost no paying demand. Whether coretime is cheap or not, there is little actual usage. With demand near zero, any allocation formula cannot create adoption.
  3. The timing is wrong: DAP creates an illusion of action without real ecosystem growth. Polkadot needs users, dApps, developers, and enterprise adoption—not more complex economic formulas. While the mechanism itself is not inherently flawed, implementing it under weak demand risks being a case of misplaced priorities.

I want to share another perspective that arose from the presentation of a rhetorical model I used as a form of criticism of the economic-governance direction that the main actors in the Polkadot ecosystem are proposing.

You can check the rhetorical model in this forum thread : 🔐 Token Syndication, Economic Distribution, and Distributed Security in Polkadot - #2 by wariomx

My main Request For Comment forum thread here: RFC-0152: Decentralized Convex-Preference Coretime Market for Polkadot

And the second iteration of my proposed model, extended to “n” goods/tokens: GitHub - onedge-network/Extended_convex_economy: extension of the two-good zero-sum exchange model originally developed in “Emergent Properties of Distributed Agents with Two-Stage Convex Zero-Sum Optimal Exchange Network” (Tristain 2024) to an economy with n ≥ 2 goods.

Does the high concentration of voting power (low f) imply the governance layer is already insecure (BoA > CoC)?

Short answer: Yes, the risk is real under today’s tail dynamics. In power-law distributions, inequality within the active cohort can far exceed that of total holders. If the “Tail” (small voters) is passive, the effective f falls sharply. Your “Tail model” intuition matches the mechanics many of us worry about.

Why f mechanically drops: two scenarios

We often mistake f for a protocol constant (e.g., “51%”, “33%”). In OpenGov, the attack surface depends on the active voting set, not total stake.

Scenario A — High participation / distributed (High f)

  • Thousands of independent voters participate.
  • To capture a decision, an attacker must swing a broad, diverse coalition (moving along/near the line of equality).
  • Result: Effective f ↑; CoC = p • S • f stays high.

Scenario B — Whale domination / low participation (Low f)

  • A handful of entities dominate active voting power (deep inequality curve).
  • If ~5 entities hold ≈60% of the active vote (even with a small share of total S), compromising those few is sufficient.
  • Result: Effective f ↓; CoC collapses toward a simple bribe.
  • Risk: Security flips if BoA outgrows this shrunken CoC.
Figure 1 — Lorenz intuition (ASCII)

Cumulative share of active stake ↑
1.0 |                        /   (line of equality)
    |                       /
    |                      /
    |                     /
    |                  __/        Scenario A: mild curvature (higher f)
    |               __/
    |           ___/
    |       ___/                  Scenario B: deep tail (lower f)
    |______/_________________________
    0      Cumulative share of voters →      1.0
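To make the mechanics concrete, here is an illustrative calculation of CoC = p • S • f under the two scenarios (the stake, price, and effective-f values are invented for the example):

```python
# Cost of Corruption under the two participation scenarios.
# CoC = p * S * f, where f is the *effective* fraction of active voting
# power an attacker must capture. All inputs are hypothetical.

def cost_of_corruption(price_usd: float, active_stake: float, effective_f: float) -> float:
    return price_usd * active_stake * effective_f

S = 800e6  # actively voting stake in DOT (hypothetical)
p = 4.0    # DOT price in USD (hypothetical)

# Scenario A: broad participation, attacker must swing a wide coalition.
print(cost_of_corruption(p, S, 0.50))  # 1.6e9 USD

# Scenario B: ~5 whales dominate the active vote, effective f collapses.
print(cost_of_corruption(p, S, 0.05))  # 1.6e8 USD, 10x cheaper to attack
```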

The fix: dilute politics with economics

We won’t repair a political-centralization problem by politely asking whales to participate less. We need to expand the active set from political clickers to economic producers/consumers—so influence is determined by throughput/GDP-like activity, not static balances.

This is where DAP (Allocation) and the Emergent Properties of Convex Economy / RFC-0152 (Security Coupling) naturally pair as two halves of one whole.

1) Allocation layer — DAP (stability & budgeting)

The DAP proposal inserts a buffer pool between issuance and outflows, smoothing payments and helping fund real-world expenses over time. However, this reveals a hidden variable: k.

  • The Variable: k is the Cambridge Constant (inverse of Velocity, 1/V), representing the “stickiness” or demand for settlement.
  • The Trap (k → 0): Price is derived from p = (GDP / S) • k. If the ecosystem routes payments purely in stablecoins without enforced DOT sinks, velocity becomes infinite and k → 0.
  • The Consequence: Even if network GDP is high, if k is near zero, p crashes. Since CoC = p • S • f, security collapses.
  • The Fix: DAP is only secure when its inflows and outflows ultimately couple back to DOT (via settlement), restoring k.
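The k-coupling argument above can be sketched numerically (the GDP, supply S, and f values are hypothetical):

```python
# Cambridge-style coupling: p = (GDP / S) * k, and CoC = p * S * f = GDP * k * f.
# All inputs are illustrative assumptions.

def dot_price(gdp_usd: float, supply: float, k: float) -> float:
    """Price implied by settlement demand k (inverse velocity)."""
    return gdp_usd / supply * k

GDP, S, f = 10e9, 1.6e9, 0.3

for k in (0.5, 0.1, 0.01):
    p = dot_price(GDP, S, k)
    print(f"k={k:<4}: p=${p:.3f}, CoC=${p * S * f / 1e9:.2f}B")
# As k -> 0 (payments bypass DOT settlement), p and CoC collapse
# even though GDP is unchanged.
```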

2) Security layer — RFC-0152 Extended (re-coupling)

This is where the model secures the inflow and restores k. It replaces human/political pricing (auctions) with a mathematical “siphon”—a convex geometry that naturally drains value from the application layer into the settlement layer (DOT) through optimal matching.

The Mechanism: Atomic Coupling via Convex Geometry
RFC-0152 prevents users from bypassing the economic logic of the chain via a Convex Clearinghouse—a protocol-level algorithm where the only way to trade resources is to pass through a specific mathematical “gate.”

  • The “Reaction” (Universal Input): Agents submit two things: a non-zero Asset Endowment (e.g., “I have 5 USDC and 0.1 Coretime”) and a Preference (α) (e.g., “I want 50% Coretime / 50% USDC”).
  • The “Diffusion” (Convex Solver): The network matches buyers and sellers through a convex optimization function.
    • Influence is Bounded: All endowments secure the whole system. An agent’s influence on outputs is strictly limited to their declared initial endowments, transformations (value creation), and their preference parameter α. There is no other way to modify the outputs.
    • Stability: This conversion is atomic and deterministic. A user cannot “bribe” a validator to receive Coretime; the protocol mathematics is the exchange rate. Stability is guaranteed for the whole system if enough trading connections are met.
Figure 2 — Convex Clearinghouse (ASCII)

             (Endowments + Preference α)
[ Users / Builders ] ───────────► [ Convex Solver ] ◄────────── [ Liquidity / Peers ]
                                         │
                                         ▼
             (P2P Endogenous Atomic Swap + Forced Settlement)
                                         │
                 ┌───────────────────────┴───────────────────────┐
                 ▼                                               ▼
      [ User gets Coretime ]                           [ DOT Sink / Treasury ]

This restores CoC > BoA via two channels:

  1. Diversification (f ↑): Economic activity syndicates control across many builders/operators. The effective coalition size required to corrupt grows with GDP-like activity, which is harder to monopolize than static governance stake.
  2. Coupling (p ↑ via k): By forcing settlement through the convex solver, we enforce a lower bound on k.
    • Agents effectively bid up the demand for blockspace, flowing value to DOT holders/treasury.
    • They freely match their supply/demand in the market, but they operate under the strict commitment constraints defined for the model.
    • Result: k remains healthy, so as GDP grows, p grows, and Security scales with Usage.

I was recently commenting on a Perpetual DEX–style analogy for the DAP in another thread, but I believe this comment by @joepetrowski is even more directly suited to that line of reasoning.

By comparing the Dynamic Allocation Pool (DAP) to the architecture of a Perpetual DEX Vault, it becomes clearer why the idea of pre-issuing the full capped supply and using it as collateral is not just risky — it’s potentially counterproductive to the goals of stability and sustainable tokenomics.

Opened to discussion:


I’ve argued that @jonas ’ DAP proposal is effectively a Perpetual DEX Vault managed by a protocol-level algorithm. If we map the components, the isomorphism becomes clear:

  • The Vault → the Allocation Pool (buffering volatility)
  • Funding Rate → the smoothing parameter (β), which adjusts outflows to preserve solvency
  • Insurance Fund → the underlying Inflation + Treasury creating the “security budget”
| Perpetual DEX Component | DAP Component (Jonas’ Proposal) | The Mechanism |
| --- | --- | --- |
| Liquidity Vault (GLP/ALP) | The Allocation Pool | A buffer of assets (DOT) used to absorb shocks between inflow (revenue) and outflow (spending/rewards). |
| Funding Rate / Rebalancing | Smoothing Parameter (β) | A variable rate that adjusts to balance the “inventory.” In a Perp DEX, the funding rate penalizes the heavy side to force equilibrium. In DAP, β adjusts the “spend” to keep the pool solvent over time. |
| Insurance Fund | Treasury / Inflation | The backstop. If the Vault (Pool) runs dry due to a “price crash” (revenue drop), the system prints more DOT (inflation) or drains the Treasury to cover the liability. |
| Traders (Long/Short) | Coretime Buyers / Validators | The participants extracting value from the Vault. |

The Economic Problem with DAP proposal:

However, contrasting the DAP proposal with a “Perp DEX”-style architecture, given their similarity, reveals the exact economic trap I warned about in my critique of Jonas’ proposal here:

In a functional Perp DEX (e.g., GMX, Hyperliquid), the Vault is sustainable because external traders pay real fees (via PnL or funding) into the system. There is a purchasing power transfer from the outside world into the Vault.

In the current DAP model, however, we are building a “Perp DEX” where the protocol trades primarily against itself.

We treat blockspace allocation as “revenue,” but if that revenue is sourced from internal rotation (re-staking, treasury swaps, validator churn) — rather than from external market demand (Cambridge Coefficient k) — then the Vault is simply washing volume.

As I mentioned in the DAP thread:

“When a system fails to couple its ‘GDP’ (activity) back to its ‘Reserve Asset’ (DOT) via enforced sinks, the velocity of money becomes infinite, k → 0, and the Cost of Corruption (CoC) collapses.”


If we implement a “Perp DEX” architecture without a Purchasing Power Anchor, then we’re automating the devaluation of the Insurance Fund (i.e., DOT holders). It becomes a sophisticated machine to burn our own equity with no innovation value.