Diego raises a point that deserves more direct engagement from DAP proponents: the model optimizes nominal token flows, but network resilience ultimately depends on real purchasing power.
To put it concretely: if DOT drops 50%, the protocol issues the same number of tokens, but validators' costs (hardware, bandwidth, labor) are fiat-denominated. The "smoothed consumption" the model achieves exists only in DOT-denominated terms; it doesn't smooth the network's actual economic capacity to pay for security.
That said, I’d push back slightly on the proposed remedies. Embedding DOT price as a state variable in a Bellman framework sounds theoretically sound, but it creates oracle dependencies and potential feedback loops that could introduce new instabilities. If issuance responds to price, you risk pro-cyclical dynamics where falling prices trigger reduced issuance, which could further depress staking yields, which could accelerate sell pressure.
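To make the feedback concern concrete, here's a deliberately toy simulation of that loop. Every parameter (the price sensitivity of issuance, the yield-to-price coupling) is hypothetical and uncalibrated; the point is only that a price-aware issuance rule can turn a one-off price shock into a self-reinforcing decline:

```python
# Toy illustration of the pro-cyclical loop (all parameters are
# hypothetical, not calibrated to DOT or any DAP spec).

def simulate(price0: float = 0.5, steps: int = 10,
             sensitivity: float = 0.5) -> list:
    """Start from a shocked price; each step, price-aware issuance
    cuts rewards, the lower yield adds sell pressure, and the price
    falls further."""
    price = price0
    path = []
    for _ in range(steps):
        issuance = price ** sensitivity      # issuance scales with price
        staking_yield = issuance             # yield tracks issuance 1:1
        price *= 0.9 + 0.1 * staking_yield   # lower yield -> price decline
        path.append(round(price, 4))
    return path

path = simulate()
```

Under these assumptions the price path is strictly decreasing: the shock never damps out, because each issuance cut feeds the next leg down.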
A middle path worth exploring: rather than making the core model price-aware, could the reserve mechanism include explicit drawdown triggers based on purchasing-power thresholds? Something like “if validator cost coverage falls below X% of baseline, release Y from strategic reserves”—rules-based, transparent, but anchored in real costs rather than nominal allocations.
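A minimal sketch of what such a trigger could look like, assuming a governance-set cost baseline (the names, floor, and release fraction are illustrative, not part of any DAP specification):

```python
# Hypothetical rules-based drawdown trigger: release from strategic
# reserves only when the fiat value of validator rewards falls below
# a governance-set fraction of baseline operating costs.

COVERAGE_FLOOR = 0.80    # "X%": minimum acceptable cost coverage
RELEASE_FRACTION = 0.10  # "Y": share of reserves released per trigger

def reserve_release(reward_value_fiat: float,
                    baseline_cost_fiat: float,
                    reserves: float) -> float:
    """Return the amount to release from reserves this epoch."""
    coverage = reward_value_fiat / baseline_cost_fiat
    if coverage < COVERAGE_FLOOR:
        return RELEASE_FRACTION * reserves
    return 0.0

# Rewards cover only 60% of baseline costs -> trigger fires.
print(reserve_release(60.0, 100.0, 1_000.0))  # 100.0
# Rewards cover 90% of costs -> no release.
print(reserve_release(90.0, 100.0, 1_000.0))  # 0.0
```

Because the rule is a pure function of observable quantities, anyone can verify ex post whether a release was justified, which is most of the appeal over discretionary intervention.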
The cost baseline itself could be set via governance vote on a quarterly cadence rather than pulled from a price oracle. Validator operating costs (hardware, electricity, bandwidth) move slowly compared to token prices, so a low-frequency governance update is sufficient. This avoids the real-time oracle dependency while still grounding the model in economic reality. And validators themselves become a natural check—if governance sets costs too low, they’ll push back (or exit); if someone tries to inflate costs artificially, token holders resist. Adversarial balance rather than a single point of failure.
It’s also worth noting that this isn’t the only approach being discussed in the ecosystem. The burn-based tokenomics RFC for Kusama takes a different tack entirely—rather than optimizing allocation via complex modeling, it proposes simple burn mechanisms that create demand-responsive scarcity automatically. High usage leads to more burns, low usage doesn’t. The market determines scarcity rather than a preset formula. Whether or not you prefer that model, it highlights that there are simpler alternatives that achieve purchasing-power feedback without the CRRA assumptions Diego is questioning.
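For contrast, the burn idea reduces to something like the following (the burn rate is purely illustrative and not taken from the RFC): supply pressure is a direct function of usage, with no oracle, no allocation formula, and no utility assumptions.

```python
# Minimal sketch of demand-responsive burning in the spirit of the
# Kusama RFC discussion above (BURN_RATE is a hypothetical parameter).

BURN_RATE = 0.8  # fraction of each collected fee that is burned

def apply_fees(supply: float, fees_collected: float) -> float:
    """Burn a share of collected fees; return the new total supply."""
    return supply - BURN_RATE * fees_collected

supply = 1_000_000.0
high_usage = apply_fees(supply, 10_000.0)  # heavy usage -> larger burn
low_usage = apply_fees(supply, 100.0)      # light usage -> smaller burn
```

The scarcity response here is automatic: a usage surge burns more supply in the same epoch, with no governance action or model update required.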
Curious whether any simulations have stress-tested the DAP under historical volatility scenarios (e.g., the 2022 drawdown).