Invitation to Critically Evaluate Core Time Pricing Model Framework

Hello,

As the next step in seeking constructive criticism of my proposed solution to the core time pricing problem, I would like to advance the discussion by clarifying the mathematical and economic properties that a satisfactory solution should ideally exhibit. My intention is to open these characteristics to peer scrutiny and collaborative refinement, particularly in relation to embedding such a model into decentralized pricing systems.

Below are the current properties exhibited by the solution I have derived. I propose this list as a starting point for discussion:

  1. Decouples agents’ preference layer from settlement price in a deterministic manner;
  2. Eliminates traditional price/quantity assumptions, instead modeling preferences as convex functions (as opposed to the more restrictive semi-linear assumptions);
  3. Ensures smooth and differentiable price evolution, allowing for continuous adjustment;
  4. Converges to a unique and stable equilibrium price under broad initial conditions;
  5. Operates in a fully peer-to-peer (P2P) architecture, well-suited for decentralized environments;
  6. Remains computationally efficient and cheap per transaction, even at scale;
  7. Achieves Pareto optimality, ensuring no agent can be made better off without making another worse off.

This solution stems from an original research initiative I began in 2000 during my economics studies in Chile. I am currently exploring how its formal structure maps isomorphically onto field elements, preserving core functional relationships—a direction that is yielding promising theoretical results.

The full mathematical framework, along with a simulation of its application in a cellular automata scenario (used here as a pedagogically clean presentation of the emergent dynamics), is available here:
:link: https://github.com/onedge-network/Emergent_Properties_paper — repository for the latest version of the paper: "Emergent Properties of Distributed Agents with Two-Stage Convex Zero-Sum Optimal Exchange Network"

Notably, even in fully randomized transaction environments—where agent pairs and parameters are drawn arbitrarily—the system exhibits the same convergence behavior. This suggests that the cellular automata is not essential to the mechanism, but rather an illustrative substrate that preserves the model’s underlying economic properties across generalized conditions.

I look forward to your thoughts, critiques, and ideas on how this list of characteristics might be improved or rigorously tested in other economic or computational settings.

Best regards,


Thank you for your post and your efforts to improve the existing core time pricing model.
Could you explain how your proposed model works—perhaps using a simple example?

I skimmed through your PDF (admittedly not in great detail), but I couldn’t immediately grasp how the model functions.


The Coretime pricing RFC is open here for your evaluation: https://github.com/polkadot-fellows/RFCs/pull/17


Thank you for your interest and for taking the time to look into the proposal.

In simple terms, the model I’ve developed is a decentralized pricing mechanism that tries to solve the “core time pricing” dilemma by detaching agents’ subjective preference commitments (the first stage) from the actual price-formation mechanism (the second stage), while still achieving a globally stable, fair, and efficient outcome.

At its heart, the model assumes that:

  • Each agent selects a parametric preference over two assets (e.g., coretime vs. token) before interacting, expressed as a single rational number between 0 and 1.
  • These preferences are convex but not semi-linear, giving room for richer utility structures.
  • When agents interact, their transaction is settled deterministically, based on an internal Pareto optimal rule that doesn’t assume an external market price.
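To make the second stage concrete, here is a minimal sketch of one deterministic pairwise settlement in Rust (the language used for the paper’s simulations). It assumes Cobb-Douglas utilities U = a^α · b^(1−α) as a stand-in for the paper’s convex preferences; the actual settlement rule in the paper may differ, and `Agent` and `settle` are illustrative names, not identifiers from the repository.

```rust
#[derive(Clone, Copy, Debug)]
struct Agent {
    a: f64,     // holdings of asset A (e.g. coretime)
    b: f64,     // holdings of asset B (e.g. token)
    alpha: f64, // declared scalar preference, strictly between 0 and 1
}

/// Deterministic pairwise settlement: clear the two-agent market at the
/// unique price implied by the declared parameters, then move both agents
/// to their demanded (Pareto-optimal) allocations.
fn settle(x: Agent, y: Agent) -> (Agent, Agent, f64) {
    // Market clearing for asset A has the closed form
    //   p = (alpha_x*b_x + alpha_y*b_y) / ((1-alpha_x)*a_x + (1-alpha_y)*a_y)
    let p = (x.alpha * x.b + y.alpha * y.b)
        / ((1.0 - x.alpha) * x.a + (1.0 - y.alpha) * y.a);
    let demand = |g: Agent| {
        let wealth = p * g.a + g.b; // value of holdings in units of B
        Agent { a: g.alpha * wealth / p, b: (1.0 - g.alpha) * wealth, ..g }
    };
    (demand(x), demand(y), p)
}

fn main() {
    let x = Agent { a: 10.0, b: 2.0, alpha: 0.3 };
    let y = Agent { a: 1.0, b: 8.0, alpha: 0.7 };
    let (x2, y2, p) = settle(x, y);
    // Zero-sum exchange: totals of both assets are conserved.
    assert!((x2.a + y2.a - 11.0).abs() < 1e-9);
    assert!((x2.b + y2.b - 10.0).abs() < 1e-9);
    println!("settlement price p = {p:.4}");
}
```

Note how the price `p` falls out of the declared parameters after the fact; neither agent posts a bid or quotes a price up front.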

Because of the structure of the interaction, and how price is inferred after the fact (rather than imposed up front), the global system evolves with a smooth, differentiable price curve that always converges to a unique equilibrium—no matter the order or randomness of agent interactions.

I use cellular automata (CA) in the paper as a simplified sandbox: it helps to illustrate how these individual preference-matching interactions aggregate into emergent global pricing behavior. But the results aren’t bound to the CA setting—randomized simulations confirm that the same convergence happens in fully general, non-grid peer-to-peer topologies.

You can check the mathematical framework description in this paper:

And here is a render of its evolution over time for the illustrative case of a continuous cellular automaton with four arbitrary “hungry” spots of one asset or the other, with everything else randomised:
https://github.com/onedge-network/Emergent_Properties_paper/raw/refs/heads/main/example_sequences/first.mp4

Thank you for pointing me to RFC #17 — I’ve reviewed it closely, and I appreciate the clarity it brings to the proposed renewal bidding mechanism.

What I’d like to propose for discussion is whether the set of properties exhibited by the emergent pricing model I’ve developed may offer a more robust foundation—particularly when considering anti-trust resilience, decentralization, and peer-level fairness.

Instead of relying on external price signals or structured auctions, my model:

  • Detaches preference expression from price formation in two stages,
  • Assumes no predefined quantities or price anchoring,
  • Converges on a unique, stable price through deterministic peer-to-peer interactions,
  • Guarantees Pareto optimality across random network conditions,
  • Maintains smooth price dynamics without stepwise or externally triggered resets.

Given these properties, I believe it is worth evaluating whether such a mechanism aligns better with Polkadot’s decentralization ethos—especially under the lens of avoiding structural advantages that bidding systems might inadvertently reinforce (e.g., front-running, information asymmetry, or core-hoarding).

To this end, I’m open to collaboratively refining the desired properties we want a Coretime pricing model to exhibit. If this list (or a version of it) is considered desirable, I’d be glad to work toward incorporating these principles more formally, perhaps through an alternative or parallel RFC path.

This creates a purely decentralized mechanism for Coretime valuation where pricing is not imposed but arises from endogenous interactions. In contrast to structured bidding, it avoids issues like hoarding, frontrunning, or asymmetry in renewal conditions.

I’m very open to discussing and refining what set of properties we ideally want in a Coretime pricing mechanism. If the list above (or a variation of it) is desirable, I believe this approach could either serve as a full alternative to the current bid-based design, or be used in simulation or testbed settings to benchmark financial behaviour and convergence properties.

Thank you — I understand it a bit better now.
But how would a customer come up with such a convex preference function?

At first glance, it seems quite complex to define a curve like that and understand its implications.


Why do I believe this model has an advantage?

Because each agent’s convex preference parameter defines a convex set of acceptable prices they are willing to evolve toward across future exchanges—not just a single bid or discrete valuation. This means that every peer-to-peer interaction implicitly narrows the feasible price interval for each agent, progressively aligning local valuations through deterministic optimization.

Due to the mathematical properties of convex programming applied at each transaction level, these interactions naturally converge toward a unique global price—even in the absence of global consensus rounds or centralized auction phases. Local interactions alone are sufficient to coordinate system-wide price stabilization, making this model especially suitable for decentralized, trust-minimized environments.
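A toy simulation can illustrate (though not prove) this convergence claim. The sketch below assumes a Cobb-Douglas pairwise clearing rule as a stand-in for the paper’s actual settlement mechanism, and `Lcg` is a small deterministic RNG invented here so the example has no external dependencies. It measures the spread of local settlement prices per round and shows it narrowing under fully random pairings:

```rust
// Simple deterministic linear congruential generator (no external crates).
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        self.0 >> 33
    }
    fn below(&mut self, n: usize) -> usize { (self.next() as usize) % n }
}

/// Settle one pair of agents (a, b, alpha) at the Cobb-Douglas clearing
/// price; both agents end up with the same implied local price afterwards.
fn pair_price(x: &mut (f64, f64, f64), y: &mut (f64, f64, f64)) -> f64 {
    let p = (x.2 * x.1 + y.2 * y.1) / ((1.0 - x.2) * x.0 + (1.0 - y.2) * y.0);
    for g in [&mut *x, &mut *y] {
        let w = p * g.0 + g.1;
        g.0 = g.2 * w / p;
        g.1 = (1.0 - g.2) * w;
    }
    p
}

fn main() {
    let mut rng = Lcg(42);
    // 50 agents with heterogeneous endowments and scalar preferences.
    let mut agents: Vec<(f64, f64, f64)> = (0..50)
        .map(|_| (1.0 + (rng.below(100) as f64) / 10.0,
                  1.0 + (rng.below(100) as f64) / 10.0,
                  0.05 + 0.9 * (rng.below(100) as f64) / 100.0))
        .collect();
    let spread = |ps: &[f64]| {
        let (mut lo, mut hi) = (f64::INFINITY, f64::NEG_INFINITY);
        for &p in ps { lo = lo.min(p); hi = hi.max(p); }
        hi - lo
    };
    let mut round_spread = Vec::new();
    for _ in 0..200 {
        let mut prices = Vec::new();
        for _ in 0..25 {
            let i = rng.below(50);
            let mut j = rng.below(50);
            while j == i { j = rng.below(50); }
            // split_at_mut yields two disjoint mutable references.
            let (lo, hi) = (i.min(j), i.max(j));
            let (left, right) = agents.split_at_mut(hi);
            prices.push(pair_price(&mut left[lo], &mut right[0]));
        }
        round_spread.push(spread(&prices));
    }
    println!("price spread: first round = {:.4}, last round = {:.6}",
             round_spread[0], round_spread[round_spread.len() - 1]);
    assert!(round_spread[round_spread.len() - 1] < round_spread[0]); // dispersion narrowed
}
```

The mechanism behind the narrowing is that each pairwise settlement leaves both agents at a common implied price lying strictly between their previous ones, so random local interactions behave like a gossip-averaging process.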

Importantly, the model’s initial setup is lightweight: each agent specifies a starting endowment of assets A and B (e.g., Coretime and a utility token), along with a scalar preference parameter.

The continuous cellular automata render provided is a visual demonstration of this process: the evolving magenta gradient represents the emergent local price ratio across the network, with darker areas converging faster. It’s a pedagogical tool to reveal how structured price emergence arises from an unstructured starting condition—showing how preference diffusion and equilibrium discovery occur simultaneously and smoothly.

It could be illustrative to show how the average price evolves under this scenario:

Here is an example graph rendered using the proposed model:

You’re absolutely right — and that’s exactly where I believe the value of this model lies.

Despite the apparent complexity of convex preferences, the mechanism does not require agents to explicitly define or even understand a utility curve in the traditional sense. All that’s needed is a simple scalar preference parameter (e.g., a rational number between 0 and 1) and their initial asset holdings. From there, the system handles everything else.

What’s powerful here is that, once agents declare their convex preferences and assets, every peer-to-peer exchange deterministically contributes to a global Pareto-optimal outcome. In a network with perfect information after the fact (i.e., once preferences are set and interactions begin), the result is network-wide convergence to a stable, fair, and unique price—without needing agents to strategize, bid, or anticipate others’ behaviour.

In other words:

  • Simplicity at the edge (for each agent), and
  • Complex coordination in the aggregate (through deterministic optimization and emergent price discovery).

That’s where I believe the core value proposition of this approach lies.


If the question is whether this model stems from a solved convex optimization problem, then yes—it does.

I originally derived the formulation back in 2001, and I’ve been analyzing its implications ever since. More recently, I’ve been validating its behavior through simulations using Rust—visualized with cellular automata and extended to fully randomized setups where pairwise interactions occur with arbitrary order and parameter values.

The result is consistent:

The system always converges to a unique equilibrium price, guaranteed by the structure of the convex programming solution.

The cellular automata are just a clean visualization layer—the behavior generalizes to any topology under the same assumptions. I’d be happy to walk through the proof structure or simulation logic if there’s interest.

We only have a single commodity type really, well even stuff like on-demand vs full cores seems hierarchical, which maybe limits the applicability of anything like this, no? It’s kinda different at different times of course, but again that’s fairly cut n dried.


Hey, interesting framework. I appreciate the direction you are taking here, especially the focus on convex preferences and Pareto efficient interactions instead of auctions.

That said, I’m trying to understand how this translates into something implementable. A few specific points I would like clarification on:

Who exactly are the agents interacting in this setup? Since the network is the sole seller of coretime, it is not immediately clear how “pairwise interactions” play out in practice. Is that meant literally or more as a modeling abstraction?

How is demand meant to be expressed by buyers? Would this be on-chain as a function or just discrete bids?

What is the actual pricing mechanism? Is there a rule or function that adjusts the price based on observed demand over time?

Is there any assumed network topology or P2P discovery protocol involved, or is that just a conceptual device to reason about convergence?

Do you envision this as a replacement for auctions in the coretime market on Polkadot, and if so, what would a minimal viable version of it look like in a runtime context?

Just trying to understand how to move from theoretical framing to protocol design. Any additional structure, simulation, or even a rough outline would be helpful.


Thank you for your reply.

That’s a valid observation — Coretime does function as a single, uniform commodity at the protocol level. But I’d argue that’s precisely what makes this kind of model both applicable and valuable.

My proposal assumes a two-asset economic framework:

  • Coretime, treated as a uniform good, and
  • a utility token (potentially DOT or a derivative), used as the medium of exchange and valuation.

This setup is grounded in classical general equilibrium theory under convex preferences—a well-established and analytically tractable assumption in economics. In this framework, it’s not the number of distinct commodities that matters, but rather the heterogeneity of agent preferences over the relative value of the two assets. Even with just one resource and one token, meaningful and dynamic pricing emerges.

So while the commodity itself may be fixed, the valuation space remains rich. The scalar preference parameter allows agents to express situational or temporal urgency, enabling decentralized coordination without the need for additional market layers or instruments.

In fact, the simplicity of the underlying commodity is a strength: it allows the protocol to focus on fair, efficient, and emergent price formation, without introducing unnecessary governance overhead or market complexity. The model remains robust across network states and usage conditions, with convergence guaranteed regardless of interaction order or topology.

In short: even in a system with a single uniform commodity, when paired with a utility token and convex agent preferences, we can unlock the full potential of decentralized economic coordination—either through local peer-to-peer interactions, or as a deterministic global outcome that is verifiable on the consensus layer, since each transaction step is uniquely determined by the agents’ declared parameters.

Hi @labormedia

thanks for your efforts to improve the current coretime situation. I’ve already mentioned in the gh-issue of RFC17 that the proposed market design has undergone extensive review and discussions and, given the urgency of the situation, I am formulating my reply with that in mind. I have a few points on what I could gather about your proposed idea.

1. Missing overview and details

In general, this seems to be more the foundation for a mechanism that still needs to be applied to the coretime situation on Polkadot, which means specifying concrete specifications for the overall market structure, the agents’ interaction layer, and how these processes would be implemented on-chain.

2. Practical challenges of rich preference specification

You emphasize that agents must declare potentially complex convex preference functions (that allow for a “richer utility structure”). While this appears elegant in theory, because it allows solving for a general-equilibrium outcome in a Pareto-optimal way, in practice it imposes a heavy burden on users. If humans could reliably specify rich utility curves, we’d already see systems where users state these preferences and let an optimizer handle allocation. Yet this type of allocation mechanism is virtually absent in reality. This is why other mechanisms, such as auctions, are so prominent. Even there, bidders must be somewhat clear about their preferences (their valuation), which is often challenging despite being a single value. Yet auctions have been proven countless times to be an efficient (the bidders with the highest valuations get the goods) and revenue-effective mechanism.

3. Market inefficiencies and ethos

Here, I mostly refer to this quote:

Given these properties, I believe it is worth evaluating whether such a mechanism aligns better with Polkadot’s decentralization ethos —especially under the lens of avoiding structural advantages that bidding systems might inadvertently reinforce (e.g., front-running, information asymmetry, or core-hoarding).

I don’t think the “decentralization ethos” is a suitable dimension along which to compare the proposals. The important part is that both markets are supposed to be permissionless, with transparent on-chain rules that treat every participant equally. That is the case here, and in the absence of anything like “whitelisting”, it is not something to worry about.

Apart from that, you’re also implying that your proposal provides better resistance to front-running, information asymmetry (what do you mean by that?), and core-hoarding. But without more details about the market as a whole, these claims can’t be tested. What can be said is that the design proposed in RFC17 is somewhat resistant to front-running (there is a bit of discussion in the gh-issue on that), and core-hoarding is prevented by applying the market-determined price to renewers. In addition, I don’t see how “pricing is imposed” there; it also arises endogenously through the interaction of bidders.

I don’t mean to discourage you from further developing your idea, but at this stage it appears to be too premature to consider it as an alternative to RFC17. The goal is to act soon and start implementing. Since there is a lot at stake, going for a more traditional approach with an auction-based system also appears to be preferable. If you flesh out your proposal and it stands a thorough review by the community (and it has clear advantages above the current design), it might be interesting to apply it to Kusama as an experiment and gather real-world data there.


Thank you—I appreciate the careful consideration and excellent questions, which really help bridge the theoretical modeling with practical protocol implementation. Let me elaborate clearly on your specific points:

1. Who exactly are the interacting agents?

The interacting agents would primarily be parachains, parathreads, or smart contracts—effectively, any network entity consuming or trading Coretime. While Coretime issuance occurs at the protocol layer, the idea is that price discovery is decentralized, emerging organically through structured pairwise (or small-group) interactions between agents. Thus, pairwise interactions aren’t just abstract modeling—they would be explicitly realized through runtime logic or smart contracts handling these transactions directly.

2. How is demand expressed practically by buyers?

Demand is not expressed through discrete bids or complex curves explicitly. Instead, each agent simply specifies two straightforward things:

  • An initial balance of assets (Coretime and tokens).
  • A single scalar preference parameter (α ∈ [0,1]), representing their relative preference between these two assets.

This scalar simplifies the user experience dramatically, while enabling smooth, continuous price discovery. Preferences inform every interaction deterministically, without iterative bidding rounds, significantly reducing the cognitive burden on agents.

3. What is the actual pricing mechanism?

The pricing mechanism is endogenous and emergent rather than imposed externally, i.e. there are no pre-set price curves or manual adjustments. Instead, each pairwise or local group interaction is solved deterministically as a convex optimization problem, ensuring Pareto-optimal outcomes. The aggregate result of these local interactions converges reliably to a globally coherent equilibrium price—without requiring centralized coordination or periodic adjustment.

4. Assumed network topology and practical feasibility

The model is explicitly topology-agnostic. Convergence is robust across randomized, deterministic, or partially connected network topologies. The cellular automata visualization included earlier is simply a conceptual and intuitive illustration, demonstrating how decentralized preference interactions aggregate to global convergence.
However, a specific deterministic ordering can also be implemented easily, e.g. sequencing interactions by greatest impact on price convergence, which yields deterministic, verifiable outcomes once agents’ preference and asset-holding commitments are in place.

5. Could this replace auctions practically, and what would an MVP look like?

Absolutely—this mechanism could serve as a practical replacement or alternative to auction-based Coretime markets, particularly emphasizing decentralization and continuous price formation. A minimal viable version might include:

  • A runtime pallet or smart-contract module capturing each agent’s initial balances and scalar preferences as commitments at a first stage.
  • A deterministic sequencing mechanism prioritizing interactions by convergence efficiency.
  • A convex optimization solver (simple and computationally efficient at transaction level) to determine marginal exchange ratios at each step, gradually producing a stable global equilibrium price.
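As one possible shape for such a module, here is a hypothetical sketch in plain Rust (not actual FRAME pallet code). `Commitment`, `local_price`, and `settle_all` are invented names, and a Cobb-Douglas clearing rule is assumed as a placeholder for the paper’s convex settlement step; the sequencing heuristic (largest price disagreement first) is one example of a deterministic ordering:

```rust
use std::collections::BTreeMap;

#[derive(Clone, Copy, Debug)]
struct Commitment {
    coretime: f64, // initial balance of asset A
    tokens: f64,   // initial balance of asset B
    alpha: f64,    // scalar preference, strictly between 0 and 1
}

/// Each agent's implied local price of coretime in tokens; used both for
/// deterministic sequencing and for detecting convergence.
fn local_price(c: &Commitment) -> f64 {
    c.alpha * c.tokens / ((1.0 - c.alpha) * c.coretime)
}

/// Stage 2: repeatedly settle the pair with the largest price disagreement
/// until all local prices agree (or an iteration budget is exhausted).
fn settle_all(book: &mut BTreeMap<u32, Commitment>, tol: f64) -> f64 {
    for _ in 0..10_000 {
        let mut ids: Vec<u32> = book.keys().copied().collect();
        ids.sort_by(|x, y| local_price(&book[x]).partial_cmp(&local_price(&book[y])).unwrap());
        let (lo, hi) = (ids[0], ids[ids.len() - 1]);
        let (cl, ch) = (book[&lo], book[&hi]);
        if local_price(&ch) - local_price(&cl) <= tol {
            break;
        }
        // Pairwise Cobb-Douglas clearing price (assumed settlement rule).
        let p = (cl.alpha * cl.tokens + ch.alpha * ch.tokens)
            / ((1.0 - cl.alpha) * cl.coretime + (1.0 - ch.alpha) * ch.coretime);
        for id in [lo, hi] {
            let c = book.get_mut(&id).unwrap();
            let w = p * c.coretime + c.tokens;
            c.coretime = c.alpha * w / p;
            c.tokens = (1.0 - c.alpha) * w;
        }
    }
    local_price(book.values().next().unwrap())
}

fn main() {
    let mut book = BTreeMap::new();
    book.insert(1, Commitment { coretime: 10.0, tokens: 2.0, alpha: 0.3 });
    book.insert(2, Commitment { coretime: 1.0, tokens: 8.0, alpha: 0.7 });
    book.insert(3, Commitment { coretime: 5.0, tokens: 5.0, alpha: 0.5 });
    let p = settle_all(&mut book, 1e-9);
    // After settlement, all agents' local prices agree on a single value.
    let prices: Vec<f64> = book.values().map(local_price).collect();
    assert!(prices.iter().all(|q| (q - p).abs() < 1e-6));
    println!("converged price = {p:.6}");
}
```

In an on-chain setting, the entire stage-2 run is a pure function of the committed `(coretime, tokens, alpha)` triples, so any validator can re-execute and verify the resulting price.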

This approach offers meaningful simplification in implementation compared to auction mechanisms, reducing both governance complexity and strategic uncertainty for participants.

I’m open to further detailing this implementation through simulations or providing more structured outlines to concretely bridge this theoretical framework to actionable protocol design.


Just fyi .. We’ll likely add 100s of cores over the next couple of years, because the spammening had the no-show blow-up set way too high, due to its fix not yet being merged at the time. We could have explosive ecosystem growth and still not saturate capacity, meaning everyone would pay the reserve price for a couple of years. At present, we merely need the reserve price high enough that nobody griefs us. Auctions will matter in a few years, but the reserve price bug was the immediate problem, so it was important to solve that first.


The coretime model in Polkadot should be fundamentally reconsidered or scrapped entirely, not because of pricing inefficiencies, but due to a deeper structural issue: bad actors are consistently securing coretime ahead of others, regardless of cost. This behavior undermines the fairness and accessibility that the system aims to provide. The problem isn’t market-based; it’s systemic and driven by actors who exploit timing and access advantages to monopolize coretime slots, effectively negating the benefits of open competition.

A better solution is to decouple finalization and execution from strict coretime dependency. Parachains should be empowered to finalize their own blocks independently when they have the capacity and intent to do so. However, they should still retain the option to utilize the relay chain for block finalization when needed. This hybrid approach enhances flexibility, promotes scalability, and prevents unnecessary reliance on a congested or unfairly accessed coretime marketplace.

By allowing parachains to finalize independently and only invoking the relay chain for finalization or shared consensus tasks, we can streamline operations and limit the use of XCM (Cross-Consensus Messaging) calls to scenarios where inter-chain coordination via the relay is truly required. This also enables a more efficient “relay core on demand” model, where resources are used optimally rather than being hoarded or front-run by exploitative actors.

This shift would align more closely with the original vision of decentralized interoperability, ensuring that all parachains, regardless of influence or capital, can participate fairly and efficiently in the network.


Hi Jonas, thank you again for your engagement and for reinforcing the importance of urgency and clarity in the path forward.

In parallel to RFC17’s strengths and readiness, I’d like to highlight why an auction-based model—specifically Dutch auctions—may introduce structural challenges in the context of Polkadot Coretime, and how the alternative model I’m proposing aims to address them constructively:

:warning: Structural Disadvantages of Dutch Auctions in Coretime Allocation:

  • Front-running and timing exploitation: Dutch auctions reward speed over intent. Actors with superior infrastructure, information access, or automation can consistently outpace others, leading to unfair early captures of Coretime slots—regardless of actual network need or cooperative behaviour.
  • Price manipulation and strategic hoarding: Well-capitalized entities can secure large allocations early, not necessarily for usage, but to speculate, gatekeep, or delay competitors. This undermines the goals of open participation and efficient resource allocation.
  • Systemic rigidity and power concentration: Dutch auctions tend to entrench competitive asymmetries. Participants with better forecasting capabilities or market influence are structurally advantaged, perpetuating monopolistic dynamics that can distort access over time.

:white_check_mark: What My Proposed Model Offers Instead:

  • Eliminates timing advantages: Prices do not result from who acts first, but emerge deterministically from the interaction of agents’ declared preferences and asset commitments. There is no advantage in racing others to place a bid—only in contributing transparently to the system.
  • Prevents manipulation and hoarding: Because prices arise from Pareto-optimal convex interactions rather than discrete bidding, no single actor can manipulate outcomes or corner the market. Every transaction reflects mutual utility optimization, not strategic dominance.
  • Promotes decentralization and equity by design: The system distributes access as a natural consequence of agent diversity, not through centralized auction scheduling. Convergence to a global price occurs regardless of interaction order or topology, ensuring fairness across all participants—independent of capital, timing, or influence.

This model reflects a shift away from competitive exclusion toward collaborative equilibrium , where price and access are co-produced by the network itself. It aligns with Polkadot’s original vision of permissionless interoperability and offers a structurally fairer alternative to auction-based designs—especially in a multi-agent, decentralized setting.

I agree with you this direction is worth exploring in simulation and, potentially, as a live pilot in environments like Kusama where innovation at the protocol layer is welcomed.

Thanks again for helping push this conversation forward.

I think it’s important to highlight the struggles the Xode team has had because of Coretime: both their Kusama and Polkadot parachains have stopped producing blocks, which is very bad.


The last block was 703 hours ago at the time of writing:

~flipchan


Coretime Purchase Improvement with NFT XCM and a Secondary Market on Hydration

We have a major problem with Coretime’s buying and trading practices. It’s the worst thing I’ve seen or experienced on Polkadot in the last five years! It’s the key feature of Polkadot — shared security — and it’s not working properly. If we don’t solve it quickly and effectively, we could lose more projects and parachains.

It’s completely unusable!

It’s almost impossible to purchase the offered Coretime.

These strange, multi-level auctions are confusing because you never know when to place your bid.

The bidding periods are too short and the waiting periods are too long.

I hate to say it, but Polkadot has completely failed with this service!

  • I read the Polkadot Wiki.
  • I tried the ‘services’.

I got zero understanding of how it all works, and I’ve been in this ecosystem for five years!

Without a free market and intuitive core trading 24/7/365, it’s all unusable.

It’s overly complicated for no reason, and now someone is taking advantage of this inefficiency.

Use-case Overview

Coretime is already an NFT. It needs to be transferred into a Unique Network XCM-compatible NFT format to make it easier to use and transfer. This will allow these NFTs to be traded on any DEX that wants to implement this feature as a secondary market for Coretime NFTs, in addition to the primary market.

Negotiations should be held with Hydration and AssetHub, the new DEX, to implement this technology by default.

This could generate new liquidity and users for Polkadot, DEXes and the Unique Network, as well as attracting attention to the entire ecosystem.

More Details:


The community has rightly emphasized the need for clear usability, equitable access, continuous liquidity, and reliable network operation in the Coretime market. My proposed decentralized convex-preference model is designed to support and strengthen these goals.

By replacing auctions with a continuous, deterministic pricing mechanism, the model enables:

  • Fair access through simple, intuitive scalar preferences (α ∈ [0,1]),
  • Elimination of timing advantages and front-running risks,
  • Built-in resistance to hoarding or market distortion, and
  • Guaranteed convergence to a unique, system-wide price through Pareto-optimal local interactions.

This approach preserves agent autonomy while ensuring a transparent and accessible market for all participants—aligning closely with Polkadot’s decentralization ethos.

Importantly, it leverages the natural heterogeneity of agent preferences, which vary across use cases and network conditions. By allowing these differences to interact locally, the model unlocks the latent efficiency gains from surplus variation—capturing value that would otherwise be lost in rigid, auction-based mechanisms.

In doing so, it lays the foundation for a more robust, adaptive, and user-friendly Coretime economy—built on fairness, transparency, and protocol-level coordination.