Do you need more block history in the ink env?


On the frontend in polkadot-js, we can use api.at(blockHash) to rewind the blockchain by up to 256 blocks (as these are stored on the RPC nodes) and essentially run queries against recent history, as if we were at that block.
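For readers who haven't used it, the behaviour being described can be modelled roughly like this (a toy TypeScript sketch of the semantics only, not the real polkadot-js API; the class and names are illustrative):

```typescript
// Toy model of querying state "as of" a recent block, where only the last
// WINDOW block states are retained and anything older is pruned away.
const WINDOW = 256;

class RecentStates<S> {
  private states = new Map<number, S>();

  record(blockNumber: number, state: S): void {
    this.states.set(blockNumber, state);
    // Prune anything that has fallen outside the window.
    for (const n of Array.from(this.states.keys())) {
      if (n <= blockNumber - WINDOW) this.states.delete(n);
    }
  }

  // Analogous to api.at: read the state as of a recent block, or fail if pruned.
  at(blockNumber: number): S {
    const s = this.states.get(blockNumber);
    if (s === undefined) throw new Error(`state at #${blockNumber} is pruned or unknown`);
    return s;
  }
}
```

The key point is that `at` succeeds only inside the retained window, which is exactly the constraint being discussed for contracts.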

It would be nice if ink could do the same, especially considering the ink computation is being carried out on the same node containing those last 256 blocks (I believe).

This would be very handy for time-sensitive contracts that require recent history access. We have a contract which fits that case, and we’re currently storing historical data manually across a recent window of 100 blocks. This adds significant bloat to the contract size and storage. If we had a method in ink like the frontend’s .at(), this problem would resolve very nicely and our contract would become greatly more efficient.

Does anyone else need this functionality? Interested to know your thoughts :slight_smile:

I think it would be great if you could provide an example use case for this. What kind of historic data would you like to query, and why?

It’s worth noting that this would need to be implemented at the pallet-contracts level; ink! is only the language. Also, I think it’s not even possible to implement, because the storage available to a runtime execution is from a particular block. It doesn’t have access to the whole DB.

Hi, I work with @christ

The use case for this is anything requiring recent data to conduct business logic.

This situation arises anytime someone needs to store time series data on chain and use it in the contract’s business logic.

For example, we’re storing the history of external events in our contract. Each event has a pass/fail attribute alongside a block number, creating a time series. We’re then using this data to determine the quality of the user (lots of passes are good, lots of fails are bad) via the recency of the events, in addition to whether further events are required due to a lack of recent data.
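The manual storage pattern described (a rolling window of pass/fail events, like the 100-block window mentioned earlier) might look roughly like this as a toy TypeScript model; the real contract would be ink!/Rust, and all names here are illustrative:

```typescript
// Toy model of manually keeping a rolling window of per-block events in
// contract storage -- the source of the storage bloat described above.
const HISTORY_BLOCKS = 100;

interface EventRecord { block: number; passed: boolean }

class EventHistory {
  private records: EventRecord[] = [];

  push(record: EventRecord): void {
    this.records.push(record);
    // Evict anything older than the window, relative to the newest block.
    const cutoff = record.block - HISTORY_BLOCKS;
    this.records = this.records.filter((r) => r.block > cutoff);
  }

  // Fraction of retained events that passed (a crude "user quality" score).
  passRate(): number {
    if (this.records.length === 0) return 0;
    const passes = this.records.filter((r) => r.passed).length;
    return passes / this.records.length;
  }
}
```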

A more simplistic use case would be rate limiting. E.g. go back 1…N blocks to check whether a user met a certain criterion, and error if they did not.
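A minimal sketch of such a rate-limiting check, in TypeScript for illustration (the window size, limit, and function name are all made up):

```typescript
// Toy sketch: allow an action only if the user performed fewer than LIMIT
// actions within the last N blocks.
const N = 50;
const LIMIT = 3;

function isRateLimited(actionBlocks: number[], currentBlock: number): boolean {
  const recent = actionBlocks.filter(
    (b) => b > currentBlock - N && b <= currentBlock,
  );
  return recent.length >= LIMIT;
}
```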

Anytime you’re dealing with data over time, this question of historic data occurs. Given the contract is running on a blockchain acting as a ledger, it lends itself nicely to storing this data over time. Now consider that your data over time is only required up to 256 blocks in the past, and you arrive at our position. It would be a very neat solution to a problem which is more widespread than it may seem.

I agree that this would have to be implemented at a pallet level and exposed to ink via the contracts pallet. I am not familiar enough with the storage on the runtime, but I would be keen to chat further on this. I was under the impression that given a node has the last 256 blocks, it would be able to access any of them, and thus the state of a contract’s storage at any of those blocks.

If you need access to historic data your contract generated, you can just store it in the contract storage.

I was under the impression that given a node has the last 256 blocks

Kind of yes, kind of no. Under the default config, each substrate node stores all blocks from genesis. 256 (in the standard config) is the pruning window, outside which the historical state is inaccessible. This is very similar to how it works in other chains.

However, the fact that this state is available in the DB doesn’t mean you can access it everywhere. The runtime (so in particular pallet-contracts) is the state transition function, which turns old_state into new_state. In particular, it is given access only to the current state, which is a fundamental design choice in substrate.
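The state-transition-function framing can be sketched as a pure function (a toy model for illustration, not actual Substrate code):

```typescript
// Toy model: the runtime is a pure function from (old state, block) to new
// state. It never receives earlier states -- only the one it transitions from.
type State = Map<string, number>;
type Block = { transfers: Array<{ from: string; to: string; amount: number }> };

function stateTransition(oldState: State, block: Block): State {
  const newState = new Map(oldState); // work on a copy of the current state only
  for (const { from, to, amount } of block.transfers) {
    const fromBal = newState.get(from) ?? 0;
    if (fromBal < amount) continue; // skip invalid transfers in this toy model
    newState.set(from, fromBal - amount);
    newState.set(to, (newState.get(to) ?? 0) + amount);
  }
  return newState;
}
```

Nothing in this signature gives the function a way to reach back to earlier states, which is the design constraint being pointed out.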

If you need access to historic data your contract generated, you can just store it in the contract storage.

But then the data will be replicated many times throughout the chain and fees will increase for dapp operators due to increased StorageDeposit.

256 (in the standard config) is the pruning window, outside which the historical state is inaccessible.

If the state is there, why not make it available? This would save contract developers from devising n different ways to duplicate the same data, bloating the chain.

But then the data will be replicated many times throughout the chain and fees will increase for dapp operators due to increased StorageDeposit.

It is not really replicated. Historical state is treated differently from current state. Accessing historical state from contracts is impossible in most chains, because that would make pruning impossible.

If the state is there, why not make it available? This would save contract developers from devising n different ways to duplicate the same data, bloating the chain.

See above. But again, you are essentially asking why we do pruning at all. We do it to reduce disk requirements.

Then I guess you could ask: OK, pruning is fine, but why don’t we have access to the last 256 unpruned block states? There are a few reasons, I guess, although I’m not a substrate core dev, so take it with a grain of salt:

  1. Clean abstraction – runtime takes a single state as input, not a sequence of 256 states – that would not be clean.
  2. Having access to N last states and not ALL states is a weird constraint that would surely cause developers to make stupid mistakes.

Thanks for your thoughts.

We’re not talking about making all historical state available, only the state that we know will definitely exist on the nodes during smart contract execution. I agree that it’s impossible to make state older than 256 blocks accessible, as it doesn’t exist. If the pruning size were set to 1, for some unknown reason, there would be no historic state available in the smart contract.

  1. Clean abstraction – runtime takes a single state as input, not a sequence of 256 states – that would not be clean.

The Runtime could continue to take a single state as input and also take a separate variable called temp_history that contained up to 255 (pruning_size-1) other states. Or it could presumably be implemented in many other ways that would ensure clean code.
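Purely as an illustration of that idea, the hypothetical interface might look like this (all names here are invented; nothing like this exists in Substrate today):

```typescript
// Hypothetical sketch of the proposal above: the STF still takes a single
// current state, plus a bounded, read-only window of up to pruning_size - 1
// recent states passed as a separate argument.
type State = Map<string, string>;

class TempHistory {
  constructor(private readonly window: Map<number, State>) {}

  // Read-only lookup; undefined means the block is outside the retained window.
  stateAt(blockNumber: number): State | undefined {
    return this.window.get(blockNumber);
  }
}

// The transition signature stays clean: one current state in, one new state
// out, with history available only through the read-only TempHistory handle.
function executeBlock(current: State, history: TempHistory): State {
  // ... contract logic could consult history.stateAt(n) here ...
  return new Map(current);
}
```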

  2. Having access to N last states and not ALL states is a weird constraint that would surely cause developers to make stupid mistakes.

I agree, this could be confusing for newer smart contract developers. There would need to be very clear error messaging from the ink env if the historic state didn’t exist.

What you are asking for is essentially a rewrite of substrate from scratch. I think it’s best to ask the substrate core devs about that, but I doubt this will happen.

Also, I don’t see any reasonable use case for this that would justify introducing this complexity.

What about on-chain verification of inclusion and storage proofs using a smart contract?

@DamianStraszak thanks for your input on this. I think we need a core-dev to weigh in now on whether this is doable on substrate and worth the time/effort vs reward.

I still think this would be a good feature, because the 256 blocks are stored anyway, so it makes sense to use all the data we’ve got on a node. However, I appreciate that large core changes to substrate are rocket science.

Technically, accessing historical ink! contract state is possible, since contract state is part of the whole state at a given block. So I think we can still access contract state at a given block, as long as the full state at that block is still around and has not been pruned.

Under the hood, when you call contract.query.get to call the get method on an ink! contract, the client (pjs or dedot) makes an RPC state_call to the contractsApi.call runtime API. state_call accepts a block hash parameter that lets us choose the block at which to make the state call. So if we can somehow pass a historical block hash into contract.query.get, then we can query the state at that specific block.

I was able to do that with pjs, with a trick to bypass ContractPromise’s api-instance check, to access the historical state of a flipper contract, something like this:

const api = await ApiPromise.create(...)
const apiAt = await api.at(...)

// Some trick to bypass the api instance check of `ContractPromise`
// @ts-ignore
apiAt.isConnected = true
// @ts-ignore
apiAt.tx.contracts = { instantiateWithCode: () => {} }
// @ts-ignore
apiAt.tx.contracts.instantiateWithCode.meta = { args: Array(6).fill(0) }

// Initialize the contract instance using the apiAt
const contract = new ContractPromise(apiAt as unknown as ApiPromise, flipperAbi, 'contract address');
// Now you can query the contract at that block:
console.log(await contract.query.get(contractAddress, { gasLimit, storageDepositLimit }));

While this is doable with the legacy JSON-RPC API, it will probably not be the case with the new JSON-RPC spec, especially for light clients for now. Under the new spec, light clients currently only support the chainHead_-prefixed API for accessing state, which means a light client is only interested in the head of the chain, and we cannot access historical blocks/state with a light client (technically we can, if some historical blocks are still pinned). For RPC nodes, I think we can access historical blocks/state via the archive_-prefixed methods if the nodes support them, but this forces dapps to use RPC nodes, which is not ideal.

@sinzii thanks for your input! This is exactly what I was imagining, if pjs can get the state at a given block then the contract should be able to as well.

I agree with your observation regarding pruning, and that is what this idea hinges on. I had not considered light clients, and their use of only the current block would certainly pose a problem for this idea. I am not that knowledgeable about light clients – are light clients intended to conduct contract queries and transactions?

If light clients are to be executing contracts, then I think this idea is blocked. Light clients by their nature must keep a small or zero block history to stay light. However, the value of light clients in distributing RPC load is greater than the value of this idea, so I think this is the best arrangement.

Hi @goastler, it would be nice, but I don’t think what you are asking is doable (reading the past state of a contract when executing it).

  • Each node is free to configure the pruning threshold the way they want, so not all nodes would be able to build your block, and you would run into consensus issues
  • In a parachain setup, how would a validator validate your block? Validators don’t even have access to the state; all they can do is read it from the PoV, and I don’t see how you could include that piece of data in the proof

A possible way to achieve this is by storing the last N relay state roots in the parachain runtime and making them accessible to smart contracts. This approach would allow smart contracts to verify proofs of past states up to N blocks back.
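As a very rough illustration of that scheme (a toy model only: real storage proofs are Merkle trie proofs against the relay state root, the hash here is a stand-in, and all names are invented):

```typescript
// Toy model: the chain retains the last N relay state roots, and a contract
// verifies a claimed historical value by checking its proof against one of
// those retained roots. Hashing is faked for brevity.
const N = 16;

function toyHash(data: string): string {
  // Stand-in for a real cryptographic hash.
  let h = 0;
  for (const c of data) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h.toString(16);
}

class RecentRoots {
  private roots: string[] = [];

  record(root: string): void {
    this.roots.push(root);
    if (this.roots.length > N) this.roots.shift(); // keep only the last N
  }

  // The claimed root must be one we retained, and the "proof" in this toy
  // model is simply that the value hashes to that root.
  verify(value: string, claimedRoot: string): boolean {
    return this.roots.includes(claimedRoot) && toyHash(value) === claimedRoot;
  }
}
```

The design point is that verification only ever needs the small ring of retained roots, not the historical states themselves.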

But I don’t think this functionality should be implemented in the contracts pallet…