Snowfork's Analysis of Sync Committee Security

Co-authored by Aidan Musnitsky

A recent article, Altair has no Light Client, highlighted flaws in the Altair Light Client (ALC) protocol and questioned its suitability for on-chain light clients.

The team at Snowbridge broadly agrees with the article; this document digs into the issues further and evaluates their implications for Snowbridge.

Traditionally, light clients are expected to preserve the full security and trust assumptions of the underlying chain’s consensus. That is clearly not the case with the ALC protocol, which does not fully preserve the Layer 1 trust model and results in a slightly weaker one.

At the heart of the problem is the Sync Committee, a group of beacon chain validators selected to provide extra signing duties:

  1. The committee consists of 512 members randomly chosen roughly every day (27 hours)

  2. A quorum is reached when two-thirds of the committee sign an attestation

  3. Members are not slashed for signing fraudulent attestations

  4. Members are penalised an inconsequential amount for inactivity

We should have performed more due diligence when evaluating the ALC protocol’s fitness for purpose, as we took for granted that it would preserve the same security as L1.

Having said that, upon digging deeper, we have found that this weakness in ALC is not something that will materially impact Snowbridge’s trust model and that a light client bridge is still radically more trustless and decentralised than any other solution.

Trust Assumptions

Ethereum

Let’s first consider some basic trust assumptions for Ethereum itself, and their implications:

  • ~67% of validators are honest, ~33% or less are dishonest: Ethereum should be stable and secure

  • ~66% of validators are honest, ~34% are dishonest: A major attack on availability and on consensus becomes possible, including creating 2 separate forks: https://ethereum.org/en/developers/docs/consensus-mechanisms/pos/attack-and-defense/#attackers-with-33-stake. This attack is very expensive though, and so will only make sense if there is a way to profit beyond that expense

  • 50% of validators are honest, 50% are dishonest: The above attack becomes easier, cheaper and more sustainable but is still quite expensive. The attackers can likely force Ethereum to need to defer to off-chain governance for true recovery.

  • < 50% of validators are honest: Sustainable censorship, short-term block re-ordering and major MEV become possible and cheap. The dishonest validators cannot be slashed on-chain, though off-chain governance would likely step in to fork and slash them off-chain.

  • < 33% of validators are honest: Full censorship and the ability to double-spend become easy. The dishonest validators cannot be slashed on-chain, though off-chain governance could step in to fork and slash them off-chain.

Next, let’s look at attacks on the ALC itself, rather than on Ethereum, and evaluate the weaker trust model that emerges from them.

Possible ALC takeover attacks

Any light client bridge that relies on an assumption of honesty must, at a minimum, accept these same trust assumptions and numbers. Even if they hold, we need to defend against several additional attack vectors that affect only the sync committee rather than the full Ethereum validator set.

Manipulation of RANDAO to gain control of the sync committee selection

The sync committee is formed every sync committee period (roughly every 27 hours) by randomly selecting active validators. RANDAO is used as the source of on-chain randomness for this selection.

RANDAO is not a perfect source of randomness; it is biasable and predictable to a certain degree. Assigned block proposers are able to influence RANDAO by either contributing or withholding their block candidate.
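For intuition, here is a simplified, self-contained sketch of how this sampling works, loosely modelled on the spec’s get_next_sync_committee_indices (the hash-based candidate pick, constants and example registry below are illustrative stand-ins rather than the exact spec):

```python
# Simplified sketch of Altair sync committee sampling. Loosely modelled on the
# spec's get_next_sync_committee_indices; the candidate pick below is a
# hash-based stand-in for compute_shuffled_index, and the registry is made up.
import hashlib

SYNC_COMMITTEE_SIZE = 512
MAX_EFFECTIVE_BALANCE = 32 * 10**9  # Gwei
MAX_RANDOM_BYTE = 2**8 - 1

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def select_sync_committee(effective_balances: list, seed: bytes) -> list:
    """Repeatedly sample candidates and accept each with probability proportional
    to its effective balance. The seed is derived from RANDAO, which is why a
    proposer able to bias RANDAO can bias this selection."""
    n = len(effective_balances)
    committee = []
    i = 0
    while len(committee) < SYNC_COMMITTEE_SIZE:
        # Stand-in for compute_shuffled_index(i % n, n, seed).
        candidate = int.from_bytes(sha256(seed + i.to_bytes(8, "little")), "little") % n
        random_byte = sha256(seed + (i // 32).to_bytes(8, "little"))[i % 32]
        if effective_balances[candidate] * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
            committee.append(candidate)
        i += 1
    return committee

# Example: 500,000 validators, all at the maximum effective balance.
balances = [MAX_EFFECTIVE_BALANCE] * 500_000
print(len(select_sync_committee(balances, seed=b"\x42" * 32)), "members selected")
```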

As shown in RANDAO Takeover, ~50% of the stake is required to gain control over RANDAO.

In the event of a RANDAO takeover, the Ethereum community would likely have bigger things to worry about than fraud in the sync committee. Even so, we need to accept that our trust level for complete bridge manipulation drops to a 50% honesty assumption rather than a 66% one.

Probability of a sync committee being dominated by a dishonest minority

On average, the membership of the sync committee should mirror that of the whole active validator set. However, we need to consider outlier sync committee periods where a random selection results in the committee being dominated by a smaller minority stake.

This probability can be approximated using a binomial probability distribution:

  • Total number of trials: 512 (size of the sync committee)

  • Endpoint: ≥ 342 (the number of dishonest validators that must be selected for the committee to be taken over)

  • Probability of success: The probability that a single dishonest validator will be selected for the sync committee, based on the percentage of dishonest validators on Ethereum as a whole. With 50% dishonest validators this is 1/2; with 33% dishonest validators it is roughly 1/3.

Plugging these parameters into Wolfram Alpha, we get the probability that a single sync committee will be dominated by a dishonest majority in various scenarios:

| Dishonest Validators | Honest or Neutral Validators | Chance of sync committee takeover |
| --- | --- | --- |
| 33% | 66% | 8.285 × 10⁻⁵⁴ |
| 40% | 60% | 1.86 × 10⁻³⁴ |
| 45% | 55% | 2.4577 × 10⁻²³ |
| 50% | 50% | 1.1814 × 10⁻¹⁴ |

These are all exceedingly low probabilities. Even with 50% of the full Ethereum validator set being dishonest and attempting a takeover every sync committee period (roughly once a day) for 5 years, i.e. ~1,825 attempts, a takeover remains practically impossible.
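For anyone who wants to reproduce these numbers without Wolfram Alpha, here is a minimal Python sketch of the same binomial tail (the function name and rounding are ours; small differences from the table can come from rounding the percentages):

```python
# Binomial tail P(X >= 342) for a 512-member committee, matching the parameters above.
from math import comb

def takeover_probability(p_dishonest: float, committee_size: int = 512, quorum: int = 342) -> float:
    """Probability that at least `quorum` of `committee_size` randomly selected
    members are dishonest, with each member dishonest with probability `p_dishonest`."""
    return sum(
        comb(committee_size, k) * p_dishonest**k * (1 - p_dishonest)**(committee_size - k)
        for k in range(quorum, committee_size + 1)
    )

for p in (0.33, 0.40, 0.45, 0.50):
    print(f"{p:.0%} dishonest -> {takeover_probability(p):.4g}")
```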

Disclaimers:

  • These calculations are a basic analysis and only approximations. There is a lot more nuance in these systems and calculations; for example, an attack similar to the RANDAO Takeover described above could be used to bias RANDAO and slightly shift the numbers in our calculation. However, we are confident that these approximations are practical and that, with ~50%+ honest validators, these nuances are unlikely to matter.

  • Our team is not mathematics/statistics heavy. This is a basic analysis and an approximation, and although we are confident in the above, it is certainly possible that there is a major flaw in our argument or calculations. We invite others who are more familiar with this area to audit and critique our analysis and to provide feedback if something major is off.

Part 2 - Practical Security

It’s important to understand how we define trust and honesty in this argument.

For the Ethereum trust assumptions, these are assumptions that are part of the protocol design and its game theory. They are based on a model in which protocol participants are rational economic actors looking to maximize their economic return. Trust in this model is game-theoretic: we trust that validators will act in their own self-interest and do whatever makes them the most profit. In Ethereum’s case, playing by the rules is profitable and breaking them costs money, so we trust Ethereum validators based on their own selfish interest.

For the ALC trust assumptions, there are no meaningful slashing conditions, so sync committee members do not stand to lose their stake by committing fraud. In the game-theoretic model, a rational, selfish committee member can commit fraud whenever an opportunity arises, without consequence. This means we cannot rely on game-theoretic trust for ALC, so our assumption of honesty for ALC is based on real-world honesty rather than game theory. This is a weaker trust model in game-theoretic terms, but in practice we don’t believe it introduces additional meaningful risk. ALC remains secure so long as there is no collusion across more than 50% of the total Ethereum validator set.

This section discusses additional practical considerations that further mitigate the risk of an Ethereum validator-based attack.

Validator statistics

Broadly, there are three kinds of beacon chain validators:

  1. Solo stakers

  2. Anonymous validators (including whales)

  3. Staking pools

Historically and at the time of writing, over two-thirds of active validators are controlled by a relatively small set of staking pools. The top five according to dune.com:

  1. Staking companies appointed by Lido: 31.36%

  2. Coinbase: 12.69%

  3. Kraken: 6.96%

  4. Binance: 5.79%

  5. Stakefish: 3.27%

The validator set and sync committee have additional skin in the game

Although committee members are not penalized on-chain for misbehavior, participating in fraud carries second-order, off-chain economic consequences.

According to the statistics presented in the previous section, subverting the sync committee would require major staking pools and companies to become compromised or engage in fraud, which could have significant consequences for their businesses. Even if such fraud does not directly affect Ethereum, it could make staking customers think twice about the security of their deposits. Notably, many of these staking pools also stake on Polkadot, where the effects of their actions would be even greater.

A large portion of the validator set is vulnerable to off-chain slashing, i.e. reputational loss that could affect their corporate treasuries or assets. Any practical analysis of an attack needs to consider these costs and risks in addition to the game-theoretic analysis and the on-chain system.

We recognise that this honesty assumption relies largely on Ethereum’s relatively centralised validating power. If most validators were controlled by solo stakers or anonymous whales, those second-order economic consequences for fraud would largely disappear, as those groups would face no reputational repercussions.

Pseudonymous validators

Validators are identified by their BLS public key, which is pseudonymous. If the sync committee is subverted, there must be a way to identify the organisations behind the attack; otherwise, these organisations would not face any second-order economic consequences.

In practice, over two-thirds of the validator set have been de-anonymized through several methods:

  1. Linking depositor addresses to known organisations using blockchain analysis.

  2. Lido, which controls 31% of the Ethereum stake, explicitly links addresses of third-party validators to identities through an on-chain DAO.

  3. Validators selected to act as block proposers can include arbitrary data, known as graffiti, in their blocks. This feature is often used by staking pools to identify the blocks they have authored. Staking pools have external incentives to truthfully and consistently identify the blocks they have authored.

Suppose conspiring validators (including staking pools) decide to commit fraud in the sync committee. They want to hide their fraud by anonymizing their validator addresses that may have already been unmasked through the above techniques. This could be done as follows:

  1. Withdraw their stake and stop validating

  2. Use Tornado Cash or similar approaches to anonymize their ETH

  3. Register as new validators and re-deposit their ETH

Steps (1) and (2) will be very visible to the Ethereum community. Users of staking pools will wonder why their staking rewards have stopped coming in and why these pools are no longer performing validator duties. In practice, this makes anonymization difficult to pull off successfully, especially for larger staking pools.

Co-ordinating an attack

The above attacks all assume that a dishonest validator set can effectively coordinate enough stake to mount an attack, both before sync committee selection and dynamically, on the fly, after it.

This would not be an easy feat to achieve in practice. We can certainly imagine some set of smart contracts or Dark-DAO style constructions that could facilitate this kind of on-the-fly coordination, but these are likely to be quite complicated to build, deploy and get buy-in for after each committee selection occurs, especially among a group of anonymous validators.

Such coordination would likely leave detectable fingerprints. The benefit of bridging to Polkadot is that its on-chain governance can react quickly to halt a brewing attack.

Part 3 - Mitigations

We believe that the risk of more than 50% of Ethereum’s validators colluding to break the bridge is very low and does not undermine the trustlessness of our bridge. However, some in the community may disagree. Even in that case, our Altair light client could implement various band-aids or augmentations to restore its security.

Avoid ALC by using ZK proofs of Casper FFG consensus

There has been some discussion on developing a ZK light client that follows Casper FFG, Ethereum’s PoS consensus protocol.

However, this is still an area of active research and is unlikely to result in a production solution this year.

Shield against dishonest sync committee attacks

The unconstrained behavior of sync committee members can result in several classes of attacks:

  1. Equivocation: The sync committee signs a fraudulent header at the same slot number as a valid header

  2. Data withholding: The sync committee is inactive and refuses to sign valid headers

  3. Data withholding in conjunction with fraud: The sync committee withholds signatures on valid headers and instead signs fraudulent headers

There are two different ways in which these attacks can be shielded against:

Fishermen

These are permissionless agents which watch the updates being posted to the light client. If they detect equivocation, they can post a fraud proof to the light client.
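As a rough illustration (our own sketch, not Polytope’s or Snowbridge’s actual design), an equivocation fraud proof boils down to presenting two quorate sync committee attestations over different headers at the same slot:

```python
# Illustrative equivocation check for a fisherman / on-chain fraud proof.
# Types, field names and the quorum constant are assumptions for this sketch;
# BLS signature verification is assumed to have happened already.
from dataclasses import dataclass

QUORUM = 342  # two-thirds of the 512-member sync committee, rounded up

@dataclass
class SignedHeader:
    slot: int
    header_root: bytes   # hash of the beacon block header
    participants: int    # number of sync committee members that signed

def is_equivocation(a: SignedHeader, b: SignedHeader) -> bool:
    """Fraud condition: same slot, different headers, both carrying a quorum."""
    return (
        a.slot == b.slot
        and a.header_root != b.header_root
        and a.participants >= QUORUM
        and b.participants >= QUORUM
    )
```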

Polytope Labs is researching solutions to this issue, as documented in https://research.polytope.technology/consensus-proofs. However, while this mechanism is permissionless, it has certain limitations:

  1. The light client will need to introduce a challenge window to give fishermen time to submit fraud proofs. This delays settlement of bridging activity for users.

  2. Fishermen and relayers will need to put up collateral. This could significantly increase the cost of capital needed for the bridge, cap its economic security at the staked amount, and constrain the maximum value of assets that can flow through it securely.

  3. Fishermen would need to use data-withholding challenges to detect data withholding, because it cannot be detected directly on-chain. Challenges introduce many new kinds of griefing attacks into the system and radically increase the complexity of the game-theoretic analysis of its stability. The security model becomes much more dynamic, as it now depends on stake value, bridged asset value and slashing costs, all of which may change on the fly, so security becomes much harder to reason about and verify.

Approval Committee

The bridge can be augmented with a trusted additional approval committee (essentially a multisig) that must approve any updates posted to the on-chain light client. The responsibility of the committee is to verify these updates against a full Beacon chain node. The committee is only able to approve data; it cannot introduce data on its own.
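To make the mechanism concrete, here is a minimal sketch of how such an approval check could be layered on top of the existing ALC verification (the types, threshold and names are assumptions for illustration, not the actual pallet design):

```python
# Illustrative acceptance rule for a light client update with an approval committee.
from dataclasses import dataclass, field

SYNC_COMMITTEE_SIZE = 512
APPROVAL_THRESHOLD = 5  # e.g. a 5-of-7 multisig; an assumed parameter

@dataclass
class LightClientUpdate:
    header_root: bytes
    sync_committee_signers: int                   # sync committee members that signed
    approvals: set = field(default_factory=set)   # approval committee member ids

def accept_update(update: LightClientUpdate) -> bool:
    """An update is accepted only if it carries a sync committee quorum AND enough
    approvals. The approval committee can only veto by withholding its signature;
    it cannot introduce headers of its own, so it cannot forge state by itself."""
    has_sync_quorum = update.sync_committee_signers * 3 >= 2 * SYNC_COMMITTEE_SIZE
    has_approvals = len(update.approvals) >= APPROVAL_THRESHOLD
    return has_sync_quorum and has_approvals
```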

This mechanism can safely mitigate sync committee attacks; however, it does introduce a censorship risk and does mean that committee participants need to be selected and trusted.

Having said that, it is important to understand the nuance of this approach as opposed to a traditional multisig bridge:

  • In a pure multisig bridge (like Wormhole or Axelar), the multisig and any economics/token/chain related to it need to be trusted to provide availability and protect against censorship, fraud and theft of funds.

  • In this light client + approval committee design, the multisig only needs to be trusted for availability and censorship resistance. The multisig cannot commit fraud or steal funds unless both the multisig and the beacon chain sync committee are taken over together, at the same time, in a co-ordinated attack.

This distinction is significant - the risks for an end user are significantly lower in the latter design, as the multisig cannot steal funds and the bridge can always fall back to Polkadot governance in the event of availability or censorship issues.

Although we don’t think layering either additional shield mechanism on top of the base ALC protocol is needed, we do think that the ALC + Approval Committee design is the most trustless and secure fallback option for end users in the event that our earlier assumptions are invalidated.

Conclusion and Summary

The trust assumptions for ALC and Snowbridge, despite the weaknesses in ALC that have been uncovered, remain very similar to the practical trust assumptions for Ethereum itself. A trust assumption of 50%+ honest validators in practice is similar to the game-theoretic 50%+ honest validator assumption that the bridge had before these ALC weaknesses were discovered. In a world where this kind of collusion can occur, it is far more likely that we would see attacks on Ethereum itself than on the bridge.

We conclude that the ALC weaknesses are likely immaterial for our light client bridge, as well as other light clients. Some other teams developing beacon light clients are in agreement on this matter.

However, if the community disagrees and sees the risk that over 50% of Ethereum’s validators could collude as significant, we would be willing to consider the above mitigations. We believe that the approval committee is the most realistic addition that could be shipped reliably and in a reasonable time frame. It would be a smaller piece to add to the project, although it has various pros and cons, including:

  • :+1: Adds security, as it also brings an extra safety net against bugs in the light client

  • :+1: Removes ALC collusion risk

  • :+1: Could be added as an extra piece post-launch, as a response if TVL scales up very high

  • :-1: Will delay the launch date slightly

  • :-1: Adds risk of temporary censorship to the bridge

  • :-1: May be complicated to set up, co-ordinate and automate the committee’s responsibilities

Our current plan is to continue with the launch of the bridge as planned, without any additional committee.


Stellar writeup, thank you. I would love it if Prestwich would counter your binomial math [or the “In practice, we know the 1/3”]. If more than 2 people in the community express reservations, adding the Approval Committee AFTER TVL gets above some threshold like $100MM is a wise choice. Do NOT delay the launch to add the Approval Committee!


It’s a nice writeup, but arguing on a different level to Prestwich, unfortunately (honest majority assumption vs game theory).

The argument made above is a hypergeometric sampling one, i.e. assuming that the distribution of malicious validators is static, what are the odds of sampling 512 validators, of which 2/3 are malicious? Extremely unlikely, with a large enough set.
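As a quick numerical aside: assuming a static set of roughly 500,000 validators, half of them malicious, the exact hypergeometric tail and the binomial approximation used in the writeup agree closely. A small check (the validator count is my own assumption):

```python
# Compare the binomial tail used in the writeup with the exact hypergeometric tail
# (sampling 512 members without replacement from an assumed 500,000 validators).
from math import comb

N_VALIDATORS, MALICIOUS, COMMITTEE, QUORUM = 500_000, 250_000, 512, 342

# Binomial approximation (sampling with replacement, p = 0.5, so p^k * (1-p)^(n-k) = 0.5^n).
binom_tail = sum(comb(COMMITTEE, k) * 0.5**COMMITTEE for k in range(QUORUM, COMMITTEE + 1))

# Exact hypergeometric tail (sampling without replacement from the finite set).
denom = comb(N_VALIDATORS, COMMITTEE)
hyper_tail = sum(
    comb(MALICIOUS, k) * comb(N_VALIDATORS - MALICIOUS, COMMITTEE - k) / denom
    for k in range(QUORUM, COMMITTEE + 1)
)

print(f"binomial ~ {binom_tail:.3e}, hypergeometric ~ {hyper_tail:.3e}")
```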

Prestwich’s argument is that the distribution of malicious validators is dynamic, i.e. any validator can become malicious given the right incentives. And the incentives for ALC sync committee validators are actually very high. They have no slashing, and even if they did, there would be only 32 ETH * 512 * 2/3, or 10,923 ETH, altogether on the line. This is worth roughly $20m @ $2,000/ETH. Sounds like a lot, but most bridges have much higher TVL than that.

To get a bit more practical and discuss why validators would misbehave, it’s clear that MEV is now expanding into consensus-level attacks. See https://www.coindesk.com/business/2023/04/03/ethereum-mev-bot-gets-attacked-for-20m-as-validator-strikes-back/ . I don’t think it’s hard to imagine a future where MEV tools integrated into Ethereum clients can serve as an automated and secretive coordination layer for the sync committee, and get them to produce a quorum on an invalid block that lets them steal from the bridge. I have high conviction that this will eventually happen.

The Approval Committee has the same issue, although it does practically “buy time” because it’s separate from the MEV stack that would enable this attack.

Could you explain what you mean by “invalid block” in “get them to produce a quorum on an invalid block”? (I’m not an expert arguing with you, just a learner.) When I read @vbuterin’s multichain vs cross-chain comments he contrasts 2 cases:

  1. [Native ETH] “Even if 99% of the hashpower or stake wants to take away your [native] ETH, everyone running a node would just follow the chain with the remaining 1%, because only its blocks follow the protocol rules. If you had 100 ETH, but sold it for 200000 USDC on Uniswap, even if the blockchain gets attacked in some arbitrary crazy way, at the end of the day you still have a sensible outcome - either you keep your 100 ETH or you get your 200000 USDC. The outcome where you get neither (or, for that matter, both ) violates protocol rules and so would not get accepted.” ==> The 1% of honest nodes never follow an “invalid block”.

  2. [Bridged ETH] “Now, imagine what happens if you move 100 ETH onto Polkadot to get 100 Polkadot-WETH (via Snowfork within BridgeHub), and then Ethereum gets 51% attacked. The attacker deposited a bunch of their own ETH into Polkadot-WETH and then reverted that transaction on the Ethereum side as soon as BridgeHub confirmed it. The Polkadot-WETH bridge is now no longer fully backed, and perhaps your 100 Polkadot-WETH is now only worth 60 ETH. Even if there’s a perfect ZK-SNARK-based bridge that fully validates consensus, it’s still vulnerable to theft through 51% attacks like this.” => Again there is no “invalid block”, BUT there is a reverted transaction that causes the value of the Polkadot-WETH to dwindle maybe all the way to 0.

The primary way of “stealing from the bridge” would be via (2), not by generating invalid blocks that are outside protocol rules but by reverting transactions (or censoring), right?

After seeing Polkadot Dispute Storm - The Postmortem (1 validator doing something buggy, but easily could have been > 2/3 of the network) I CAN see how automated MEV tools integrated into Ethereum clients would similarly generate a coordinated “attack” by reverting transactions (or censoring) [but not actually producing “invalid blocks”] and make the probability of attack much, much more likely than the astronomically small 10^-54, maybe even > 1/2. Not because the humans running the validators are malicious, but because the scenario where MEV tools would pursue incentives to revert/censor seems SUPER plausible. Or (with the paranoia of all the AI doomsday types) these tools – with only a bit more intelligence – may well conclude that it’s actually VERY rational to have an Ethereum cartel and “accidentally” kill off any neighboring L1/L2 ecosystem bridge. Almost all human monopolists behave this way, why wouldn’t the MEV tools be programmed to behave in cartel-like ways just like them?

So, I changed my mind: it IS important to add the Approval Committee and I urge both Snowfork and t3rn to take @rphmeier fears seriously.

That quote is in the context of 51% attacks, so it would be correct if the bridge were based around a full Casper FFG light client. But if we’re considering a bridge that relies on the sync committee (i.e. a bridge that accepts any block with a sync committee quorum as valid), then the sync committee just needs to create a quorum on an Ethereum block that doesn’t actually exist and has a fake state. This fake state would contain information such as “Account ETH-X is now bridging over 100,000 ETH to Polkadot”. The bridge would then receive the sync committee’s fake block, as well as merkle proofs of the corresponding fake state, and mint 100,000 WETH to dump within Polkadot. No 51% attack needed.

It is true that all bridges are vulnerable to 51% attacks, but given the level of economic security of Ethereum, that would be an acceptable risk, i.e. anything that engages with Ethereum takes on that same risk (exchanges, UIs, full node operators, …)


Yipes @rphmeier you really did mean invalid blocks! I didn’t realize the sync committee can (a) totally lie in printing up invalid blocks while also (b) honestly following protocol. The human words “honest”, “collusion”, “corrupted” when combined with random selection lead us to fall for the sophistry of statistics. The cognitive illusion is that it is really hard to coordinate independent humans.

But the real world doesn’t have humans independently corrupted; the real world has software upgrades to take advantage of new revenue. The scenario of an out-of-protocol MEV plugin being deployed is extremely plausible, ok, so the Approval Committee should be a requirement as the TVL goes above, like, $100K.

So I think we need an explanation of how cross-chain bridges aren’t going to land the Approval Committee members in jail like the Tornado Cash devs. At the very least, governments would expect the committee to abide by KYC/AML compliance requirements from some blacklist, the same as I imagine Circle CCTP’s attesters are required to do. This compliance mechanism should be taken seriously, and the Approval Committee members should be paid well for taking on this very serious responsibility and threat to their physical being.

Hi, I’m Paul, a developer at t3rn working closely on the Ethereum light client implementation. Yesterday we published a blog post sharing our view on this topic and covering the main points of this debate. We invite everybody to take a look and share their thoughts.

While this is theoretically possible, we at t3rn find it hard to achieve in practice due to:

  1. The need for a participation rate of over 75% for a realistic chance of success.
  2. The implausibility of all participating members coordinating in secret.

We think this level of coordination is nearly impossible to maintain without detection.

@petscheit The > 75% coordination you find statistically impossible becomes plausible when each participant gets their software auto-updated. The fallacy is:

People don’t update their software in accordance with a binomial distribution.

Because auto-updating software is the norm across every device and software mechanism (even geth/erigon/prysm/…), it should take very little imagination to see how a plugin + auto-update mechanism could very rapidly be socialized/installed and used to conduct ALC-based bridge attacks, without even their owners knowing. It only requires a software update mechanism that is vetted by a minority of people who then cause the majority of nodes to get the update.

So what you need to address is how the mass-software update scenario is impossible. Can you give it a shot?

I don’t think we want an outcome where two groups of people differ on whether they think the above situation is plausible vs implausible. We want an outcome where it’s impossible for software to be auto-updated, or where there are such severe consequences that this auto-update scenario would result in a mass-slashing event [which is Prestwich’s point].

@Vincent @petscheit Could you

  1. evaluate zkCasper: A SNARK based protocol for verifying Casper FFG Consensus for usability in your systems
  2. if usable, estimate the level of effort involved
  3. describe what it would mean for a bridge user’s experience relative to your current design

We are seeking alternatives, because the idea that we wait until coordination is detected appears reckless: a lot can happen in the hours before something is detected, and in the Polkadot ecosystem it will take weeks to sort out the consequences, and you may or may not have some backer to rescue your bridge. Saying “oops, we lacked the imagination to anticipate this, and did the best we could” is sort of reckless?

zkCasper is still in active research; the next stage will be to implement the packed accountable SNARK for the BW6-767/BLS12-381 curves, then finally the prover/verifier.

With the prover/verifier libraries it should be trivial for anyone to use it in a pallet, as long as they understand that it’ll take up a lot of blockspace.


What you’re describing is a supply chain attack, where validator nodes in the sync committee are somehow corrupted by third-party software dependencies. However, this isn’t a problem unique to the Ethereum ALC, or even to blockchains.

It is highly improbable that a majority of beacon chain validators in the ALC can be corrupted by MEV middleware, given the current level of Ethereum’s client diversity (different validator implementations).

For example, with the Lighthouse and Lodestar implementations, the MEV middleware can only interact with the validator process via a well-defined and constrained Builder API. The MEV middleware can only pass execution layer payloads (normal eth1 blocks) to the validators.

Malicious MEV middleware cannot cause honest validators to sign fraudulent sync committee attestations. Honest validators still have a duty to participate in consensus, which in turn ensures that the execution payloads provided by MEV middleware are valid according to a majority of validators. This constrains the kinds of attacks that MEV middleware are able to orchestrate.

That said, Robert is correct that MEV middleware can be used as a secretive coordination layer for dishonest validators. However, in that case, we’re back to the honesty arguments which @petscheit and I have described.


Hi @sourabhniyogi,
thanks for your quick reply; you’re touching on a couple of interesting points.

Firstly, we are not saying it’s impossible to reach malicious participation of over 75%; it certainly isn’t. However, given that there are over 550k validators globally, such participation seems implausible to coordinate in secret or at such speed that it would simply happen overnight. If such a trend became apparent, we could always deactivate the pallet or utilize attesters. Attesters are staked t3rn network participants used for notarization on foreign chains, so that would fit their profile.

Distribution:
As outlined in the blog post, we consider any validator with the technological capability to collude as malicious. In our opinion, this scenario is binomial.

Automatic Software Updates:
We are not aware of any Beacon chain clients that implement any type of auto-update feature. In our opinion that does not follow the Ethereum ethos and would be a major risk to the network, as a GitHub repository would act as a single point of failure.

We also feel that the mass auto-update scenario you outlined could apply to Casper FFG as well. One could picture a scenario where a modified Beacon client signs malicious deposit transactions into bridging contracts hosted on Ethereum. These could then be used to drain funds from the bridge on another chain. Once 75% (or in that case 67%) of the network is malicious, everything is possible.

SNARKs
We are exploring SNARK-based approaches, especially given the development of recursive proof construction. While this is a promising direction, we are currently focusing on it to enable cheap inclusion proofs for Ethereum headers stored in our light client.


Very cool project, was not aware this is being worked on already. I have a bit of a SNARK background myself and am curious to see how you will approach this. Looking forward to seeing how this evolves.
