We’ve seen SGX breaks every 12 months or so, but the latest SGX break compromised device keys, and Intel’s response has been pretty much a joke. We also have some SGX projects in the ecosystem, so it seems worth discussing what using SGX securely actually requires.
Of course, SGX could still be used for minor things, like on-chain games. Can you use SGX securely in serious ways?
I’d contend that SGX can be used securely, but not in the standard SGX threat model. In other words, you cannot simply trust a signature claiming that some SGX device ran this code with this input-output, because the SGX CPU doing the signing may be using an exfiltrated SGX key.
Instead, you should verify that the specific SGX CPU keys doing the signing come from some whitelist of SGX CPU keys whose owners and device histories you know.
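As a minimal sketch of this hardened check (all names hypothetical), the point is that a cryptographically valid attestation from an unknown device is still rejected:

```python
# Hypothetical sketch: accept an SGX attestation only if the signing
# CPU key is on our curated whitelist, not merely because the signature
# verifies against Intel's attestation chain.
KNOWN_CPU_KEYS = {
    "cpu-key-alice",  # owner and device history known to us
    "cpu-key-bob",
}

def accept_attestation(signing_cpu_key: str, signature_valid: bool) -> bool:
    # Standard SGX model: signature_valid alone would suffice.
    # Hardened model: the key must also be on the whitelist.
    return signature_valid and signing_cpu_key in KNOWN_CPU_KEYS

accept_attestation("cpu-key-alice", True)    # accepted
accept_attestation("cpu-key-mallory", True)  # rejected: valid Intel chain,
                                             # but unknown device history
```

The whitelist membership test is deliberately the last line of defence here: it is what survives even if Intel’s attestation chain itself is compromised by an exfiltrated key.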
Imagine you have some parachain that employs SGX in some capacity. The parachain should then maintain an on-chain whitelist of the SGX CPU key for each collator, along with a signed statement by each collator operator that they purchased the CPU new, purchased it relatively anonymously, and that the device never existed in an environment where it ran arbitrary code. In particular, the CPU should never have been used in a cloud environment.
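A rough sketch of the per-collator record such a parachain might keep on-chain (field names hypothetical; signature verification elided):

```python
# Hypothetical on-chain record: the collator's SGX CPU key plus the
# operator's signed provenance claims about the device.
from dataclasses import dataclass

@dataclass(frozen=True)
class CollatorSgxRecord:
    collator: str                    # collator account id (illustrative)
    sgx_cpu_key: str                 # the device's attestation key
    purchased_new: bool              # operator claims the CPU was bought new
    purchased_anonymously: bool      # purchase not linkable to the operator
    never_ran_arbitrary_code: bool   # e.g. never deployed in a cloud
    operator_signature: bytes        # signature over the claims above

def provenance_ok(r: CollatorSgxRecord) -> bool:
    # Signature verification omitted; only the policy check is shown.
    return (r.purchased_new
            and r.purchased_anonymously
            and r.never_ran_arbitrary_code)
```

These claims are not machine-verifiable, of course; the record only makes the operator accountable for them on-chain.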
If you take these steps, then your trust model becomes that each collator was unwilling or incapable of exfiltrating their SGX CPU key. That’s still not a great threat model.
Ideally, your parachain logic should enforce that multiple SGX CPUs from multiple collators sign off on the block before submission to Polkadot. You’re now trusting that at least one of the signing collators was unwilling or incapable of exfiltrating their SGX CPU key, which becomes reasonable once enough collators sign. You have a safety/soundness vs. liveness trade-off here, and more complex code, but at least this approach provides something like a decentralized threat model.
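The multi-collator sign-off above boils down to a threshold check over whitelisted devices. A minimal sketch (names and threshold hypothetical):

```python
# Hypothetical quorum rule: a block is submittable only once at least
# `threshold` distinct whitelisted SGX devices have signed it.
def quorum_reached(signers: set[str], whitelist: set[str], threshold: int) -> bool:
    # Only signatures from known, whitelisted devices count toward the quorum;
    # signatures from unknown CPU keys are simply ignored.
    return len(signers & whitelist) >= threshold

wl = {"cpu-a", "cpu-b", "cpu-c"}
quorum_reached({"cpu-a", "cpu-b"}, wl, 2)  # quorum met
quorum_reached({"cpu-a", "cpu-x"}, wl, 2)  # not met: "cpu-x" is unknown
```

Raising the threshold strengthens the safety side of the trade-off (more independent key exfiltrations needed to forge a block) while weakening liveness (more collators must be online and honest).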