"We have successfully extracted attestation keys, which are the primary mechanism used to determine whether code is running under SGX. This allows any hacker to masquerade as genuine SGX hardware, while in fact running code in an exposed manner and peeking into your data. We demonstrate concrete security breaks on real-world software utilizing SGX, such asSecret Network,Phala,Crust, andIntegriTEE."
“[As SGX] memory encryption is deterministic, we are able to build a mapping between encrypted memory and its corresponding unencrypted memory. Although we cannot decrypt arbitrary memory, this encryption oracle is sufficient to break the security of constant-time cryptographic code.”
"WireTap is considered by Intel to be outside the threat model, as SGX offers no protections against physical attacks. Thus, there are no current mitigations besides running servers in secure physical environments. At the time of publication SGX running on Scalable Xeon servers is vulnerable to memory interposition attacks and we expect this will remain the case in the foreseeable future. We also reccomend reviewingIntel’s guidanceon WireTap and BatteringRAM."
This is certainly an interesting attack based on a design flaw, but it requires physical access to the server. It isn't news that substantial SGX vulnerabilities exist in both integrity and confidentiality once an attacker with physical access must be assumed.
This just emphasizes that SGX solutions shouldn't rely on the security of a single machine alone, and should reliably limit who can have physical access. This new vulnerability doesn't actually change anything in the general threat-model evaluation of SGX.
IMO SGX still is a security enhancement if applied correctly.
Yes, that’s why it blows up the possibility of permissionless participation.
In other contexts, like banking or permissioned and trusted providers, it is less critical, but it reintroduces full trust.
First, you cannot assume any confidentiality in multi-party computations, and you need to check their correctness yourself, so the model brings you nothing besides unneeded and costly redundancy?
Another fun attack using this: You work for a shipping or delivery company and run this attack on every CPU being shipped to some known heavy SGX user’s data center, like say Amazon. lol
If you want SGX then you should buy your CPU at some computer store in a random city, and pay in cash. At that point, you could have a zkSNARK anonymize your specific CPU, so if someone pre-broke it then they cannot tell which one you're using.
You do need the node to be identified of course, because otherwise you need some complex process for registering the CPU via a threshold OPRF or whatever, but you could have the node be anonymously identified using a ring VRF. All this last part seems orthogonal to the attack, of course.
The most recent vulnerabilities also affected Intel TDX and AMD SEV-SNP, used by blockchains like Near, and offered by cloud providers like AWS and Google Cloud to launch CVMs (Confidential Virtual Machines).
Seems all of them use deterministic AES-XTS encryption, which doesn't append random bytes to the encrypted data and can be easily broken. Other blockchain protocols like Near's Shade Agents rely on CVM technology to create Agents and offer "Confidential AI".
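To see why determinism is the problem, here is a toy Python model (a keyed SHA-256 pad stands in for the block cipher; this is NOT real AES-XTS or the actual SGX/TDX memory encryption engine, just an illustration of the property). Because encrypting a given plaintext block at a given address always yields the same ciphertext, an attacker who ever learns one (plaintext, ciphertext) pair for an address can recognize that plaintext whenever it recurs, which is exactly the "mapping between encrypted memory and its corresponding unencrypted memory" described above:

```python
import hashlib

def toy_encrypt(key: bytes, address: int, block: bytes) -> bytes:
    # Stand-in for deterministic tweaked encryption: the pad depends only on
    # key and address, so identical plaintext at an identical address always
    # produces identical ciphertext (no nonce, no randomness).
    pad = hashlib.sha256(key + address.to_bytes(8, "big")).digest()[:len(block)]
    return bytes(a ^ b for a, b in zip(block, pad))

key = b"secret-memory-encryption-key"

# The attacker builds a dictionary from observed (address, ciphertext) pairs
# whose plaintext they happen to know (e.g. code they supplied themselves).
known = {}
ct = toy_encrypt(key, 0x1000, b"AUTH_OK_")
known[(0x1000, ct)] = b"AUTH_OK_"

# Later, without the key, the attacker recognizes the same value recurring.
ct2 = toy_encrypt(key, 0x1000, b"AUTH_OK_")
assert (0x1000, ct2) in known        # deterministic: identical ciphertext

# A different address changes the ciphertext, but the same address never does.
assert toy_encrypt(key, 0x2000, b"AUTH_OK_") != ct
```

A randomized (nonce-based) scheme would make `ct2` differ on every write, which is what defeats this kind of dictionary-building.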
Seems none of the mainstream TEE technologies available can guarantee privacy simply using chip-to-chip remote attestation.
"Append random bytes" does not describe a fix. What makes these attacks trivial is the lack of MACs; fixing that requires appending the MAC itself (not random bytes) and often a nonce (sometimes random, but specific).
It's possible the much more limited TEEs in Apple and Google phones have stronger memory protections, and some projects are doing phone TEEs, but more advanced attacks should still defeat them. If they use any conventional EC signature in a derandomized way, then even a single bit flip in the right place compromises the entire TEE's key.
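The nonce-reuse danger behind that claim can be shown with a toy Schnorr-style signature over a small prime-order group (illustrative parameters, not a real curve, and the fault model is simplified). In deterministic signing the nonce is derived from the message; a fault that gets two different challenges signed under the same nonce leaks the private key by simple linear algebra:

```python
# Toy Schnorr-style signatures in the order-509 subgroup of Z_1019^*.
# Illustrative parameters only; real schemes use ~256-bit curve groups.
p, q, g = 1019, 509, 4        # p = 2q + 1, g generates the subgroup of order q

x = 271                        # the TEE's long-term secret signing key
pk = pow(g, x, p)

k = 88                         # the nonce; deterministic schemes derive it from the message
R = pow(g, k, p)

# A fault lets two different challenges (normally e = H(R || msg) mod q)
# be signed with the SAME nonce k:
e1, e2 = 123, 456
s1 = (k + e1 * x) % q
s2 = (k + e2 * x) % q

# Both signatures verify normally:
assert pow(g, s1, p) == (R * pow(pk, e1, p)) % p
assert pow(g, s2, p) == (R * pow(pk, e2, p)) % p

# But anyone holding both can solve the two linear equations for x:
#   s1 - s2 = (e1 - e2) * x  (mod q)
recovered = (s1 - s2) * pow((e1 - e2) % q, -1, q) % q
assert recovered == x          # the entire key is compromised
```

This is why a single well-placed bit flip (e.g. in the message buffer between nonce derivation and challenge computation) is enough against derandomized ECDSA/EdDSA-style schemes.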
This does not mean secure TEEs would be impossible, but they require more serious hardware, maybe many processors running an MPC within the TEE itself.
It’s lacking MACs that makes these attacks trivial
That's correct, and I understand the motivation: AES-XTS lets the encrypted memory have the same size as the unencrypted memory, so they don't need to handle page alignment, reserve extra memory for MACs, etc. But I agree with the author that SGX was the right direction, because if they couldn't protect a small isolated piece of software like an SGX enclave, they cannot protect an entire VM.
Android, iPhone, and macOS TEEs only allow managing secrets inside the TEE. They are safer because the attack surface is smaller, and since they are specialized they can afford dedicated memory inside the TEE (the iPhone 5s TEE was defeated anyway). But they aren't like SGX, AMD SEV-SNP, and Intel TDX, which are only supported on server-grade CPUs and promise to run arbitrary Turing-complete applications inside the TEE.
For key management there are good options, but for Turing-complete programs the hope is Armv9 CCA, when it becomes available in commercial CPUs.
Amusingly, if you really had secure TEEs, including security against "rewinding", then you'd obsolete the popular blockchain applications: why use a blockchain if you can have some trusted hardware that remembers your balance locally?
You'd need blockchains for coordination, à la Tor's DirAuths, and for larger-valued accounts that require backups, but all the important logic could run truly p2p without any servers. lol
I think TEE and blockchain use cases don't necessarily overlap; a TEE always requires some level of trust, for example in the chip manufacturer who attests the hardware. AFAIK there aren't good solutions for protecting data during processing: SNARKs add an enormous computation cost and are unsuitable for heavy computation (you can't do private LLM inference, for example), and they can't prevent DRM-protected data from being copied on the local machine, like what Widevine, FairPlay, or PlayReady does. HDCP even requires your display stream to be encrypted so you can watch F1 races on your laptop while being prevented from copying the video off the HDMI port. There are many use cases where you need to run some secret black-box trusted program on a remote machine.
The history of the various TPM and enclave systems has demonstrated poor security. Most of them have been hacked quickly.
From a user perspective, they are not that terrible. Take your home computer: there are a lot of parts that are not secure. The TPM is not great; the BIOS is usually a disaster (you don't know what it is doing, and it does a lot of things early in the boot process, which makes it a significant risk); and even if the OS is open source you cannot be sure you are booting the correct one, since verifying that the partition and the kernel are valid is hard if you do not control the lower part of the stack (BIOS or TPM). The TEE is one of the weaknesses, but it is far from the only one. Windows and macOS are both black boxes that you also cannot really trust, even if one does better than the other. What's more secure: a Linux laptop with a not-too-secure TPM and an unknown BIOS, or a macOS laptop with a TEE and a chain of trust from the hardware up to the OS? Trust the builders, or trust the integrated manufacturer? I have seen a BIOS with 300k lines of C code, very few comments, and zero tests, and that's from a large manufacturer.
If you go to a cloud VM, then you have more layers of risk and some security layers. People overestimate container security and KVM security. You can make the VM fast (PCI passthrough), but then it is less secure. You can add layers of security. You have 20 components in a server with their own closed-source firmware, from memory sticks and hard drive controllers to GPU drivers; in practice you have no idea what's going on there. Security is good in practice because the cloud providers have a vested interest in keeping things secure, and they fix the stack smartly and quickly. There are good explanations here of the secure enclave and secure boot process that my Linux laptop does not provide, for example. How much you are exposed to state-level threats is open for debate, but it is likely a high risk.
Should we use TEEs for blockchain? It all depends on your risk profile. It is better than going without. It can be open source (OpenTitan, for example). If you want to be sure, then no, and the Polkadot / JAM approach is better. It is expensive to do so (we do a lot of re-computation to provide the guarantee), and depending on what you want to do it can be an option. You could also build your own hermetic system similar to a Chromebook.
My personal take on this: don't trust TEEs, Intel or otherwise (AMD, Nvidia, etc.), on computers. On phones, some recent hardware looks significantly better, with no public disclosure of issues; time will tell if they are secure. And lastly, if you do not trust the TEE, you should also not trust the other parts (BIOS, hardware, kernel, OS, and so on), which are easier to compromise.
The history of the various TPM and enclave systems has demonstrated poor security. Most of them have been hacked quickly.
@pierreaubert 100% agreed on all those points. I also advocate for safer and more transparent hardware; I've put money into a company building an open-source 10 Gb/s router (hardware and software), and even they were forced by one component's manufacturer to sign an NDA, otherwise they couldn't support 10 Gb/s speeds; as a result, part of the source code and hardware docs must remain closed. The same was imposed on Ledger's Secure Element. AFAIK it is not possible to build a functional computer using 100% open-source hardware and software, not without big compromises.
IMO the hardest part is not figuring out what the ideal world should look like; what is hard is figuring out what is pragmatically possible given the current reality and constraints.
The Asahi Linux team had a lot of trouble reverse-engineering the M1's Secure Enclave Processor and modifying the ChromeOS Widevine image simply to allow basic stuff like watching Netflix in the browser. Notice it is not as simple as "there's an open-source TEE option...", because we still depend on centralized services supporting this hardware, which they probably won't if the user base is irrelevant, or if they simply decide against it because they see it as a threat. I don't think the average user will choose a safer decentralized alternative where they can't simply watch Netflix.
Should we use TEE for blockchain? It all depends on your risk profile
It isn't as simple as that. Web3 will not get widely adopted if it cannot support all the basic use cases that Web 2.0 does. It is not about replacing smart contracts with TEE programs; it is about supporting the cases where Web3 doesn't have a practical solution yet, such as:
Private AI inference (I don't like AI either, but people will not simply stop using it).
Services/software that require "trusted" hardware.