The big tech news this week is that the FBI is trying to force Apple to unlock a suspect's iPhone. One of the interesting points around this story is that the iPhone in question is an older one, an iPhone 5c. Newer phones have what Apple calls the Secure Enclave, which some say protects against requests of this nature; even if Apple wanted to break into your phone, they couldn't. Which then brings up an interesting question I've seen a lot of people asking: what exactly is the Secure Enclave, and what role does it play here?
A quick note before I get started: my usual approach to writing articles is to dig all the way down to the bits and bytes and then discuss what's there. This is necessarily somewhat different. By its very nature, the Secure Enclave is inaccessible to mere mortals like myself. Instead, most of the knowledge here comes from the information in Apple's iOS Security Guide, plus some general theory. The intent is to extract the relevant bits from that guide, explain them, and think through the implications. This article must assume Apple's information is accurate, since there's no practical way to check from the outside. The result will only be as good as the product of the guide's accuracy and my own understanding, so reader beware.
Also, this is intended to examine the technical aspects of this case and the Secure Enclave technology. No opinions on the merits of the FBI's request or Apple's response or any other political matters are stated or implied. If you want to discuss the political aspects of this case, there are many other places where you can do so.
With that out of the way, let's get started.
Review
The iPhone in question is protected by a passcode, which isn't stored anywhere on the device. The only way to get in is to brute-force the passcode. The computation needed to verify a passcode is deliberately slow, requiring about 80 milliseconds per attempt. Still, brute force cracking is feasible. For a four-digit passcode, trying 10,000 combinations at 80 milliseconds each would take less than 15 minutes. A six-digit passcode would take about a day.
This means that passcodes are not terribly secure. Apple mitigates this with additional delays after too many failed attempts. After a few failed attempts, the iPhone will make you wait before you can try again, starting with a one-minute delay, then a five-minute delay, and beyond. This makes a brute force attack impractical.
One might think that you could work around this if you pulled the flash memory out of the iPhone, copied its contents, then tried to crack it on a fast computer. You wouldn't have any software imposing additional delays. As a bonus, the 80 milliseconds needed for each attempt could go a lot faster with a faster computer, and you could run many attempts in parallel. However, this doesn't work. The data encryption is tied to the hardware, so the brute force attack must be run on the original device.
On the older iPhones without the Secure Enclave, there is a weakness in this system. The escalating delays prevent brute force cracking of the passcode, but these delays are just a feature of the phone's OS. The 80 millisecond key derivation is an inherent property of that computation, but the additional minutes or hours delay after too many failed attempts is just the OS code refusing to accept additional input until some time has passed. The FBI wants Apple to build and install a special OS version that doesn't enforce these delays and which allows passcodes to be submitted electronically. This would allow the FBI to crack the passcode within a few minutes or hours. iPhones won't accept OS updates from anyone besides Apple, so this system is secure from outside attackers, but it's not secure against Apple itself.
Note that this assumes the user has a numeric passcode. A good complex password is still secure even on these older iPhones. An eight-character alphanumeric password, for example, would take about 550,000 years to try all possible combinations.
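To make those numbers concrete, here's a quick back-of-the-envelope calculation. The 80 milliseconds per attempt comes from Apple's guide; the passcode lengths and alphabet sizes are just the example cases above, and the function name is mine.

```swift
import Foundation

// Rough brute-force time estimates at 80 ms per passcode attempt.
let secondsPerAttempt = 0.08

func bruteForceSeconds(symbols: Double, length: Double) -> Double {
    pow(symbols, length) * secondsPerAttempt
}

let fourDigitMinutes = bruteForceSeconds(symbols: 10, length: 4) / 60                  // ≈ 13.3 minutes
let sixDigitHours    = bruteForceSeconds(symbols: 10, length: 6) / 3600                // ≈ 22.2 hours
let eightCharYears   = bruteForceSeconds(symbols: 62, length: 8) / (3600 * 24 * 365)   // ≈ 554,000 years
```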
Unreadable UIDs
Each iOS CPU is built with a 256-bit unique identifier or UID. This UID is burned into the hardware and not stored anywhere else. The UID is not only inaccessible to the outside world, it's inaccessible even to software running at the highest privilege levels on the CPU. Instead, the CPU contains a hardware AES encryption engine, and the only way the UID can be used is by loading it into that engine as a key and asking it to encrypt or decrypt data.
Apple uses this hardware to entangle the user's passcode with the device. By setting the device's UID as the AES key and then encrypting the passcode, the result is a random-looking bunch of data which can't be recreated anywhere else, since it depends on both the user's passcode and the secret, unreadable, device-specific UID. This process is repeated over many rounds using the PBKDF2 function, feeding each round's output back into the next round's input, performing the heavy computation needed to force 80 milliseconds of work for each passcode verification attempt.
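Here's a conceptual sketch of that tangling. The real derivation runs inside the hardware AES engine with the burned-in UID as its key, neither of which software can touch, so a locally generated SymmetricKey and HMAC-SHA256 stand in for them here, and the round count is a placeholder you would tune to cost roughly 80 milliseconds.

```swift
import Foundation
import CryptoKit

// Stand-in for the device UID: the real one is burned into the silicon and
// never visible to software, not even to the kernel.
let deviceUID = SymmetricKey(size: .bits256)

// Repeatedly entangle the passcode with the (stand-in) UID. Each round's output
// feeds the next round's input, so the work can't be skipped or parallelized,
// and the result can't be recomputed on any other device.
func deriveKey(fromPasscode passcode: String, rounds: Int) -> Data {
    var state = Data(passcode.utf8)
    for _ in 0..<rounds {
        state = Data(HMAC<SHA256>.authenticationCode(for: state, using: deviceUID))
    }
    return state
}

// The round count would be chosen so this takes about 80 ms on the device.
let passcodeKey = deriveKey(fromPasscode: "1234", rounds: 50_000)
```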
Secure Enclave
The Secure Enclave was introduced with Apple's A7 system on a chip. All iPhones starting with the 5s have it; the 5/5c and below do not. On the iPad side, everything from the iPad mini 2 and the iPad Air onward has it.
The Secure Enclave is a separate CPU within the A7 (or later) that's responsible for low-level cryptographic operations. It doesn't run iOS or anything resembling iOS, but instead runs a modified L4 microkernel. L4 is intended to run as little code as possible in the kernel, which should theoretically make the system more secure by reducing the amount of potentially buggy code running with elevated privileges. The Secure Enclave uses a secure boot system to ensure that the code it runs can't be modified, and it uses encrypted memory to ensure that the rest of the system can't read or tamper with its data. This effectively forms a little computer within the computer that's difficult to attack.
The Secure Enclave contains its own UID and hardware AES engine. The passcode verification process takes place here, separated from the rest of the system. The Secure Enclave also handles Touch ID fingerprint processing and matching, and authorizing payments for Apple Pay.
The Secure Enclave performs all key management for encrypted files. File encryption applies to nearly all user data. Most system apps use it, and third party apps all use it by default if running on iOS 7 or later. Each encrypted file has a unique key, which is in turn encrypted with another key derived from the device UID and the user's passcode. The main CPU can't read encrypted files on its own. It must request the file's keys from the Secure Enclave, which in turn is unable to provide them without the user's passcode.
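Here's a rough sketch of that per-file key wrapping. This is not Apple's Data Protection implementation (which layers in per-class keys and runs inside the Secure Enclave); it just illustrates the idea that each file's key is stored only in encrypted form, and passcodeDerivedKey is a stand-in for the key derived from the passcode and UID.

```swift
import Foundation
import CryptoKit

// Stand-in for the key the Secure Enclave derives from the passcode and device UID.
let passcodeDerivedKey = SymmetricKey(size: .bits256)

// Every encrypted file gets its own random key...
let fileKey = SymmetricKey(size: .bits256)
let fileKeyData = fileKey.withUnsafeBytes { Data($0) }

do {
    // ...which is only ever stored wrapped (encrypted) under the passcode-derived
    // key. The main CPU holds this blob but can't unwrap it on its own.
    let wrappedFileKey = try AES.GCM.seal(fileKeyData, using: passcodeDerivedKey).combined!

    // Unwrapping requires the passcode-derived key, i.e. the Secure Enclave's help.
    let box = try AES.GCM.SealedBox(combined: wrappedFileKey)
    let unwrapped = try AES.GCM.open(box, using: passcodeDerivedKey)
    print(unwrapped == fileKeyData)   // true
} catch {
    print("key wrapping failed:", error)
}
```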
The escalating delays for failed passcode attempts are enforced by the Secure Enclave. The main CPU merely submits passcodes and receives the results. The Secure Enclave performs the checks, and if there have been too many failures it will delay performing those checks. The main CPU can't speed things along.
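The enforcement amounts to something like the sketch below. The specific numbers are illustrative, not Apple's schedule; the point is that this check runs inside the Secure Enclave, so nothing running on the main CPU can shorten the waits.

```swift
import Foundation

// Illustrative escalating-delay schedule, not Apple's actual numbers or code.
func requiredDelay(afterFailedAttempts attempts: Int) -> TimeInterval {
    switch attempts {
    case ..<5:  return 0            // a few free attempts
    case 5:     return 60           // one minute
    case 6:     return 5 * 60       // five minutes
    case 7, 8:  return 15 * 60      // fifteen minutes
    default:    return 60 * 60      // an hour per attempt after that
    }
}
```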
Implications
What does the Secure Enclave mean for the security of the system as a whole?
On most systems, if you can get into the OS kernel then you own the entire system. The kernel can do anything. It can read and write every byte of system memory, it can control all of the hardware, and it's in charge of all of the application code the system runs, which it can subvert at will.
Since the Secure Enclave is a separate CPU mostly cut off from the rest of the system, it isn't under the kernel's control. On an older iPhone, owning the kernel means owning everything done by the system, including the passcode verification process. With the Secure Enclave, no matter who is in control of the main CPU, no matter what code is in the OS running on it, the basic security functions remain intact.
This system essentially allows arbitrary code to be placed in front of cryptographic functions in a way that can't be bypassed. It's a bit like a super-sized version of the 80 millisecond computation time for passcode attempts. That delay is enforced by using a calculation that inherently takes that much time, which means it can't be bypassed, but there are limits to what you can build out of inherently slow calculations alone. For example, you can't add a one-minute delay to the fifth attempt, because raw cryptographic constructs have no concept of a "fifth attempt." With the Secure Enclave, that one-minute delay can be enforced: even with the rest of the system subverted, the delay code in the Secure Enclave remains intact.
Software Updates
The iPhone 5c (and other pre-A7 iPhones) can be subjected to a brute force attack by creating a new OS without the artificial delays, loading that onto the device, and then testing passcodes as fast as the hardware can compute. The Secure Enclave prevents this. But what about carrying out the same kind of attack one level further down, by loading new software into the Secure Enclave which eliminates its artificial delays?
Apple's guide contains this discussion of software updates for the Secure Enclave:
"It utilizes its own secure boot and personalized software update separate from the application processor."
That's it! There are no details whatsoever. What is the actual situation? Here, we must enter the realm of speculation, because as far as I can dig up there is no information out there about how Secure Enclave software updates actually work. I see two possibilities.
The first possibility is that the Secure Enclave uses the same sort of software update mechanism as the rest of the device. That is, updates must be signed by Apple, but can be freely applied. This would make the Secure Enclave useless against an attack by Apple itself, since Apple could just create new Secure Enclave software that removes the limitations. The Secure Enclave would still be a useful feature, helping to protect the user if the main OS is exploited by a third party, but it would be irrelevant to the question of whether Apple can break into its own devices.
The second possibility is that the Secure Enclave's software update mechanism does something more sophisticated to protect against an attack even from Apple. The whole idea of the Secure Enclave is that it enforces additional rules that can't be bypassed from the outside. This could include additional rules about its own software updates. Given the goal of protecting the user's data, it would make a lot of sense for the Secure Enclave to refuse to apply any software update unless the device has already been unlocked with the user's passcode. For a case where the user has forgotten the passcode and wants to wipe the device and start over, the Secure Enclave could allow updates but delete the master keys protecting the user's data.
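Purely as speculation, the second possibility might look something like the sketch below. None of these names or checks come from Apple; the helper functions are hypothetical placeholders.

```swift
import Foundation

// Speculative sketch of a Secure Enclave update policy; not Apple's code.
// These helpers are hypothetical placeholders.
func verifyAppleSignature(on firmware: Data) -> Bool { true }
func eraseMasterKeys() { print("master keys erased; user data is now unrecoverable") }
func installFirmware(_ firmware: Data) { print("firmware installed") }

func applySecureEnclaveUpdate(_ firmware: Data, deviceUnlockedWithPasscode: Bool) {
    // Updates still have to be signed by Apple, as on the rest of the device.
    guard verifyAppleSignature(on: firmware) else { return }

    // The speculative extra rule: without the user's passcode, the update can
    // proceed only after the master keys protecting user data are destroyed.
    if !deviceUnlockedWithPasscode {
        eraseMasterKeys()
    }
    installFirmware(firmware)
}
```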
Which one is true? For now, we don't know. Apple put in a lot of effort to protect user data, and it would make a lot of sense for them to take the second approach, where updates wipe the device if applied without the user's passcode. This would be fairly easy to implement and shouldn't affect the usability of the device. Given Apple's public stance on user privacy, I would find it extremely weird if the Secure Enclave's software update mechanism wasn't implemented in this way. On the other hand, Tim Cook's public letter implies that all iPhone models are potentially vulnerable, so perhaps they haven't taken this extra step.
When it comes to the matter of law enforcement forcing Apple to attack an iOS device, this is the key question. If Secure Enclave updates are secured even against Apple, then the FBI's ability to make these requests stops at the iPhone 5s. If they're not, then even the latest 6s could potentially be attacked. I am deeply interested in learning the answer to this question.
Conclusion
The Secure Enclave adds an additional line of defense against attacks by implementing core security and cryptography features in a separate CPU within Apple's hardware. This separate CPU runs special software and is walled off from the rest of the system, placing it outside the control of the main OS, including the kernel. The Secure Enclave implements device passcode verification, file encryption, Touch ID recognition, and Apple Pay, and enforces security restrictions such as the escalating delays applied after excessive incorrect passcode attempts.
The iPhone 5c that the FBI is asking Apple to break into predates the Secure Enclave, and so can be subverted by installing a new OS signed by Apple that removes the artificial passcode delays. Whether newer phones can be similarly subverted depends on how the Secure Enclave's software update mechanism is implemented. If software updates erase the master encryption keys when installed without the passcode, then newer iPhones can't be attacked in this way, even by Apple. If updates are allowed without the passcode and without erasing keys, then the Secure Enclave can potentially be subverted just as older iPhones can be. As far as I'm able to determine, whether this is the case remains an open question.
That's it for today! Come back again for more exciting adventures, probably with fewer inaccessible and opaque CPUs. Friday Q&A is driven by reader ideas, so if you have something you'd like to see covered here, please send it in!
Comments:
From what I've read, Android has nothing like this. If someone gets your device and they know what you're doing, they get everything.
There was a report a while back when some hacker-for-hire company had its data leaked. They said they could hack any Android or jailbroken iOS device ... but not a standard iOS device.
Apart from the size, one interesting security feature of the seL4 implementation (which I assume Apple are using) is that the code has been formally proven correct.
ech: That sort of hardware stuff is way outside my expertise, but speculation is fun, so.... I'd guess that it is absolutely possible, but the question is how much money it would cost. This is definitely not a weekend project for a couple of guys at the lab. You'd have to remove the packaging from the chip without destroying the guts, scan the chip to figure out where the UID is stored, then scan that area with an electron microscope. The individual components are 28 nanometers wide on the A7, and even smaller on newer ones, so everything is delicate and tiny. I wouldn't be surprised if the part that encodes the UID is somehow built to make it difficult to remove the packaging without destroying it.
There's some discussion about it here:
https://www.theiphonewiki.com/wiki/GID_Key
That's mostly about the GID key, not the UID key, but the principle is the same. The GID key is another key baked into the hardware in a way that makes it difficult or impossible to retrieve, like the UID, but the GID is shared across all devices of a given model.
asdf: Apple's paper says they're using L4 but doesn't mention seL4. I have no idea if that's because they're not using seL4 or if that's just loose terminology.
dkasper: I saw that, and it's pretty convincing. I think there's still some room for key erasure when applying updates given that information. For example, the Secure Enclave could store a secure checksum of its firmware (again, assuming the SE has any of its own storage at all) and the hardcoded secure boot sequence could erase the master key (again, assuming there's storage in the SE) if the checksum doesn't match but the firmware still has a valid signature. But this is just a story I'm making up about how it could work, it says nothing about how it actually does. Right now I'm leaning towards the idea that it doesn't erase anything on update, but I'm still not sure.
The premise of this security design is that these 256-bit UIDs are truly random (and nobody keeps a list that matches them to the phone's S/N).
Would it be possible to (indirectly) test their randomness by setting the passcode to e.g. "0000", encrypting some specific data with the hardware AES engine and then collect the encrypted data from many iPhones and check for duplicates? I, for one, would participate in such a test if there was an app for that.
Of course, if there are no duplicates the above mentioned list could still exist.
There was actually an example of something like this from about a year ago:
https://www.intego.com/mac-security-blog/iphone-pin-pass-code/
This takes advantage of a vulnerability where the phone didn't write out the information about the failed attempt right away, so by cutting power at just the right moment, it bypassed the artificial delay. This vulnerability has since been fixed.
delay: As you can see, a vulnerability like you describe did exist. I imagine they've fixed it now; Apple's guide actually mentions this briefly.
I imagine it writes out information about the failure before reporting it back to the main CPU, so you can't know to cut the power until it's too late.
kaw: I'm not sure if you can get access to the AES engine normally. I'd assume you could access the one in the main CPU if you jailbroke. I don't think you could access the one in the Secure Enclave at all (barring a vulnerability in its code).
I'd be surprised if there were any duplicates. Even with bad randomness, it's more likely that they're all different, but more predictable. For an extreme case, imagine if the UIDs were just a counter starting from some random value. Totally predictable, but all unique, and impossible to detect from the outside.
And as you say, they could be random but the manufacturer could keep a list of all of them.
I'd love to know exactly how the UIDs are generated and written into the chips. I'd hope it uses some sort of inherently random physical process which doesn't require the information to exist outside the chip, but I really don't know. Because of its very nature, we have to trust Apple and their chip suppliers that it works the way they say.
It also needs enough memory to keep its own RTC, to prevent replays of saved RTC values interfering with things like password intervals and cert expirations.
This is all in a regime where you trust the Flash and RAM that's connected. It's completely possible that if the Flash (or the intervening kernel and APIs) is compromised, when the Enclave demands a key be deleted from the FS, the kernel simply lies to the Enclave, saying it did when it really didn't. On an Apple device, though, the boot firmware and the hardware configuration are signed by an Apple public key cert, and the Secure Enclave needs to do this validation (I think!), and it has to do it at a stage in boot when the Flash isn't available. (This is related to the whole Error 53 business. If the security infrastructure thinks a piece of its trusted hardware has been fiddled with during boot, it dumps out.)
- Not deleting keys: security failure, makes brute force attack viable.
- Deleting keys: anyone can wipe your device.
Possibly, the first option is better for anyone who needs to hide some info, but the second one is worse for most people.
They could prevent wiping the device without the passcode, but then you've made it so that forgetting the passcode permanently bricks your device.
Let's say they have a good deal of confidence in their secure software, but not enough to make it automatically wipe your phone when updating. Why not just, when there is a secure enclave update, make the person authenticate, then decrypt their drive with the old software (perhaps writing it to the drive encrypted by the method that the 5c uses), then update, then reencrypt with the new? Sure, it will take a while, but it will not be too disruptive and it won't compromise security except while it is actually happening... and, given apps that can hold the iPhone unlocked for as long as they want, and given the apparently-quite-secure way the iPhone has of erasing flash after its data is deleted, I don't see this as inherently posing a security issue that isn't there already.
I don't understand what problem your decrypt-then-reencrypt procedure is solving. If the user enters the passcode then the Secure Enclave can just update without wiping anything in the first place. The question is what happens if you try to update it without the passcode.
There has to be a way to erase the phone and set a new passcode without the passcode, but I don't see why there has to be a way to update the secure enclave software while leaving the data intact without the passcode. Obviously you do: what is it?
(This is why my response made no sense to you... obviously, I was completely mistaking your argument in the first place.)
anurag: Yes, that's the sort of thing you'd have to do. It's hard to estimate the difficulty; Apple's chips could be much harder to get into than that one, or much easier. In this particular case there's also the problem that you need to break into one particular device, not just any random iPhone. That means that your technique needs to be pretty reliable, because if you break the chip while trying to get into it, you're screwed.
"The Secure Enclave also handles Touch ID fingerprint processing and matching, and authorizing payments for Apple Pay."
But it is the Secure Element, not the Secure Enclave, that is used for authorizing payments for Apple Pay.
So the statement is at least too broad.
"On iPhone and iPad, the Secure Enclave manages the authentication process and enables a payment transaction to proceed."
Both the Secure Element and the Secure Enclave participate.
Kamil Borzym: That's an interesting question. Perhaps TrustZone wasn't available early enough and Apple went their own way. Or perhaps it's ultimately less isolated from the rest of the system. I'm not familiar enough with it to say.
Secure Element: The Secure Element is an industry-standard, certified chip running the Java Card platform, which is compliant with financial industry requirements for electronic payments.
Secure Enclave: On iPhone and iPad, the Secure Enclave manages the authentication process and enables a payment transaction to proceed. It stores fingerprint data for Touch ID.
No one really knows if Apple can bypass it, because Apple's statements about this are not clear.
However, it's good to know that I'm safe, at least from third party users.
It sounds very secure: the combination of UID + PASSCODE + SALT, plus the timers on failed attempts.
Does the Secure Enclave contain some sort of secure non-volatile memory? If not, I'm not sure how it can delete something in a verifiable way.
I guess you could make the passcode hash dependent on the passcode itself, the internal secure secret key, and some property of the current OS (e.g. a checksum). Then when you do a software update, the Secure Enclave would have to recalculate the passcode hash...