Tags: iphone rant
In part two of my potentially ongoing series about the iPhone SDK, I want to discuss the platform's potential for malware.
As I discussed previously, an iPhone (or iPod Touch) will not run any software that has not been signed by Apple. Obviously there are ways around this, but I want to focus on the official firmware. Behind this limitation is Apple's desire to keep bad software off the platform and to exert control. A surprising number of third parties have welcomed this restriction with open arms.
The driving force behind this seems to be a belief that such a restriction is all that stands between the iPhone and a rampaging horde of evil software just waiting to steal your personal information and turn your telephone into a spam zombie. All you have to do is Google for iphone sdk malware to see how prevalent this attitude is.
I don't think the fear is justified, and I also don't think the restrictions will help nearly as much as people think. But first, it's important to distinguish between different kinds of malware, because there is one kind that code signing may stop and one kind that it will not.
Trojan Horses
The original Trojan Horse was a big wooden statue presented as a gift. The Trojans brought it into their city to celebrate, but it was filled with soldiers waiting for the opportunity to kill everyone. Modern Trojan Horses work pretty much the same way. They present themselves as a shiny app, such as an animated postcard or a game, but behind the scenes they do something naughty.
This is basically the standard case for code signing on the iPhone. Apple, the theory goes, will test all apps before they are approved for the App Store. Any app that behaves badly will be caught and refused. Since you can't get software on the device except through the App Store, you can't get any bad apps. Problem solved.
I'm not sure if things will actually happen this way. Unless the store has an extremely limited selection, the number of apps in the store is going to outstrip Apple's testing resources. A lot of developers have theorized that Apple will do some basic testing but will otherwise rely on their ability to revoke applications after the fact, rather than catching problems up front. Even with extensive testing it would still be easy to hide the evil bits. It's not terribly hard to wrap the evil code in a timer that prevents it from running for a month or two until you're sure that Apple's testing is done. Unlike the Greeks in the Trojan Horse, malicious code won't starve and it can wait as long as it needs to in order to escape detection.
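To make the trick concrete, here is a minimal sketch in C of how such a time bomb might look; the date, strings, and behavior are entirely hypothetical and just illustrate the idea:

    #define _XOPEN_SOURCE 700   /* for strptime on some systems */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical activation date, chosen to fall safely after any
       review period. Purely illustrative. */
    static const char *kActivationDate = "2008-09-01";

    static int past_activation_date(void)
    {
        struct tm when;
        memset(&when, 0, sizeof(when));
        if (strptime(kActivationDate, "%Y-%m-%d", &when) == NULL)
            return 0;
        return difftime(time(NULL), mktime(&when)) > 0;
    }

    int main(void)
    {
        if (past_activation_date())
            puts("evil payload would run here");  /* hidden until the clock runs out */
        else
            puts("behaving innocently");          /* all a tester ever sees */
        return 0;
    }

Nothing about this looks suspicious in casual testing; the app simply behaves itself until the chosen date has passed.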
Worms, Viruses, and other Exploits
But the scary scenario isn't Trojan Horses. Remote exploits are scary. In this scenario, you visit a malicious web page and suddenly that page has taken over your phone. Or you receive a specially crafted instant message which breaks into your chat application. Or an attacker simply finds your phone on the internet, sends it a specially crafted packet, and then has the run of the place.
My understanding of code signing as it is currently implemented in OS X is that it will not do anything to prevent these sorts of attacks. This is because, fundamentally, it's difficult to tell the difference between an application which is behaving as it's intended and an application which has been compromised. In both cases, the application receives some data and then performs an action based on it. The only difference is that in the compromise scenario the application is no longer behaving as intended, but how do you (and by "you" I mean a computer operating system) know the author's intent?
In an exploit like this, the attacker takes advantage of a vulnerability in an application which allows him to overwrite some memory, free the same block of memory twice, or do something else naughty. He does this in such a way as to make the application start doing his bidding, then feeds the app some code and has it transfer control to that newly loaded code. This all happens in memory, long after any signature verification has taken place, so the fact that the newly loaded exploit code is unsigned doesn't matter at all.
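To make the mechanics concrete, here is a deliberately broken toy program in C. This is a generic illustration of the class of bug, not the actual flaw in any shipping software:

    #include <stdio.h>
    #include <string.h>

    /* A classic stack overflow: the fixed-size buffer trusts the length
       of attacker-controlled input. Writing past 'name' can clobber the
       saved return address, so that when the function returns, execution
       jumps wherever the attacker's bytes say. */
    static void greet(const char *untrusted_input)
    {
        char name[32];
        strcpy(name, untrusted_input);   /* no bounds check: the bug */
        printf("Hello, %s\n", name);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);   /* input longer than 32 bytes corrupts the stack */
        return 0;
    }

The signature on the app's binary was checked when it launched and is perfectly valid; nothing about that check prevents the running process from being led astray afterward.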
This scenario is not merely theoretical. Although I know of no instance in which it was used for evil, this very technique is used by the site jailbreakme.com to jailbreak an iPhone, ironically so that third-party applications can be loaded on it, bypassing the code signature checking.
Jailbreakme.com works by sending the iPhone a specially crafted TIFF image. This image exploits the image decoding software, allowing the site to execute arbitrary code on the user's device. In this case it's entirely benign; the TIFF is used to enable the loading of third-party software and to patch the very hole it used to gain access in the first place. But it could just as easily start snooping for credit card numbers or offloading the contents of your address book.
The exploit used by that site is now fixed, so it only works on older firmware revisions. But exploits of one kind or another are virtually guaranteed to exist in any firmware revision. It's said that the only truly secure computer is one which is disconnected from the network, turned off, encased in solid concrete, and sunk to the bottom of the ocean. And even then, you never know when the CIA might go dig it up.
This is the scary kind of malware. It's scary because it strikes when you should be safe. Surfing the web or receiving messages should never result in getting your phone hijacked by some guy in Nigeria, whereas when you install a weird untrusted application you should at least have some idea that you may be getting more than you bargained for. Code signing does essentially nothing to stop this kind of attack; at best it limits the damage slightly, in that the injected code is stuck running within the host app rather than writing out a daemon that can keep going behind the scenes.
Cheer Up!
Despite all the doom and gloom above, I don't think the platform is in any particular danger. It's no more vulnerable than any other computer operating system, and much better off than some. The only platform currently in existence which really plays host to a great deal of malware is Windows, and that's mainly because it's really, terribly bad at this sort of thing. But while the iPhone code signing requirements may well raise the average quality of the available applications, they come at the cost of considerable freedom. Apple needs to at least make this optional, even if it's on by default. Switching it off isn't going to suddenly cause bad things to happen, much less turn the platform into a "haven for malware" as many seem to fear.
I'll take care of protecting myself from evil, just let me decide what I want to put on my hardware.
Comments:
Trojans: These are pretty common on the Windows side. They tend to be passive, but get spread because they give the user enough gratification that they want to show their friends. They get spread around by e-mail, generally, infecting people as they go. A nastier variant will take it upon itself to e-mail your friends for you once you run it, speeding the process along. It still requires manual intervention to run it after it's received. Requiring code to be signed by Apple and obtained through the App Store will pretty much stop this. If it's possible to sneak a trojan through Apple's checks, then this will show up as people e-mailing each other telling them to download these apps because they have cute dancing gnomes or whatever the hook is.
Passive exploits: This is the jailbreakme.com approach. You leave some maliciously crafted data around where a user might try to load it in a program which can be exploited. This can't spread at all, except for gray areas like exploiting communications apps which let you talk to your friends: mail, chat, twitter, and other such services. Code signing does nothing against this because, as noted in the post, you're taking over a trusted app rather than trying to load a new, untrusted app.
Active exploits: This is where you find a vulnerability on a machine which can be exploited directly over the internet. Many famous Windows worms operated this way, and they can be tremendously destructive. The iPhone should be quite secure if it's built properly. A quick scan of my iPod Touch using nmap shows no open ports, as it should be. Hopefully no such exploits will be found. Again, code signing doesn't help because you're taking over a trusted app.
Assuming that there are no active exploits discovered, the scariest scenario is probably where an exploit is found in Mail, or in a popular chat program. This is a very real possibility and code signing, as far as I know, won't help against it.
Casey: Good to see you around here. The question of holding Apple liable is an interesting one, and I can certainly see it happening. For the record I have no objection to Apple vetting and restricting the behavior of apps available through the official App Store. All I want is a way to bypass the app store, both for my own personal projects and so I can use software that Apple doesn't approve of. Since Apple has yet to be sued (to my knowledge!) for any of the horrible software which exists on the Mac, I can't imagine anyone would try to hold them liable for third-party software obtained and loaded directly without passing through Apple.
No idea if the iPhone has this support (and, if it does, whether it will make use of it in 2.0), but if it does, signature verification is not restricted to initial code loading time as you are suggesting.
If all application code and all the OS library code is signed and has the kill option set, a large class of attacks becomes impossible, as the OS can enforce that all code executed in userspace needs a valid signature at runtime. That is, if a successful code injection into an app were to take place, e.g. via an image format decoding bug, the app would die upon jumping into the injected code.
Of course this would not prevent running harmful code that _is_ signed, like the Trojans you mention...
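To sketch where such a runtime check would bite, here is a minimal C example using the standard mprotect call. The enforcement itself would live in the kernel; this just marks the point at which injected, unsigned bytes must be made executable and could be refused (or, with a kill option, could terminate the process):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        /* Allocate a writable page and copy some "injected" bytes in. */
        unsigned char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                  MAP_ANON | MAP_PRIVATE, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        unsigned char payload[] = { 0xC3 };   /* stand-in for shellcode */
        memcpy(buf, payload, sizeof(payload));

        /* Ask the kernel to make the page executable. A system that
           verifies signatures at runtime could refuse right here, or
           kill the process the moment it jumps to these bytes. */
        if (mprotect(buf, page, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");
            return 1;
        }
        puts("unsigned bytes are now executable; an enforcing kernel would object");
        munmap(buf, page);
        return 0;
    }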
Fine-grained privileges are an excellent idea, and I look forward to seeing them not just on the iPhone but on my regular computers as well. There's no reason a web browser should be able to destroy the contents of my home directory or read my private documents, and if access controls are in place so that it can't, the sort of exploit that jailbreakme.com uses will become much less destructive.
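As a rough sketch of what this can look like in practice, Leopard already ships a coarse version of it in the sandbox facility. The profile names below come from that API; whether anything like it reaches the iPhone is pure speculation on my part:

    #include <sandbox.h>
    #include <stdio.h>

    int main(void)
    {
        char *err = NULL;

        /* Restrict this process to pure computation: no file access,
           no network. A compromised process then has little to steal. */
        if (sandbox_init(kSBXProfilePureComputation, SANDBOX_NAMED, &err) != 0) {
            fprintf(stderr, "sandbox_init failed: %s\n", err);
            sandbox_free_error(err);
            return 1;
        }

        /* From here on, attempts to open files should fail. */
        if (fopen("/etc/passwd", "r") == NULL)
            puts("file access denied, as intended");
        return 0;
    }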
If such a thing is enabled, then the options are reduced to convincing the code to do things for you without handing it any new machine code. Any application which contains an interpreter may be vulnerable to this sort of thing, and attacks like SQL injection would still be possible, but it would certainly limit the options for mayhem.
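SQL injection is easy to demonstrate and just as easy to prevent. Here is a hedged sketch in C using SQLite (which the iPhone happens to ship); the table and data are hypothetical. Binding the untrusted input as a parameter means it is always treated as data, never as SQL:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Safe lookup: the '?' placeholder is filled via sqlite3_bind_text,
       so input like "x' OR 'a'='a" cannot rewrite the query. Splicing
       the input directly into the SQL string would be the bug. */
    static void lookup(sqlite3 *db, const char *untrusted)
    {
        sqlite3_stmt *stmt;
        const char *sql = "SELECT secret FROM cards WHERE owner = ?";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
            return;
        sqlite3_bind_text(stmt, 1, untrusted, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }

    int main(void)
    {
        sqlite3 *db;
        sqlite3_open(":memory:", &db);
        sqlite3_exec(db, "CREATE TABLE cards (owner TEXT, secret TEXT);"
                         "INSERT INTO cards VALUES ('alice', '1234');",
                     NULL, NULL, NULL);
        lookup(db, "x' OR 'a'='a");   /* bound as a literal: matches nothing */
        sqlite3_close(db);
        return 0;
    }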