Apple’s iOS mobile platform, like its desktop-and-notebook sibling Mac OS X, has garnered a reputation for strong security, sleek design and user friendliness. However, as new holes in iOS's security crop up at an increasing pace, this reputation may be exposed to less-than-friendly fire.
Paul Morris of tech site Redmond Pie jokingly dubbed February “the month of the iOS security bugs,” referring to a glut of newly discovered cracks and exploits. Several such exploits were discovered and reported by Ade Barkah on his tech blog, Peekay, including a method of gaining access to contacts and photos — and even making outgoing FaceTime calls — on a locked iPhone with Voice Dial disabled.
Remarkably, iOS has not been the victim of the regular cyberattacks suffered by Windows or rival mobile platform Google Android, even though teams of benign hackers constantly work to subvert its security in order to “jailbreak” iPhones and iPads.
Give Apple credit
“[The] iPhone has been out now for five years,” said Mikko Hypponen, chief research officer at F-Secure, a digital-security firm headquartered in Helsinki, Finland.
“It’s one of the most popular and most visible devices,” Hypponen pointed out, but added there are still no significant malware threats targeting Apple’s mobile platform. “We really have to give them credit for it.”
But has Apple’s narrow escape so far been based on technology, or on market realities? Steve Santorelli, a researcher at Team Cymru, a security firm based in Lake Mary, Fla., said there's a concrete reason criminals focus on the Windows environment.
More Apple means more malware
When there are limited resources and limited time to develop pieces of malware, said Santorelli, “you’re going to put [them] where you get the maximum install base.”
As more business migrates to mobile devices and Apple’s market share increases, the quantity of mobile- and specifically iOS-targeted malware will rise.
“We’re already seeing the start of it,” said Santorelli, who has previously worked at Microsoft and in Scotland Yard's Computer Crime Unit.
Citing a report that Russian security research firm Kaspersky Lab issued in January, Santorelli offered an example in which 13 different rogue Android apps were traced back to the same botnet, each app seemingly benign but actually designed to compromise the user’s system and data.
With Android already falling prey to hackers, iOS may not be far behind.
“Will there be real malware on iPhones in the future?” Hypponen asked. “Probably.”
But, he added, “I think the new devices like iPhones and Windows Phone 7s are proof that we can learn from our past mistakes,” mistakes, he said, that helped “create devices that are much more secure in real-world use than traditional computers are.”
Apple plays the blame game
Apple’s past mistakes may lie in its own hubris.
In early March, Google awarded $60,000 to Russian university student Sergey Glazunov for exploiting three previously undiscovered flaws in the Chrome browser at the Pwnium competition in Vancouver, British Columbia. The defeat was no black mark on the browser's strong security reputation; Google's team took advantage of the revelation to patch the flaws in less than 24 hours.
In contrast, when famed Mac hacker and security researcher Charlie Miller took on iOS security last November by getting his sneaky InstaStock app through the App Store’s tough approval and code-signing process — and subsequently demonstrating the app’s ability to exploit an unsecured browser loophole to take control of an iPhone — Apple’s response was to ban Miller from the iOS Developer Program, forbidding him from creating any iOS apps for a year.
In the same vein, when gadget blog Gizmodo exposed the infamous iMessage bug that let an iPhone capture messages meant for another phone after “marrying” the other phone’s SIM card, Apple deemed it a procedural mistake on the part of an Apple Store employee.
However, it seems that more fluid security — such as effectively “divorcing” the device from the SIM card upon removal, unless otherwise instructed — would render such mistakes harmless.
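The policy described above can be sketched in a few lines of code. This is purely illustrative: the class and method names below are hypothetical and do not reflect Apple's actual iMessage implementation, only the general idea of invalidating a session when the SIM it was bound to is removed or swapped, unless the user explicitly re-authorizes it.

```python
from typing import Optional


class MessageSession:
    """Hypothetical sketch: a messaging session bound to the SIM that set it up.

    Not Apple's API; just an illustration of the 'divorce on SIM removal'
    policy suggested in the article.
    """

    def __init__(self, sim_id: str):
        self.bound_sim = sim_id
        self.active = True

    def on_sim_changed(self, new_sim_id: Optional[str],
                       user_approved: bool = False) -> None:
        # If the SIM is removed (None) or replaced with a different one,
        # deactivate the session unless the user explicitly keeps it alive
        # ("unless otherwise instructed").
        if new_sim_id != self.bound_sim and not user_approved:
            self.active = False

    def can_receive(self) -> bool:
        return self.active


# The SIM is moved into another phone: the original binding is dropped,
# so messages for the SIM's number no longer reach the old device.
session = MessageSession(sim_id="SIM-A")
session.on_sim_changed("SIM-B")
print(session.can_receive())  # False
```

Under this sketch, the Apple Store employee's procedural mistake becomes harmless: the moment the customer's SIM leaves the demo phone, that phone stops receiving the customer's messages.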
A failure to accept fallibility is the surest predictor of continued mistakes. Apple is far from the only offender when it comes to reluctance to admit security bugs. A system without bugs is a developer’s dream, but will likely remain a fantasy.
Tim Armstrong of Kaspersky Lab gave an unrelated example when he blogged last month about two serious security problems in Google Wallet, which had been touted as having an impossible-to-crack hardware chip called a Secure Element embedded in Android phones.
“While the Secure Element technology offers a lot of security through encryption of your data,” concluded Armstrong, “if the interface can be beaten, all that math goes to waste.”