Updated July 18, 2012, 5:24 p.m. ET

Google has beefed up security in the latest version of Android, making it more difficult for malicious hackers to exploit vulnerabilities in the mobile operating system.

Android 4.1 Jelly Bean uses full address space layout randomization (ASLR), explained security researcher Jon Oberheide in a blog posting Monday (July 16).

ASLR is a form of memory protection that makes it very difficult for a hacker or malware to predict exactly where in a smartphone's memory the parts of a running application or process "live."
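The idea can be sketched in a few lines of Python. The loader below is purely hypothetical (the address range, page size and function names are illustrative, not Android's); the point is only that a fresh, randomly chosen base address makes an address hardcoded from a previous run useless.

```python
import random

PAGE = 0x1000  # 4 KiB pages; load addresses are page-aligned

def load_library() -> int:
    # With ASLR, the loader picks a fresh random base each time the
    # process starts (range and loader are illustrative only).
    return random.randrange(0x1000_0000, 0x7fff_0000, PAGE)

# An attacker who hardcodes the base address observed in one run...
observed = load_library()
# ...has no reason to expect to find the code there on the next run.
next_run = load_library()
print(hex(observed), hex(next_run))
```

Without randomization, `load_library` would return the same constant every run, which is exactly the foothold Miller describes below: one known address is enough to anchor an exploit.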

Go big or go home

Android 4.0 Ice Cream Sandwich implemented ASLR only partially, which left the protection largely ineffective.

"As long as there's anything that's not randomized, then it doesn't work, because as long as the attacker knows something is in the same spot, they can use that to break out of everything else," Apple hacker Charlie Miller, who's worked with Oberheide to spot flaws in Google's Play app store, told Ars Technica.

Apple faced the same problem with Mac OS X 10.6 Snow Leopard, which also didn't fully implement ASLR. Mac OS X 10.7 Lion did, and so did Apple's iOS mobile platform beginning with version 4.3.

"The deficiencies in ICS [Ice Cream Sandwich] pointed out in our previous blog post have all been addressed in Jelly Bean, giving it full stack, heap/brk, lib/mmap, linker and executable ASLR," wrote Oberheide.

That doesn't mean Android is now impregnable. Oberheide pointed out some further Linux-based security measures that Android, itself a version of Linux, has not yet added.


Ignoring the elephant in the room

And there's one other thing. As security guru Mikko Hypponen tweeted about Oberheide's findings, "Most Android malware doesn't use exploits to infect."

Most Android malware instead consists of malicious apps, which the security measures detailed by Oberheide don't really address.

Anyone can create an Android app, and any Android device can install it. The only thing standing between the two is the device's user, who is shown a list of permissions describing what each app can do, but is often unqualified to judge whether an app is safe.

The Apple model

Apple avoids this with two "code signing" practices. First, every iOS app is inspected and certified by Apple, and every iOS device installs only apps with Apple's certification — which usually means only apps from the iTunes App Store. (Jailbroken iOS devices can get around those requirements.)

Second, Apple "freezes" most apps after inspection. Each iOS app is run through an algorithm that generates a numerical value.

That value, or checksum, is installed on an iOS device along with the app, and the device generates its own checksum upon loading to make certain the app hasn't been tampered with.
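That load-time comparison can be sketched briefly. The digest choice and the app bytes below are assumptions for illustration (the article doesn't specify Apple's algorithm); SHA-256 simply stands in for whatever digest the platform uses.

```python
import hashlib

def checksum(data: bytes) -> str:
    # SHA-256 stands in for the platform's actual digest (assumption).
    return hashlib.sha256(data).hexdigest()

app_binary = b"\x7fELF...original app code"   # illustrative app bytes
recorded = checksum(app_binary)               # stored at install time

# At load time, the device recomputes the checksum and compares it
# with the recorded value before letting the app run.
tampered = app_binary + b"\x90\x90"           # injected bytes
print(checksum(app_binary) == recorded)       # untouched app passes
print(checksum(tampered) == recorded)         # modified app fails
```

Any change to the app's bytes after inspection, however small, produces a different checksum, which is why a "frozen" app can't quietly modify itself.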

Google's main code-signing requirement is simply that each Android developer "signs" his or her own code to verify its source.
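As a rough sketch of what "signing to verify its source" buys: Android's real scheme uses public-key certificates bundled into the APK, whereas the toy below uses a shared-key HMAC purely as a simplified stand-in, showing only that a key holder can produce a tag over the package bytes that others can check. All names and data here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical developer key; real APK signing uses a private key
# whose matching certificate ships inside the package.
DEV_KEY = b"example-developer-key"

def sign_package(apk_bytes: bytes) -> bytes:
    # Produce a tag binding the package contents to the key holder.
    return hmac.new(DEV_KEY, apk_bytes, hashlib.sha256).digest()

def verify_package(apk_bytes: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_package(apk_bytes), tag)

apk = b"classes.dex + resources"   # illustrative package bytes
tag = sign_package(apk)
print(verify_package(apk, tag))          # genuine package
print(verify_package(apk + b"!", tag))   # altered package
```

Note what this does and doesn't prove: a valid tag ties the package to whoever holds the key, but says nothing about whether the code itself is safe — which is exactly the gap the article describes.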

Google doesn't require that apps come only from the official Google Play app store. Users can restrict installation to Google Play on an Android device, but even the Google Play store, formerly the Android Market, has contained malicious apps.

Google also doesn't seem to require app checksums, which means that an Android app could modify itself after installation.

We've reached out to Google for clarification on that last detail.

© 2012 SecurityNewsDaily. All rights reserved
