A graduate seminar: current topics in computer security
 



How Stuxnet Is Rewriting the Cyberterrorism Playbook

November 27th, 2012 by Josue Salazar

In the IEEE Spectrum podcast "This Week in Technology" episode titled "How Stuxnet Is Rewriting the Cyberterrorism Playbook," Steven Cherry holds a conversation with Ralph Langner (an expert in industrial control system security and CEO of the German consulting company Langner Communications), who was the first independent expert to analyze Stuxnet's code and discover that the worm was designed to attack a specific target. Stuxnet infects Windows computers but only affects programmable logic controllers (PLCs) made by Siemens, which are used to control automated processes in industrial settings. Since it was found to affect only nuclear facilities in Iran, it has been speculated that Stuxnet was built specifically to sabotage the Iranian nuclear industry.

As Langner explains, the PLC is the interface between a program and the actual machinery, operating in real time to perform a task. Stuxnet was created to inject code into the PLC and activate only when a specific program is loaded and certain conditions are met. For that reason, even though there have been thousands of infections of industrial equipment, the only reported damage caused by Stuxnet occurred at the Iranian nuclear plant in Bushehr and the uranium enrichment facility in Natanz. Langner further comments that whoever created Stuxnet had deep insider knowledge of how the PLC interface driver works and of the memory architecture of the Siemens PLCs used in the target facilities. From his analysis of Stuxnet's code, Langner inferred that the goal was to destroy specific, very-hard-to-replace aggregates at the target facility so that the whole Iranian nuclear program would be set back by at least a year.

Since the power plants are not directly connected to the internet, the Stuxnet creators had to devise a clever way to infect the facilities. That vector was the USB thumb drives used by the engineers: the drives would pick up the infection from the engineers' own Windows-based computers (already infected with Stuxnet) and then transfer it to the machines at the facilities when programs or updates were loaded onto the PLCs.

In conclusion, Langner asserts that Stuxnet, as a taste of a real cyberweapon capable of producing physical damage, is an example of a future asset in cyberwars. Now that Stuxnet's technology is in the wild, it poses a new hazard to the public: it could be analyzed and turned into toolkits that let hackers carry out dangerous new attacks.

Cyber attacks on Critical Infrastructure

November 26th, 2012 by Josue Salazar

In the IEEE Spectrum "Techwise Conversations" podcast titled "The Critical Threat to Critical Infrastructure," Steven Cherry invited Steve Chabinsky, deputy assistant director of the FBI's Cyber Division, to talk about the cyber threats our nation is facing from attackers with different objectives. Some of these attackers want to control our machines for unlawful activities, or want to steal private information such as credit card numbers. But something we don't often think about is that critical infrastructure such as nuclear and power plants, chemical processing plants, electrical grids, and transportation systems is controlled by systems that could become targets for cyberterrorists seeking to damage the nation's infrastructure. In fact, the FBI has found these kinds of attacks being discussed by terrorist organizations with plans to damage the United States. The damage resulting from a security breach of any of these infrastructures could result in the deaths of thousands of people.

With the increasing pervasiveness of technology, we have become more vulnerable to cyber attacks. For example, our cars are filled with embedded systems that control critical tasks such as acceleration, braking, and the engine. If an attack on the manufacturer's production line planted code that triggered erratic behavior once the car reached a certain speed, the result would be many accidents, probable deaths, and possibly the company's bankruptcy.

In order to fight back against cyber attacks, efficient security models need to be put in place. Steve Chabinsky breaks the risk model into three parts: vulnerabilities, threats, and consequences. He notes that cybersecurity efforts to drive vulnerabilities to zero have been a never-ending game, since a system cannot be impenetrable unless it is completely isolated from the internet in every way, which is not what today's systems are meant to be. Therefore, more focus needs to be aimed at threat reduction and at deterring threat actors in order to diminish people's risk. More importantly, these security models need to be prioritized for critical infrastructure, since no existing architectures protect it.

A final point touched on in the podcast was that there are few options for prosecuting attackers because of insufficient evidence about the attack. Therefore, according to Chabinsky, new technology research needs to focus on two factors: assurance and attribution, that is, providing an architecture to ensure that software and data can be trusted, and providing a way to trace an attack back to the attacker in order to hold them accountable.

 

Threat in Online Voter Registration

November 24th, 2012 by Apoorv Agarwal

I was recently reading about the online voter registration process, through which a voter could easily register for the presidential election held this year. A voter can register with any photo ID card such as a driver's license, or with a copy of a current utility bill, bank statement, paycheck, or other government document showing their name and address.

Researchers from the University of Michigan, the Lawrence Livermore National Laboratory, and a former president of the Association for Computing Machinery informed government officials from Maryland and Washington, D.C. that these systems have security vulnerabilities. In Maryland's case, the driver's license number used for registration is derived from the person's full name and birth date, which are not hard to find, so an attacker could compute the number, change a voter's registration information, and potentially leave that person unable to vote.
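To see why an identifier derived from public attributes makes a weak authenticator, here is a purely hypothetical sketch in Python (this is not Maryland's actual scheme, just an illustration of how small the guessing space becomes once a name is known):

import itertools
from datetime import date, timedelta

# Purely hypothetical derivation: an "ID" built only from public attributes.
def hypothetical_id(last_name, dob):
    return last_name[0].upper() + dob.strftime("%m%d%y")   # NOT the real scheme

# An attacker who knows a voter's last name only needs to enumerate plausible
# birth dates to recover such an identifier.
def candidate_ids(last_name, birth_year):
    d = date(birth_year, 1, 1)
    while d.year == birth_year:
        yield hypothetical_id(last_name, d)
        d += timedelta(days=1)

print(len(list(candidate_ids("Smith", 1980))))   # only 366 candidates for one year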

The solution the computer scientists offered to this problem was to use non-public information, such as the last four digits of the SSN, for registration. But in my view even that information is no longer private: it is spread across many places, for example when you rent or buy property you share your SSN, and SSNs also appear widely on taxpayer forms.

A recent hack of the South Carolina Department of Revenue resulted in about 3.6 million SSNs being stolen or leaked, along with much other information about taxpayers; the breach took place between August 27 and October 10. Although officials do not yet know how that information has been used, the breach confirms that using SSNs for voter registration may not be the optimal solution. It also gives some reason to suspect that the hack may have been carried out to gain an advantage in the election.

So a workable solution for secure online voter registration remains an open question.

References:

1) http://www.washingtonpost.com/local/md-politics/marylands-online-voter-registration-vulnerable-to-attack-researchers-say/2012/10/16/acc24cf6-17c0-11e2-a55c-39408fbe6a4b_story.html

2) http://www.thestate.com/2012/10/26/2496396/south-carolina-taxpayers-privacy.html#.ULB_ZZv1T0E

3) http://www.nytimes.com/2012/10/13/us/politics/cracks-in-maryland-and-washington-voter-databases.html?_r=0

4) http://columbia.wistv.com/news/news/53925-south-carolina-launches-online-voter-registration-system-scvotesorg

Security and Usability of CAPTCHAs

November 21st, 2012 by rs35

 

In this post, I will present the different flavors of CAPTCHAs and try to analyze their security, usability, and future directions.

reCAPTCHA: Human-Based Character Recognition via Web Security Measures

Let's start by looking at text-based CAPTCHAs. reCAPTCHA is the most popular and widely used text-based CAPTCHA. Solving a CAPTCHA requires humans to perform a task that computers cannot yet do, and research focused on whether that effort could be put to some useful purpose. The result was reCAPTCHA, which helps digitize old printed material by asking users to decipher scanned words from books that computerized optical character recognition (OCR) failed to recognize.

An example of a reCAPTCHA is given below. As you can see, there are two words in the CAPTCHA: an already known "control" word and a word that still needs to be deciphered. The words are distorted to make sure that automated programs cannot recognize them. If the user correctly types the control word, the system assumes they are human and gains confidence that they also typed the other word correctly. To account for human error, each suspicious word is sent to multiple users.
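As a rough illustration of the idea (my own simplification in Python, not reCAPTCHA's actual implementation), the server-side logic might look something like this:

from collections import Counter, defaultdict

# Votes for each unknown word, keyed by the scanned-word image it came from.
votes = defaultdict(Counter)

def submit(image_id, control_answer, control_truth, unknown_answer, agreement=3):
    # If the control word is wrong, assume a bot (or a typo) and discard both answers.
    if control_answer.strip().lower() != control_truth.lower():
        return None
    # Otherwise record a vote for this transcription of the unknown word.
    votes[image_id][unknown_answer.strip().lower()] += 1
    word, count = votes[image_id].most_common(1)[0]
    # Accept a transcription only once enough independent users agree on it.
    return word if count >= agreement else None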

It is a really cool idea to harness otherwise wasted human effort to achieve something that computers cannot do. Moreover, the words are already known to defeat OCR, so any automated recognition technique would fail on them.

As OCR techniques improve, the level of distortion and noise in text-based CAPTCHAs will have to increase, which hurts usability. Future research will therefore have to focus on other forms of CAPTCHA that can provide a better user experience.

 

A reCAPTCHA

NuCaptcha

NuCaptcha is one of the most popular video-based CAPTCHAs. The company claims to provide the best security and usability, and it serves millions of CAPTCHAs every day. The following link shows an instance of NuCaptcha.

NuCaptcha Documentation Basic2 Outdoor_1

As we discussed in class, there are automated techniques to break these. Most of these techniques make one of the following assumptions:

  • The color of the codeword characters is known to the attacker
  • The codewords have a distinctly different trajectory from the non-codeword characters and other objects in the background

To make stronger video CAPTCHAs, these assumptions need to be invalidated. On the other hand, the usability of these CAPTCHAs is excellent. Research could also focus on CAPTCHAs based on emerging images.
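As an illustration of how the first assumption is exploited, a minimal Python sketch (assuming frames are available as RGB arrays and that the codeword color range is known to the attacker) might simply mask out pixels in that range before tracking them:

import numpy as np

# Hypothetical color range for the codeword characters (the "known color" assumption).
LOWER = np.array([150, 0, 0])
UPPER = np.array([255, 80, 80])

def codeword_mask(frame):
    # Boolean mask of pixels whose RGB values fall inside the assumed codeword range.
    return np.all((frame >= LOWER) & (frame <= UPPER), axis=-1)

# Applying this mask frame by frame yields candidate character blobs that can then
# be tracked across frames and passed to a recognition step (not shown).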


ESP-PIX and PICATCHA

An ESP-PIX CAPTCHA

Instead of typing letters, the user authenticates himself as a human by recognizing what object is common to a set of images. ESP-PIX was the first example of a CAPTCHA based on image recognition. PICATCHA took this one step further by using CAPTCHAs as a platform for advertising.

A PICATCHA

The usability of these CAPTCHAs is clearly much better than that of the other CAPTCHAs, but the approach requires a large database of images, and machine-learning-based techniques could break these CAPTCHAs. Image-interaction CAPTCHAs also face many potential problems that have not been fully studied.

 

Audio CAPTCHAs

Audio CAPTCHAs were introduced for visually impaired users who surf the web using screen-reading programs. A typical audio CAPTCHA consists of one or several speakers saying letters or digits at randomly spaced intervals, and a user must correctly identify the digits or characters spoken in the audio file to pass the CAPTCHA. To make this test difficult for current computer systems, specifically automatic speech recognition (ASR) programs, background noise is injected into the audio files. [8] presents attacks using machine learning techniques that break about 70% of such audio CAPTCHAs.
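The attacks in [8] roughly work by segmenting the audio into high-energy regions and classifying each segment. A minimal Python sketch of the segmentation step (my own simplification, with an arbitrary energy threshold) could look like this:

import numpy as np

def candidate_segments(samples, rate, win_ms=20, threshold=0.02):
    # Slide a fixed-size window over the signal and keep runs whose RMS energy
    # exceeds the threshold; these runs are the likely spoken digits or letters.
    win = int(rate * win_ms / 1000)
    segments, start = [], None
    for i in range(0, len(samples) - win, win):
        rms = np.sqrt(np.mean(samples[i:i + win] ** 2))
        if rms > threshold and start is None:
            start = i
        elif rms <= threshold and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(samples)))
    return segments   # each segment is then fed to a trained classifier (not shown)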

Conclusion

Text-based CAPTCHAs continue to be the most popular, and they will remain so until OCR techniques can match a human being. Most of the newer schemes are vulnerable to automated attacks and need to improve their security before they can be widely adopted.

References

  1. http://www.captcha.net/
  2. http://picatcha.com
  3. http://nucaptcha.com
  4. Luis von Ahn, Ben Maurer, Colin McMillen, David Abraham and Manuel Blum. reCAPTCHA: Human-Based Character Recognition via Web Security Measures. In Science.
  5. Y. Xu, G. Reynaga, S. Chiasson, J.-M. Frahm, F. Monrose, P. van Oorschot, Security and Usability Challenges of Moving-Object CAPTCHAs: Decoding Codewords in Motion, USENIX Security 2012 (Seattle, WA, August 2012).
  6. http://server251.theory.cs.cmu.edu/cgi-bin/esp-pix/esp-pix
  7. http://www.ischool.berkeley.edu/files/student_projects/picatcha_mims_final_report_summary_0.pdf
  8. Jennifer Tam, Jiri Simsa, Sean Hyde, and Luis von Ahn. Breaking Audio CAPTCHAs. In Advances in Neural Information Processing Systems (NIPS).

Google Sheds Light on New Android App Scanner

November 18th, 2012 by ceb5

I recently found an article on Threatpost detailing Google's new app scanner, which debuted with the latest version of Android, 4.2. With the new version of Jelly Bean, Google included an app verifier that is active by default. When users try to download an unsafe application with the verifier running, they are notified that the app is either dangerous or potentially dangerous. If the app is potentially dangerous, the user can choose to continue despite the warning. However, if the verifier finds the app actually dangerous, the installation is blocked completely.

According to a previous Threatpost article, the app scanner works on the client side by checking any non-Play-Store apps against a database of known malicious apps. While Google can test apps that exist server-side using more advanced techniques, this new scanner gives users at least a base level of security for any Android application.
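The article does not spell out the protocol, but conceptually the client-side check can be thought of as a digest lookup against Google's database. A simplified, hypothetical Python sketch:

import hashlib

# Hypothetical digests reported by the server as known-bad (placeholders only).
KNOWN_DANGEROUS = {"2f1a..."}
KNOWN_POTENTIALLY_DANGEROUS = {"9c4b..."}

def verify_apk(path):
    # Hash the sideloaded APK and look it up before allowing the installation.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in KNOWN_DANGEROUS:
        return "block"   # installation refused outright
    if digest in KNOWN_POTENTIALLY_DANGEROUS:
        return "warn"    # user may continue despite the warning
    return "allow"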

When using the scanner, users should keep in mind that they are also giving Google access to some of their phone's information: URLs related to the app and general device information (ID, current build, IP address, etc.). Though the article does not mention it, I assume Google would use this information to track down malicious app developers. By logging the URL where a download link for a dangerous app was hosted, for example, Google could investigate other potentially dangerous downloads at that URL.

For users who still wish to install third-party apps or do not want to share their information with Google, the app scanner can be turned off in the device's Settings menu. For the general user, though, Google's new app verifier is definitely a step in the right direction for Android. By introducing the scanner, Google should be able to bring down the number of malicious apps attacking users and seek out related apps in its own market.

 

ROP without returns

November 15th, 2012 by ksm2

"Return-Oriented Programming without Returns" (http://dl.acm.org/citation.cfm?id=1866370) by Checkoway et al. is a fantastic paper that argues that current strategies to thwart ROP are insufficient, and that only a comprehensive CFI solution is the savior.

How does an exploit based on ROP work? Return-oriented programming uses small sequences of instructions (2-3 instructions, called gadgets) already present in the existing code base, such as loaded libraries, to leave the system in a vulnerable state. Via a buffer-overflow attack, ROP injects the addresses of these gadgets onto the stack. By chaining the execution of these gadgets (for example, pop %eax; ret), the attacker can build (almost) any function call, such as opening a shell, which can then be used to perform other malicious activities.
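To make this concrete, here is a Python sketch of how such a payload is typically assembled. The gadget addresses are made up for illustration, and a real chain would also zero %ecx and %edx before invoking execve:

import struct

def p32(addr):
    # Pack an address as a little-endian 32-bit value, as it would sit on the stack.
    return struct.pack("<I", addr)

# Hypothetical addresses of gadgets located in loaded libraries.
POP_EAX_RET = 0x08049a1c   # pop %eax; ret
POP_EBX_RET = 0x08049b30   # pop %ebx; ret
INT80_RET   = 0x08049c44   # int $0x80; ret
BINSH       = 0x0804a050   # address of a "/bin/sh" string

payload  = b"A" * 64            # filler up to the saved return address
payload += p32(POP_EAX_RET)     # the overwritten return address "returns" into the first gadget
payload += p32(0x0b)            # %eax = 11, the execve syscall number on 32-bit Linux
payload += p32(POP_EBX_RET)     # each gadget's ret pops the next address off the stack
payload += p32(BINSH)           # %ebx = pointer to "/bin/sh"
payload += p32(INT80_RET)       # finally trigger the system call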

What could be the defenses against ROP? Address space randomization, so that the attacker cannot determine the addresses of the gadgets (though some work shows that a brute-force attack can overcome this strategy, especially in a 32-bit address space). Other current small-scale defenses against ROP include: using dynamic instrumentation to look for instruction streams with frequent returns (Davi et al.); a last-in, first-out stack invariant (a compiler-based strategy used in StackShield); and avoiding ret instructions altogether (which is part of our strategy in our project).

But this paper (ROP without returns) renders all of the above defenses useless, except for the comprehensive CFI (control-flow integrity) based solution. It does so by exploiting the fact that returns are not necessary for moving through code regions: a ret instruction can be "emulated" by a small gadget such as "pop %eax; jmp *%eax". Agreed, it is difficult to find this gadget, but only one instance of it needs to be found in a huge code region (created by pulling in different libraries). The authors explain that once one instance is found, this update-load-branch sequence functions as a trampoline; each instruction sequence used in the gadget set is made to end in an indirect jump to the trampoline, which then dispatches control to the next gadget.

Emulating the attack on a Google Android-based ARM device:

In this attack, the authors exploit the fgets function, which allows reading 0 bytes (ensuring that a 0x00 byte is present; this is used for initializing the SP), which is needed because their gadget structure always starts at 0x00. The attack they perform makes a call to the system command; the steps are as follows:

* initialize the registers r6 and sp with setjmp (these steps are stated almost verbatim from the paper)

* load r3 with the update-load-branch gadget address

* load the address of the interpreter into r0

* invoke the libc system function (the last two steps are gadgets whose instruction sequences can be found in the paper)

Conclusion:

In light of this paper, in our project we are going to follow other approaches, such as randomizing the order of optimizations, and evaluate their usefulness in thwarting ROP attacks.

Why passwords cannot protect us anymore.

November 15th, 2012 by dm14

This blog post is a short version of an article at wired.com.

The author makes a strong claim that text-based passwords are no longer a safe way to authenticate, no matter how complicated or unique the password is.

He highlights the fact that hackers are increasingly capable of harvesting password dumps and releasing lists after breaking into computer systems. The password system becomes even weaker because much of our personal data is stored in the cloud and password-recovery mechanisms are weak.

A real-life example of this is described in my previous post, "A new kind of hack." Because most password recovery mechanisms are linked to your email, email becomes a single point of failure. The author gives the example of AOL's password recovery mechanism asking for the city you were born in, which can easily be extracted from your Google profile.

Before we think of better ways to secure ourselves, let's look at the main requirements of any security system: convenience and privacy. An authentication mechanism needs to be convenient enough for day-to-day use, and privacy plays an important role too; you would not feel comfortable allowing a third party to watch your every move. Decades of research have gone into developing systems that satisfy both requirements, and that research ultimately settled on having stronger passwords.

So what are the common ways passwords fail?

1)   Guessing: Simple and common passwords are easy to guess. “password” and “123456” were listed among the top 10,000 most common passwords.

2)   Reusing passwords: A common and a real threat is due to reusing passwords among different accounts.

3)   Trickery: Phishing is a commonly employed technique in which users are tricked into entering their credentials on a fake site.

4)   Malware: Key loggers, screen shot capture and other techniques are employed by malware writers to extract confidential information.

Finally, what we are looking for are ways to identify someone. Jumping to conclusions like using biometrics could also be disastrous, since unlike passwords, fingerprints cannot be changed; the Mission: Impossible movies have shown us how easy it is to lift a fingerprint from a simple glass.

Multifactor authentication systems are going to be the only solution for better security. Google already employs a two-factor authentication mechanism, and that is just the starting point. Factors like voice, location, and perhaps even DNA could be added to the list.
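As an example of a second factor that is already widely deployed, time-based one-time passwords (the kind of codes behind Google's two-step verification) can be computed in a few lines. Here is a minimal Python sketch of the standard TOTP construction (RFC 6238):

import hmac, hashlib, struct, time

def totp(secret, interval=30, digits=6):
    # Derive the current time-step counter and MAC it with the shared secret.
    counter = int(time.time()) // interval
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes based on the last nibble of the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server and the phone share `secret`; a stolen password alone is no longer enough.
print(totp(b"shared-secret"))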

Does anyone see a possible research paper on this?

Samsung Galaxy S3 stores passwords in plain text

November 12th, 2012 by Yiting

It has been discovered that Samsung's own S-Memo app stores passwords in plain text. This is quite surprising: the Samsung Galaxy S3, the most popular Android smartphone on the market right now, doesn't use any technique to protect these passwords. We enter a lot of passwords into smartphones nowadays, for social networks, email, and even bank accounts, and we are so used to it that we tend to ignore the associated vulnerabilities. With a rooted Android phone, you can get access to every file on the device; just as in desktop Linux, superuser access means you can break whatever you want, and the OS can do very little to stop you.

A common approach to password security is to store a hashed form of the plaintext password. A cryptographic hash function is applied when a user types in a password, and the user is permitted access only if the hash value generated from the user's entry matches the one stored in the system. If the hash function is well designed, it is computationally infeasible to reverse it and recover the plaintext password.
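For reference, here is a minimal Python sketch of what an app like S-Memo could do instead, using a salted, iterated hash from the standard library (the iteration count and salt size are illustrative):

import hashlib, hmac, os

def hash_password(password):
    # A random per-password salt defeats precomputed (rainbow-table) attacks,
    # and many iterations slow down brute-force guessing.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest

def check_password(password, salt, stored_digest):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return hmac.compare_digest(digest, stored_digest)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, stored))   # True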

Fortunately, the number of people immediately affected is not large. In addition, rooting the Samsung Galaxy S3 requires some uncommon software tools, so your phone is not easily hacked as long as it has not already been rooted. The problem is not difficult to fix, and given the severity of the discovery, Samsung will hopefully respond quickly and update the S-Memo app with secure password storage.

The old problem

November 6th, 2012 by Yanxin

Here's a blog post about Google going offline on Nov. 6th for a short period. Basically, what happened was that one AS on the route to Google announced incorrect IP addresses, which caused abnormal routes; as the article says, "the route has 'leaked' past normal paths." What caused that to happen? The author explains that BGP (Border Gateway Protocol, the protocol that different ASes use to communicate with each other) is a trust-based system: one AS sent out incorrect information and the other ASes trusted it. That was the problem.

Well, how can we solve this? Obviously, different ASes should not trust each other so easily. Maybe when one AS announces to other ASes which IPs are inside its network, the other ASes could check whether those IPs are actually inside the source AS; essentially, each AS would have knowledge about its adjacent ASes' networks.
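A toy Python sketch of such an origin check (essentially a stripped-down version of RPKI-style route origin validation, with a made-up registry):

import ipaddress

# Hypothetical registry: which AS is allowed to originate which prefix.
AUTHORIZED = {
    "203.0.113.0/24": 64500,
    "198.51.100.0/24": 64501,
}

def origin_is_valid(announced_prefix, origin_as):
    # Accept an announcement only if it falls inside a registered prefix
    # whose authorized origin AS matches the announcing AS.
    net = ipaddress.ip_network(announced_prefix)
    for prefix, asn in AUTHORIZED.items():
        if net.subnet_of(ipaddress.ip_network(prefix)):
            return asn == origin_as
    return False

print(origin_is_valid("203.0.113.0/25", 64999))   # False: wrong origin for that prefix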

To me, this sounds like a very old problem. In the very beginning, people used telnet a lot, but telnet does not do any encryption because its designers originally assumed people would not do bad things; the early users of the internet trusted each other. However, it turned out that we all use ssh now. I think the same thing will happen to BGP.

SSL (Libraries) *Broken*

November 6th, 2012 by ksl3

This is an interesting paper covering the libraries that validate SSL certificates, arguably one of the most fundamental pieces of digital security today.

The paper shows that unclear APIs and insecure defaults in common SSL libraries such as OpenSSL, JSSE, and GnuTLS have led to faulty SSL verification in non-browser software. This covers everything from Amazon's EC2 library to PayPal's merchant SDK.

For example, in PHP, setting CURLOPT_SSL_VERIFYHOST to true actually defeats certificate verification. The option takes an integer as its argument: setting it to 1 merely checks that a common name exists in the SSL peer certificate, while setting it to 2 actually verifies that the name matches the requested host. The confusion comes from the fact that most CURLOPT_* options take a boolean, but in this case a boolean true is coerced to 1, which causes insecure behavior.
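The same libcurl options are exposed in other languages; for instance, a Python pycurl client has to set both options explicitly to get full verification. A small sketch, assuming pycurl is installed:

import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/")
c.setopt(pycurl.SSL_VERIFYPEER, 1)   # verify the certificate chain
c.setopt(pycurl.SSL_VERIFYHOST, 2)   # 2 = check the certificate name against the host;
                                     # 1 only checks that a common name exists
c.perform()
c.close()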

The paper makes the (justified) claim that SSL libraries expose too many interfaces and delegate too much responsibility for handling the SSL connection to the application. Programmers, even ones well versed in SSL's nuances, are likely to miss something in the configuration and thus expose customers to man-in-the-middle attacks by not truly verifying SSL.

I think this would make a good paper for next year's class, and it is a nice example of the security weaknesses that arise from overcomplicated systems and APIs.