Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the security of computing as practiced today, and as envisioned in the future.

Everyone is welcome at our research group meetings (most Fridays at 11am; check the Slack group for announcements). To get announcements, join our Slack group (anyone with a @virginia.edu email address can join directly, or email me to request an invitation).

Projects

Adversarial Machine Learning
EvadeML

Secure Multi-Party Computation
Obliv-C · MightBeEvil

Recent Posts

Brink Essay: AI Systems Are Complex and Fragile. Here Are Four Key Risks to Understand.

Brink News (a publication of The Atlantic) published my essay on the risks of deploying AI systems.

Artificial intelligence technologies have the potential to transform society in positive and powerful ways. Recent studies have shown computing systems that can outperform humans at numerous once-challenging tasks, ranging from performing medical diagnoses and reviewing legal contracts to playing Go and recognizing human emotions.

Despite these successes, AI systems are fundamentally fragile — and the ways they can fail are poorly understood. When AI systems are deployed to make important decisions that impact human safety and well-being, the potential risks of abuse and misbehavior are high and need to be carefully considered and mitigated.

What Is Deep Learning?

Over the past seven decades, automatic computing has astonishingly amplified human intelligence. It can execute any information process a human understands well enough to describe precisely at a rate that is quadrillions of times faster than what any human could do. It also enables thousands of people to work together to produce systems that no individual understands.

Artificial intelligence goes beyond this: It allows machines to solve problems in ways no human understands. Instead of being programmed like traditional computing, AI systems are trained. Human engineers set up a training environment and methods, and the machine learns how to solve problems on its own. Although AI is a broad field with many different directions, much of the current excitement is focused on a narrow branch of statistical machine learning known as “deep learning,” where a model is trained to make predictions based on statistical patterns in a training data set.

In a typical training process, training data is collected, and a model is trained to recognize patterns in this data — as well as patterns in those learned patterns — in order to make predictions about new data. The resulting model can include millions of trained parameters, while providing little insight into how it works or evidence as to which patterns it has learned. It can, however, result in remarkably accurate models when the data used for training is well-distributed and correctly labeled and the data the model needs to make predictions about in deployment is similar to that training data.
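To make the training step concrete, here is a minimal sketch of a training loop (using PyTorch on invented synthetic data; the architecture, data, and numbers are illustrative only, and real deployed systems differ mainly in scale):

```python
# A minimal, illustrative training loop (PyTorch, synthetic data).
# Real deployed models differ mainly in scale: more data, more layers,
# and millions (or billions) of parameters instead of a few hundred.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "training data": 1000 points in 2D, labeled by which side
# of a curve they fall on. In a real system this would be collected data.
X = torch.randn(1000, 2)
y = (X[:, 0] ** 2 + X[:, 1] > 0).long()

model = nn.Sequential(          # a tiny neural network; real models have millions of parameters
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):        # repeatedly adjust parameters to reduce error
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()             # compute how to nudge each parameter
    opt.step()

# The trained model makes predictions on new data, but inspecting its
# parameters tells you little about *which* patterns it has learned.
print("training accuracy:", (model(X).argmax(1) == y).float().mean().item())
```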

When those conditions do not hold, however, lots of things can go wrong.

Dogs Also Play in the Snow

Models learn patterns in the training data, but it is difficult to know if what they have learned is relevant — or just some artifact of the training data. In one famous example, a model that learned to accurately distinguish wolves and dogs had actually learned nothing about animals. Instead, what it had learned was to recognize snow, since all the training examples with snow were wolves, and the examples without snow were dogs.

In a more serious example, a PDF malware classifier, trained on a corpus of malicious and benign PDF files and accurate at distinguishing malicious PDFs from normal documents, actually learned incidental associations, such as “a PDF file with pages is probably benign.” This is a pattern in the training data, since most of the malicious PDFs do not bother to include any content pages, just the malicious payload. But it is not a useful property for distinguishing malware, since a malware author can easily add pages to a PDF file without disrupting its malicious behavior.
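As a toy illustration of how a learner latches onto an incidental feature, here is a sketch using invented synthetic features (this is not the data or classifier from the actual PDF malware study): page count happens to separate the classes in the training corpus, so the model learns that shortcut, and a malicious file with a few added pages is classified as benign.

```python
# Illustrative only: synthetic features, not the real PDF malware study.
# The point is that a learner will happily use an incidental feature
# (here, page count) if it separates the classes in the training data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500

# Feature 0: number of pages; feature 1: contains JavaScript (0/1).
# In this synthetic corpus, malicious files happen to have no pages.
pages_benign = rng.integers(1, 50, n)
pages_malicious = np.zeros(n, dtype=int)
js_benign = rng.integers(0, 2, n)          # JavaScript appears in both classes
js_malicious = rng.integers(0, 2, n)

X = np.column_stack([
    np.concatenate([pages_benign, pages_malicious]),
    np.concatenate([js_benign, js_malicious]),
])
y = np.array([0] * n + [1] * n)            # 0 = benign, 1 = malicious

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print("training accuracy:", clf.score(X, y))          # looks perfect

# A malicious file whose author simply added 10 content pages:
evasive = np.array([[10, 1]])
print("predicted label:", clf.predict(evasive)[0])    # classified as benign (0)
```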

Adversarial Examples

AI systems learn about the data they are trained on, and learning algorithms are designed to generalize from that data, but the resulting models can be fragile and unpredictable.

Organizations deploying AI systems need to carefully consider how those systems can fail and limit the trust placed in them.

Researchers have developed methods that find tiny perturbations, such as modifying just one or two pixels in an image or changing colors by an amount that is imperceptible to humans, that are enough to change the output prediction. The resulting inputs are known as adversarial examples. Some methods even enable construction of physical objects that confuse classifiers — for example, color patterns can be printed on glasses that lead face-recognition systems to misidentify people as targeted victims.
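One widely studied technique for finding such perturbations is the fast gradient sign method (FGSM). The sketch below shows the core idea; `model`, `x`, `label`, and the epsilon value are hypothetical placeholders rather than parts of any specific system described above.

```python
# Sketch of the fast gradient sign method (FGSM) for crafting an
# adversarial example. `model`, `x`, and `label` are placeholders:
# any differentiable PyTorch image classifier and input would do.
import torch
import torch.nn as nn

def fgsm(model, x, label, epsilon=0.03):
    """Return a copy of x perturbed to increase the model's loss.

    epsilon bounds how much each pixel may change, so the perturbation
    can be kept small enough to be imperceptible to humans.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()   # keep pixel values valid

# Usage (hypothetical): x_adv = fgsm(model, x, label)
# Often model(x_adv) predicts a different class than model(x),
# even though x_adv looks essentially identical to x to a human.
```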

Reflecting and Amplifying Bias

The behavior of AI systems depends on the data they are trained on, and models trained on biased data will reflect those biases. Many well-meaning efforts have sought to use algorithms running on unbiased machines to replace the inherently biased humans who make critical decisions about people, such as whether to grant a loan, whether a defendant should be released pending trial, and which job candidates to interview.

Unfortunately, there is no way to ensure the algorithms themselves are unbiased, and removing humans from these decision processes risks entrenching those biases. One company, for example, used data from its current employees to train a system to scan resumes to identify interview candidates; the system learned to be biased against women, since the resumes it was trained on were predominantly from male applicants.

Revealing Too Much

AI systems trained on private data such as health records or emails learn to make predictions based on patterns in that data. Unfortunately, they may also reveal sensitive information about that training data.

One risk is membership inference, which is an attack where an adversary with access to a model trained on private data can learn from the model’s outputs whether or not an individual’s record was part of the training data. This poses a privacy risk, especially if the model is trained on medical records for patients with a particular disease. Models can also memorize specific information in their training data. A language model trained on an email corpus might reveal social security numbers contained in those training emails.
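A simple version of membership inference studied in the research literature is a loss-threshold attack: because models tend to fit their training examples more closely than unseen data, an unusually confident prediction on a record is evidence that the record was in the training set. Here is a minimal sketch, with `model`, `record`, `label`, and `threshold` as hypothetical placeholders:

```python
# Sketch of a simple loss-threshold membership inference attack.
# Intuition: models tend to be more confident (lower loss) on examples
# they were trained on than on unseen examples from the same distribution.
# `model`, `record`, `label`, and `threshold` are hypothetical placeholders.
import torch
import torch.nn as nn

def likely_training_member(model, record, label, threshold):
    """Guess whether (record, label) was in the model's training data."""
    with torch.no_grad():
        loss = nn.functional.cross_entropy(model(record), label)
    # Unusually low loss suggests the model has seen this exact record.
    return loss.item() < threshold

# In practice the threshold is calibrated on data the attacker knows was
# (or was not) used for training; the attack needs only query access to
# the model's outputs, not its parameters.
```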

What Can We Do?

Many researchers are actively working on understanding and mitigating these problems — but although methods exist to mitigate some specific problems, we are a long way from comprehensive solutions.

Organizations deploying AI systems need to carefully consider how those systems can fail and limit the trust placed in them. It is also important to consider whether simpler and more understandable methods can provide equally good solutions before jumping into complex AI techniques like deep learning. In one high-profile example, where considering an AI solution should have raised some red flags, a model for predicting recidivism risk was suspected of racial bias in its predictions. A simple model using only three rules based on age, sex and number of prior offenses was found to make equally good predictions.

AI technologies show great promise and have demonstrated capacity to improve medical diagnosis, automate business processes and free humans from tedious and unrewarding tasks. But decisions about using AI need to also pay attention to the risks and potential pitfalls in using complex, fragile and poorly understood technologies.


Google Federated Privacy 2019: The Dragon in the Room

I’m back from a very interesting Workshop on Federated Learning and Analytics that was organized by Peter Kairouz and Brendan McMahan from Google’s federated learning team and was held at Google Seattle.

For the first part of my talk, I covered Bargav’s work on evaluating differentially private machine learning, but I reserved the last few minutes of my talk to address the cognitive dissonance I felt being at a Google meeting on privacy.

I don’t want to offend anyone, and want to preface this by saying I have lots of friends and former students who work for Google, people that I greatly admire and respect – so I want to raise the cognitive dissonance I have being at a “privacy” meeting run by Google, in the hopes that people at Google actually do think about privacy and will be able to convince me how wrong I am.

But, it is necessary to address the elephant in the room — we are at a privacy meeting organized by Google.

Or rather, in this case it’s the Dragon that Owns the Room.


It may be a cute, colorful, and even non-evil Dragon, but it has a huge appetite!


This quote is from an essay by Maciej Cegłowski (the founder of Pinboard), The New Wilderness:

Seen in this light, the giant tech companies can make a credible claim to be the defenders of privacy, just like a dragon can truthfully boast that it is good at protecting its hoard of gold. Nobody spends more money securing user data, or does it more effectively, than Facebook and Google.

The question we need to ask is not whether our data is safe, but why there is suddenly so much of it that needs protecting. The problem with the dragon, after all, is not its stockpile stewardship, but its appetite.

The next quotes, in contrast, come from Google’s CEO:

We’re also working hard to challenge the assumption that products need more data to be more helpful. Data minimization is an important privacy principle for us, and we’re encouraged by advances developed by Google A.I. researchers called “federated learning.” It allows Google’s products to work better for everyone without collecting raw data from your device. ... In the future, A.I. will provide even more ways to make products more helpful with less data.

Even as we make privacy and security advances in our own products, we know the kind of privacy we all want as individuals relies on the collaboration and support of many institutions, like legislative bodies and consumer organizations.

Maciej’s essay was partly inspired by the recent New York Times opinion piece by Google’s CEO: Google’s Sundar Pichai: Privacy Should Not Be a Luxury Good.

If you haven’t read it, you should. It is truly a masterpiece in obfuscation and misdirection.

Pichai somehow makes the argument that privacy and equity are in conflict, and that Google’s industrial-scale surveillance model is necessary to make its products accessible to poor people.

The piece also highlights the work the team here has done on federated learning — terrific visibility and recognition of the value of the research, but placed, notably, right before the discussion of government privacy regulation.

The question I want to raise for the Google researchers and engineers working on privacy is: what is the actual purpose of this work for the company?

I distinguish small "p" privacy from big "P" Privacy.

Small "p" privacy is about protecting corporate data from outsiders. This used to be called confidentiality. If you only believe in small "p" privacy, there is no difficulty in justifying working on privacy at Google.

Big "P" Privacy views privacy as an individual human right, and even more, as a societal value. Maciej calls this ambient privacy. It is hard to quantify or even understand what we lose when we give up Privacy as individuals and as a society, but the thought of living in a society where everyone is under constant surveillance strikes me as terrifying and dystopian.

So, if you believe in Privacy, and are working on privacy at Google, you should consider whether the purpose (for the company) of your work is to improve or harm Privacy.

Given the nature of Google's business, you should start from the assumption that its purpose is probably to harm Privacy, and be self-critical in your arguments to convince yourself that it is to improve Privacy.

There are many ways technically sound and successful work on improving privacy could be used to actually harm Privacy. For example,

  • Technical mechanisms for privacy can be used to justify collecting more data. Collecting more data is harmful to Privacy even if it is done in a way that protects individual privacy and ensures that sensitive data about individuals cannot be inferred. And that's the best case — it assumes everything is implemented perfectly with no technical mistakes or bugs in the code, and that parameters are set in ways that provide sufficient privacy, even when this means accepting unsatisfactory utility (see the sketch after this list).
  • Privacy work can be used by companies to delay, mislead, and confuse regulators, and to provide public relations opportunities that primarily serve to confuse and mislead the public. There can, of course, be beneficial publicity from privacy research, but it's important to realize that not all publicity is good publicity, especially when it comes to how companies use privacy research.
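To make the point about parameter settings concrete, here is a minimal sketch of the textbook Laplace mechanism from differential privacy (purely illustrative, and not a claim about any mechanism Google deploys): the privacy parameter epsilon directly trades the strength of the privacy guarantee against the accuracy of each released statistic.

```python
# A minimal sketch of the Laplace mechanism from differential privacy.
# The privacy parameter epsilon controls a direct trade-off between the
# strength of the privacy guarantee and the noise added to (i.e., the
# utility of) each released statistic. Values here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace noise calibrated to epsilon."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

true_count = 1000
for epsilon in [0.01, 0.1, 1.0, 10.0]:
    # Smaller epsilon = stronger privacy guarantee = noisier answer.
    print(epsilon, round(private_count(true_count, epsilon), 1))
```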

Maciej's essay draws an analogy between Google's interest in privacy and the energy industry's interest in pollution. I'll make a slightly different analogy here, focusing on the role of scientists and engineers at these companies.

Of course, comparing Google to poison pushers and destroyers of the planet is grossly unfair.


Tobacco Executives testifying to House Energy and Commerce Subcommittee on Health and the Environment that Cigarettes are not Addictive, April 1994

Twitter CEO Jack Dorsey, Facebook COO Sheryl Sandberg, and empty chair for Google testifying to Senate Intelligence Committee, September 2018

For one thing, when Congress called the tobacco executives to account to the public for their behavior, they actually showed up.

I'm certainly not here to defend tobacco company executives, though. The more relevant comparison is to the scientists who worked at these companies.

The tobacco and fossil fuel companies had good scientists, who did work to understand the impact of their industry. Some of those scientists reached conclusions that were problematic for their companies. Their companies suppressed or distorted those results, and emphasized their investments in science in glossy brochures to influence public policy and opinion.

So, my second challenge to engineers and researchers at Google who value Privacy is to be doing work that could potentially lead to results the company would want to suppress.

This doesn’t mean doing work that is hostile to Google (recall that Wigand’s project at Brown & Williamson Tobacco was to develop a safer cigarette). But it does mean doing research to understand the scale and scope of privacy loss resulting from Google’s products, and to measure its impact on individual behavior and society.

Google’s researchers are uniquely well positioned to do this type of research — they have the technical expertise and talent, access to data and resources, and opportunity to do large scale experiments.

Reactions

I was a bit worried about giving this talk to an audience at Google (about 40 Googlers and 40 academic researchers in the audience, as well as a live stream that I know some people elsewhere at Google were watching), especially with a cruise on Lake Washington later in the day. But, all the reactions I got were very encouraging and positive, with great willingness from the Googlers to consider how people outside might perceive their company and interest in thinking about ways they can do better.

My impression is that the engineers and researchers at Google do care about Privacy, and have some opportunities to influence corporate decisions, but it's a large and complex company. From the way academics (especially cryptographers) reason about systems, once you trust Google to provide your hardware or operating system, they are a trusted party and can easily access and control everything. From a complex corporate perspective, there are big differences between data on your physical device (even if it was built by Google), data in a database at Google, and data stored in an encrypted form with privacy noise, even if all the code doing this is written and controlled by the same organization that has full access to the data. Much of the privacy work at Google is motivated by reducing internal attack surfaces, so that sensitive data is exposed to less code and fewer people within the organization. This makes sense, at least for small "p" privacy.

There is a privacy review board at Google (mandated by an FTC consent agreement) that conducts a privacy review of all products and can go back to engineering teams with requests for changes (and possibly even prevent a product from being launched, although Googlers were murky on how much power they would have when it comes down to it). On the other hand, the privacy review is done by Google employees, who, however well-meaning and ethical they are, are still beholden to their employer. This strikes me as a positive, but more like the way team-employed doctors administer the concussion protocol during football games. (Unfortunately, Google's efforts to set up an external ethics board did not go well.)

On the whole, though, I am encouraged by the discussions with the Google researchers: there is some awareness of the complexities of working on privacy at Google, and the scientists and engineers there can provide some counterbalance to the dragon's appetite.


Graduation 2019


How AI could save lives without spilling medical secrets

I’m quoted in this article by Will Knight focused on the work Oasis Labs (Dawn Song’s company) is doing on privacy-preserving medical data analysis: How AI could save lives without spilling medical secrets, MIT Technology Review, 14 May 2019.

“The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. When applied across hospitals and patient populations, for instance, machine learning might unlock completely new ways of tying disease to genomics, test results, and other patient information.

“You would love it if a medical researcher could learn on everyone’s medical records,” Evans says. “You could do an analysis and tell if a drug is working or not. But you can’t do that today.”

Despite the potential Oasis represents, Evans is cautious. Storing data in secure hardware creates a potential point of failure, he notes. If the company that makes the hardware is compromised, then all the data handled this way will also be vulnerable. Blockchains are relatively unproven, he adds.

“There’s a lot of different tech coming together,” he says of Oasis’s approach. “Some is mature, and some is cutting-edge and has challenges.”

(I’m pretty sure I didn’t actually say “tech” in my call with Will Knight since I wouldn’t use that wording, but would say “technologies”.)


Cost-Sensitive Adversarial Robustness at ICLR 2019

Xiao Zhang will present Cost-Sensitive Robustness against Adversarial Examples on May 7 (4:30-6:30pm) at ICLR 2019 in New Orleans.

Paper: [PDF] [OpenReview](https://openreview.net/forum?id=BygANhA9tQ&noteId=BJe7cKRWeN) [ArXiv]