Our research seeks to empower individuals and organizations to control how their data is used. We use techniques from cryptography, programming languages, machine learning, operating systems, and other areas to both understand and improve the security of computing as practiced today, and as envisioned in the future.

Everyone is welcome at our research group meetings (most Fridays at 11am). For announcements, join our Slack group: any @virginia.edu email address can join directly, or email me to request an invitation.

Projects

Adversarial Machine Learning
EvadeML

Secure Multi-Party Computation
Obliv-C · MightBeEvil

Recent Posts

How AI could save lives without spilling medical secrets

I’m quoted in this article by Will Knight about the work Oasis Labs (Dawn Song’s company) is doing on privacy-preserving medical data analysis: How AI could save lives without spilling medical secrets, MIT Technology Review, 14 May 2019.

“The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. When applied across hospitals and patient populations, for instance, machine learning might unlock completely new ways of tying disease to genomics, test results, and other patient information.

“You would love it if a medical researcher could learn on everyone’s medical records,” Evans says. “You could do an analysis and tell if a drug is working or not. But you can’t do that today.”

Despite the potential Oasis represents, Evans is cautious. Storing data in secure hardware creates a potential point of failure, he notes. If the company that makes the hardware is compromised, then all the data handled this way will also be vulnerable. Blockchains are relatively unproven, he adds.

“There’s a lot of different tech coming together,” he says of Oasis’s approach. “Some is mature, and some is cutting-edge and has challenges.”

(I’m pretty sure I didn’t actually say “tech” in my call with Will Knight since I wouldn’t use that wording, but would say “technologies”.)
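
The idea Evans describes, computing on data that no single party can see, can be illustrated with additive secret sharing, one of the basic building blocks of secure multi-party computation. Below is a minimal Python sketch of that idea (a toy illustration only, not the Oasis Labs design; the hospital counts and three-party setup are hypothetical):

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done in a field mod this prime

def share(secret, n=3):
    """Split a secret into n random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares; any subset of fewer than n shares reveals nothing."""
    return sum(shares) % PRIME

# Hypothetical per-hospital patient counts, never revealed individually:
records = [120, 340, 275]
all_shares = [share(r) for r in records]

# Party i receives the i-th share from every hospital and sums locally;
# only the combined total is ever reconstructed.
party_sums = [sum(col) % PRIME for col in zip(*all_shares)]
print(reconstruct(party_sums))  # 735
```

Real protocols (including the garbled-circuit ones our Obliv-C project supports) extend this idea to arbitrary computations and stronger threat models; the sketch above only computes a sum against honest-but-curious parties.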


Cost-Sensitive Adversarial Robustness at ICLR 2019

Xiao Zhang will present Cost-Sensitive Robustness against Adversarial Examples on May 7 (4:30-6:30pm) at ICLR 2019 in New Orleans.

Paper: [PDF] [[OpenReview](https://openreview.net/forum?id=BygANhA9tQ&noteId=BJe7cKRWeN)] [ArXiv]


Empirically Measuring Concentration

Xiao Zhang and Saeed Mahloujifar will present our work on Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness at two workshops May 6 at ICLR 2019 in New Orleans: Debugging Machine Learning Models and Safe Machine Learning: Specification, Robustness and Assurance.

Paper: [PDF]


SRG Lunch

Some photos from our lunch to celebrate the end of the semester and beginning of summer, and to congratulate Weilin Xu on his PhD:


Left to right: Jonah Weissman, Yonghwi Kwon, Bargav Jayaraman, Aihua Chen, Hannah Chen, Weilin Xu, Riley Spahn, David Evans, Fnu Suya, Yuan Tian, Mainuddin Jonas, Tu Le, Faysal Hossain, Xiao Zhang, Jack Verrier


JASON Spring Meeting: Adversarial Machine Learning

I had the privilege of speaking at the JASON Spring Meeting, undoubtedly one of the most diverse meetings I’ve been part of, with talks on hypersonic signatures (from my DSSG 2008-2009 colleague, Ian Boyd), FBI DNA, nuclear proliferation in Iran, engineering biological materials, and the 2020 census (including a very interesting presentation from John Abowd on the differential privacy mechanisms they have developed and evaluated). (Unfortunately, my lack of security clearance kept me out of the SCIF used for the talks on quantum computing and more sensitive topics.)

Slides for my talk: [PDF]
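
For readers unfamiliar with differential privacy, the core idea behind mechanisms like the ones Abowd described is adding carefully calibrated noise before releasing statistics. Here is a textbook Laplace-mechanism sketch in Python (a toy illustration, not the Census Bureau’s actual mechanism; the count and epsilon values are made up):

```python
import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0):
    # A counting query changes by at most 1 when one person's record
    # changes (sensitivity = 1), so Laplace noise with scale
    # sensitivity/epsilon satisfies epsilon-differential privacy.
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# A hypothetical block-level population count released at epsilon = 0.5:
print(dp_count(8215, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; much of the hard work in the census setting is deciding how to allocate a limited privacy budget across millions of interrelated counts.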