Research Symposium Posters

Five students from our group presented posters at the department’s Fall Research Symposium:


Anshuman Suri’s Overview Talk


Bargav Jayaraman, Evaluating Differentially Private Machine Learning in Practice [Poster] [Paper (USENIX Security 2019)]




Hannah Chen [Poster]




Xiao Zhang [Poster] [Paper (NeurIPS 2019)]




Mainuddin Jonas [Poster]




Fnu Suya [Poster] [Paper (USENIX Security 2020)]

FOSAD Trustworthy Machine Learning Mini-Course

I taught a mini-course on Trustworthy Machine Learning at the 19th International School on Foundations of Security Analysis and Design in Bertinoro, Italy. Slides from my three two-hour lectures are posted below, along with links to relevant papers and resources. Class 1: Introduction/Attacks. The PDF malware evasion attack is described in this paper: Weilin Xu, Yanjun Qi, and David Evans, Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers.

Read More…

USENIX Security Symposium 2019

Bargav Jayaraman presented our paper, Evaluating Differentially Private Machine Learning in Practice, at the 28th USENIX Security Symposium in Santa Clara, California. Summary by Lea Kissner (@LeaKissner, August 17, 2019): "Hey it's the results!" It was also great to see several UVA folks at the conference, including Sam Havron (BSCS 2017, now a PhD student at Cornell), who presented a paper on the work he and his colleagues have done on computer security for victims of intimate partner violence.

Read More…

Brink Essay: AI Systems Are Complex and Fragile. Here Are Four Key Risks to Understand.

Brink News (a publication of The Atlantic) published my essay on the risks of deploying AI systems. Artificial intelligence technologies have the potential to transform society in positive and powerful ways. Recent studies have shown computing systems that can outperform humans at numerous once-challenging tasks, ranging from performing medical diagnoses and reviewing legal contracts to playing Go and recognizing human emotions. Despite these successes, AI systems are fundamentally fragile — and the ways they can fail are poorly understood.

Read More…

Google Federated Privacy 2019: The Dragon in the Room

I’m back from a very interesting Workshop on Federated Learning and Analytics that was organized by Peter Kairouz and Brendan McMahan from Google’s federated learning team and was held at Google Seattle. For the first part of my talk, I covered Bargav’s work on evaluating differentially private machine learning, but I reserved the last few minutes of my talk to address the cognitive dissonance I felt being at a Google meeting on privacy.

Read More…

Violations of Children’s Privacy Laws

The New York Times has an article, How Game Apps That Captivate Kids Have Been Collecting Their Data, about a lawsuit the state of New Mexico is bringing against app markets (including Google) that allow apps presented as being for children in the Play store to violate COPPA rules and mislead users into tracking children. The lawsuit stems from a study led by Serge Egelman's group at UC Berkeley that analyzed COPPA violations in children's apps.

Read More…

USENIX Security 2018

Three SRG posters were presented at USENIX Security Symposium 2018 in Baltimore, Maryland: Nathaniel Grevatt (GDPR-Compliant Data Processing: Improving Pseudonymization with Multi-Party Computation) Matthew Wallace and Parvesh Samayamanthula (Deceiving Privacy Policy Classifiers with Adversarial Examples) Guy Verrier (How is GDPR Affecting Privacy Policies?, joint with Haonan Chen and Yuan Tian) There were also a surprising number of appearances by an unidentified unicorn:

Read More…
