NeurIPS 2019

Here's a video of Xiao Zhang's presentation at NeurIPS 2019:
https://slideslive.com/38921718/track-2-session-1 (starting at 26:50)

See this post for info on the paper.

Here are a few pictures from NeurIPS 2019 (by Sicheng Zhu and Mohammad Mahmoody):






USENIX Security 2020: Hybrid Batch Attacks

Finding Black-box Adversarial Examples with Limited Queries

Black-box attacks generate adversarial examples (AEs) against deep neural networks with only API access to the victim model. Existing black-box attacks can be grouped into two main categories: Transfer Attacks use white-box attacks on local models to find candidate adversarial examples that transfer to the target model; Optimization Attacks use queries to the target model and apply optimization techniques to search for adversarial examples.
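
The hybrid attack combines the two categories: candidates from white-box attacks on local models become warm starts for a query-based optimization attack on the target. Here is a minimal sketch of that idea (the function names, stub implementations, and scalar "inputs" are illustrative placeholders, not the paper's code):

import random

def local_whitebox_attack(local_model, x):
    # Stand-in for a white-box attack (e.g., PGD) on the local model:
    # step in a direction computed from the local model.
    return x + 0.01 * local_model(x)

def optimization_attack(is_adversarial, x_start, budget):
    # Stand-in for a query-based search (e.g., gradient estimation by
    # finite differences), spending at most `budget` target-model queries.
    x = x_start
    for _ in range(budget):
        if is_adversarial(x):          # one query to the target model
            return x, True
        x += random.uniform(-0.01, 0.01)
    return x, False

def hybrid_attack(seeds, local_model, is_adversarial, budget_per_seed=1000):
    # Transfer step first; failed candidates warm-start the optimization.
    successes = []
    for x in seeds:
        candidate = local_whitebox_attack(local_model, x)
        if is_adversarial(candidate):  # direct transfer: one query
            successes.append(candidate)
            continue
        x_adv, ok = optimization_attack(is_adversarial, candidate,
                                        budget_per_seed - 1)
        if ok:
            successes.append(x_adv)
    return successes

# Toy usage: scalar "inputs", adversarial once the value crosses 1.0.
found = hybrid_attack(seeds=[0.95, 0.5],
                      local_model=lambda x: 1.0,
                      is_adversarial=lambda x: x > 1.0)

As I understand it, the "batch" part of the paper concerns prioritizing which seeds to attack first when attacking many inputs under one query budget, which this sketch omits.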

Read More…

NeurIPS 2019: Empirically Measuring Concentration

Xiao Zhang will present our work (with Saeed Mahloujifar and Mohammad Mahmoody) as a spotlight at NeurIPS 2019, Vancouver, 10 December 2019. Recent theoretical results, starting with Gilmer et al.’s Adversarial Spheres (2018), show that if inputs are drawn from a concentrated metric probability space, then adversarial examples with small perturbation are inevitable. The key insight from this line of research is that concentration of measure gives a lower bound on adversarial risk for a large collection of classifiers.
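
In rough form (my notation, following this line of work rather than the paper's exact statement): if $E$ is the error region of a classifier $f$ on a metric probability space $(\mathcal{X}, d, \mu)$, the adversarial risk under perturbation budget $\epsilon$ is the measure of the $\epsilon$-expansion of $E$,

\[
\mathrm{AdvRisk}_\epsilon(f) \;=\; \mu(E_\epsilon),
\qquad
E_\epsilon \;=\; \{\, x \in \mathcal{X} : \exists\, x' \in E,\ d(x, x') \le \epsilon \,\},
\]

and concentration of measure guarantees that every set of measure at least $\alpha$ has a large $\epsilon$-expansion. So $\mathrm{AdvRisk}_\epsilon(f) \ge h(\alpha, \epsilon)$ for every classifier with standard risk $\mu(E) \ge \alpha$, where $h$ is the concentration function of the space; the work presented here measures that concentration empirically from data.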

Read More…

Jobs for Humans, 2029-2059

I was honored to participate in a panel at an event on Adult Education in the Age of Artificial Intelligence, run by The Great Courses as a fundraiser for the Academy of Hope, an adult public charter school in Washington, D.C. I spoke first, following a few introductory talks, and was followed by Nicole Smith and Ellen Scully-Russ, and a keynote from Dexter Manley, Super Bowl winner with the Washington Redskins.

Read More…

Research Symposium Posters

Five students from our group presented posters at the department’s Fall Research Symposium:


Anshuman Suri’s Overview Talk


Bargav Jayaraman, Evaluating Differentially Private Machine Learning In Practice [Poster]
[Paper (USENIX Security 2019)]




Hannah Chen [Poster]




Xiao Zhang [Poster]
[Paper (NeurIPS 2019)]




Mainuddin Jonas [Poster]




Fnu Suya [Poster]
[Paper (USENIX Security 2020)]

Cantor's (No Longer) Lost Proof

In preparing to cover Cantor’s proof of different infinite set cardinalities (one of my all-time favorite topics!) in our theory of computation course, I found various conflicting accounts of what Cantor originally proved. So, I figured it would be easy to search the web to find the original proof. Shockingly, at least as far as I could find, it didn’t exist on the web! The closest I could find was the 1892 volume of the Jahresbericht der Deutschen Mathematiker-Vereinigung in Google Books (which many of the references pointed to), but that is in fact not the first volume of the journal, which contains the actual proof.
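
For reference, a sketch of the diagonal argument from Cantor's 1891 paper (my notation, not a rendering of the original): given any list $f(1), f(2), \ldots$ of infinite binary sequences, define

\[
g(n) \;=\; 1 - \big(f(n)\big)_n ,
\]

so $g$ differs from $f(k)$ at position $k$ for every $k$, and no list enumerates all of $\{0,1\}^{\mathbb{N}}$. The post, of course, is about where Cantor originally published this, which the sketch does not settle.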

Read More…

FOSAD Trustworthy Machine Learning Mini-Course

I taught a mini-course on Trustworthy Machine Learning at the 19th International School on Foundations of Security Analysis and Design in Bertinoro, Italy. Slides from my three (two-hour) lectures are posted below, along with some links to relevant papers and resources.

Class 1: Introduction/Attacks

The PDF malware evasion attack is described in this paper: Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers.

Read More…

USENIX Security Symposium 2019

Bargav Jayaraman presented our paper on Evaluating Differentially Private Machine Learning in Practice at the 28th USENIX Security Symposium in Santa Clara, California.

Summary by Lea Kissner: “Hey it’s the results!” pic.twitter.com/ru1FbkESho — Lea Kissner (@LeaKissner) August 17, 2019

Also, it was great to see several UVA folks at the conference, including Sam Havron (BSCS 2017, now a PhD student at Cornell), who presented a paper on the work he and his colleagues have done on computer security for victims of intimate partner violence.

Read More…

Google Security and Privacy Workshop

I presented a short talk at a workshop at Google on Adversarial ML: Closing Gaps between Theory and Practice (mostly fun for the movie of me trying to solve Google’s CAPTCHA on the last slide). Getting the actual screencast to fit into the limited time for this talk challenged the limits of my video editing skills. I can say with some confidence that Google does donuts much better than they do cookies!

Read More…

Brink Essay: AI Systems Are Complex and Fragile. Here Are Four Key Risks to Understand.

Brink News (a publication of The Atlantic) published my essay on the risks of deploying AI systems. Artificial intelligence technologies have the potential to transform society in positive and powerful ways. Recent studies have shown computing systems that can outperform humans at numerous once-challenging tasks, ranging from performing medical diagnoses and reviewing legal contracts to playing Go and recognizing human emotions. Despite these successes, AI systems are fundamentally fragile, and the ways they can fail are poorly understood.

Read More…

All Posts by Category or Tags.