Here's a video of Xiao Zhang's presentation at NeurIPS 2019:
https://slideslive.com/38921718/track-2-session-1 (starting at 26:50)
See this post for info on the paper.
Here are a few pictures from NeurIPS 2019 (by Sicheng Zhu and Mohammad Mahmoody):
Finding Black-box Adversarial Examples with Limited Queries

Black-box attacks generate adversarial examples (AEs) against deep neural networks with only API access to the victim model.
Existing black-box attacks can be grouped into two main categories:
Transfer Attacks use white-box attacks on local models to find candidate adversarial examples that transfer to the target model.
Optimization Attacks use queries to the target model and apply optimization techniques to search for adversarial examples (a minimal sketch of this style of attack appears below).
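To make the optimization-attack category concrete, here is a minimal sketch of a query-based attack using zeroth-order (NES-style) gradient estimation. This is an illustration under stated assumptions, not the method from our paper: `query_model` is a hypothetical stand-in for the victim model's API, assumed to return the probability the model assigns to the true label.

```python
# Minimal sketch of a query-based (optimization) black-box attack using
# NES-style zeroth-order gradient estimation. `query_model` is a hypothetical
# stand-in for the victim model's API (not a real library call): it maps an
# input array to the probability assigned to the true label.
import numpy as np

def estimate_gradient(query_model, x, sigma=0.001, n_samples=50):
    """Estimate the gradient of the model's loss at x from queries alone."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)  # random search direction
        # Antithetic sampling: query the model in both directions along u.
        loss_plus = -np.log(query_model(x + sigma * u) + 1e-12)
        loss_minus = -np.log(query_model(x - sigma * u) + 1e-12)
        grad += (loss_plus - loss_minus) * u
    return grad / (2 * sigma * n_samples)

def black_box_attack(query_model, x, epsilon=0.05, alpha=0.01, steps=100):
    """Search for an adversarial example in the L-infinity ball of radius epsilon."""
    x_adv = x.copy()
    for _ in range(steps):
        grad = estimate_gradient(query_model, x_adv)
        x_adv = x_adv + alpha * np.sign(grad)             # ascend the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # stay inside the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                  # stay a valid input
    return x_adv
```

Each gradient estimate costs 2 × n_samples model queries, which is why limiting the number of queries is the central challenge for this category of attack.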
Xiao Zhang will present our work (with Saeed Mahloujifar and Mohammad Mahmoody) as a spotlight at NeurIPS 2019, Vancouver, 10 December 2019.
Recent theoretical results, starting with Gilmer et al.’s Adversarial Spheres (2018), show that if inputs are drawn from a concentrated metric probability space, then adversarial examples with small perturbation are inevitable. The key insight from this line of research is that concentration of measure gives a lower bound on adversarial risk for a large collection of classifiers.
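As a rough sketch of how the bound works (in notation assumed here, not necessarily the paper’s): write $\mu$ for the input distribution on a metric space $(X, d)$, and let $E_\epsilon$ denote the $\epsilon$-expansion of a set $E$:

\[
E_\epsilon = \{\, x \in X \;:\; \exists\, x' \in E,\ d(x, x') \le \epsilon \,\}.
\]

If a classifier $f$ errs on a region $E$ with $\mu(E) \ge \alpha$, then every point of $E_\epsilon$ admits an $\epsilon$-perturbation landing in $E$, so

\[
\mathrm{AdvRisk}_\epsilon(f) \;\ge\; \mu(E_\epsilon).
\]

Concentration of measure guarantees that on concentrated spaces $\mu(E_\epsilon)$ is close to 1 even for small $\epsilon$, and the bound holds simultaneously for every classifier with initial risk at least $\alpha$.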
Five students from our group presented posters at the department’s
Anshuman Suri’s Overview Talk
I taught a mini-course on Trustworthy Machine Learning at the 19th International School on Foundations of Security Analysis and Design in Bertinoro, Italy.
Slides from my three (two-hour) lectures are posted below, along with some links to relevant papers and resources.
Class 1: Introduction/Attacks
The PDF malware evasion attack is described in this paper: Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers.
I presented a short talk at a workshop at Google on Adversarial ML: Closing Gaps between Theory and Practice (mostly fun for the movie of me trying to solve Google’s CAPTCHA on the last slide):
Getting the actual screencast to fit into the limited time for this talk challenged the limits of my video editing skills.
I can say with some confidence, Google does donuts much better than they do cookies!
Brink News (a publication of The Atlantic) published my essay on the risks of deploying AI systems.
Artificial intelligence technologies have the potential to transform society in positive and powerful ways. Recent studies have shown computing systems that can outperform humans at numerous once-challenging tasks, ranging from performing medical diagnoses and reviewing legal contracts to playing Go and recognizing human emotions.
Despite these successes, AI systems are fundamentally fragile — and the ways they can fail are poorly understood.
Xiao Zhang will present Cost-Sensitive Robustness against Adversarial Examples on May 7 (4:30-6:30pm) at ICLR 2019 in New Orleans.
Paper: [PDF] [OpenReview] [ArXiv]
Xiao Zhang and Saeed Mahloujifar will present our work on Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness at two workshops on May 6 at ICLR 2019 in New Orleans: Debugging Machine Learning Models and Safe Machine Learning: Specification, Robustness and Assurance.
I had the privilege of speaking at the JASON Spring Meeting, undoubtedly one of the most diverse meetings I’ve been part of, with talks on hypersonic signatures (from my DSSG 2008-2009 colleague, Ian Boyd), FBI DNA, nuclear proliferation in Iran, engineering biological materials, and the 2020 census (including a very interesting presentation from John Abowd on the differential privacy mechanisms they have developed and evaluated). (Unfortunately, my lack of security clearance kept me out of the SCIF used for the talks on quantum computing and more sensitive topics.)