FOSAD Trustworthy Machine Learning Mini-Course

I taught a mini-course on Trustworthy Machine Learning at the 19th International School on Foundations of Security Analysis and Design in Bertinoro, Italy.

Slides from my three (two-hour) lectures are posted below, along with some links to relevant papers and resources.

Class 1: Introduction/Attacks

The PDF malware evasion attack is described in this paper:

Weilin Xu, Yanjun Qi, and David Evans. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers. Network and Distributed System Security Symposium (NDSS). San Diego, CA. 21-24 February 2016. [PDF]
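The attack in the paper uses an evolutionary search: it repeatedly mutates variants of a malicious seed PDF and keeps the variants the target classifier scores as most benign, until one evades detection. A toy sketch of that search loop (the `mutate` and `fitness` functions here are hypothetical numeric stand-ins, not the paper's PDF mutation operators):

```python
import random

def evolve_evasion(seed, mutate, fitness, generations=50, pop_size=20, keep=5):
    """Toy evolutionary search in the spirit of the paper's attack:
    mutate variants of a seed sample, keep those the target classifier
    scores as most benign (lowest fitness), stop when one evades."""
    population = [seed]
    for _ in range(generations):
        # Produce a new population by mutating randomly chosen survivors.
        population = [mutate(random.choice(population)) for _ in range(pop_size)]
        population.sort(key=fitness)
        population = population[:keep]
        if fitness(population[0]) <= 0:  # scored benign: evasion found
            return population[0]
    return None

# Hypothetical stand-ins: a "sample" is a number, the classifier flags
# anything above 0 as malicious, and a mutation nudges the sample.
random.seed(1)
evader = evolve_evasion(seed=5.0,
                        mutate=lambda x: x + random.uniform(-1, 0.5),
                        fitness=lambda x: x)
```

In the real attack, mutations insert, delete, or replace objects in the PDF tree, and an oracle checks that each variant still exhibits the malicious behavior.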

Class 2: Defenses

This paper describes the feature squeezing framework:

Weilin Xu, David Evans, and Yanjun Qi. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In 2018 Network and Distributed System Security Symposium. 18-21 February, San Diego, California. [PDF] [Project]
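The core idea of feature squeezing is to compare the model's prediction on an input with its prediction on a "squeezed" (e.g. bit-depth-reduced) copy, and flag the input as adversarial when the two disagree too much. A minimal sketch with a single bit-depth squeezer (the `predict` function, input shapes, and the threshold value are placeholders):

```python
import numpy as np

def reduce_bit_depth(x, bits=4):
    """Squeeze values in [0, 1] down to 2**bits discrete levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def detect_adversarial(predict, x, threshold=1.0, bits=4):
    """Flag x as adversarial if the model's prediction vectors on the
    original and squeezed inputs differ by more than threshold (L1)."""
    diff = np.abs(predict(x) - predict(reduce_bit_depth(x, bits))).sum()
    return diff > threshold
```

The paper combines several squeezers (bit-depth reduction plus local and non-local smoothing) and compares the maximum L1 score across them against a threshold chosen on validation data.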

This paper introduces cost-sensitive robustness:

Xiao Zhang and David Evans. Cost-Sensitive Robustness against Adversarial Examples. In Seventh International Conference on Learning Representations (ICLR). New Orleans. May 2019. [arXiv] [OpenReview] [PDF]
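Cost-sensitive robustness starts from the observation that not all adversarial misclassifications are equally harmful, so the training objective weights each kind of error by a task-specific cost matrix rather than treating them uniformly. A minimal sketch of such a weighted loss (the three-class cost matrix and probabilities below are hypothetical, not from the paper):

```python
import numpy as np

def cost_sensitive_loss(probs, label, cost_matrix):
    """Expected misclassification cost: weight the probability assigned
    to each wrong class by how costly that particular error is."""
    costs = cost_matrix[label]  # cost of predicting each class when truth is `label`
    return float((probs * costs).sum())

# Hypothetical 3-class task: confusing class 0 with class 2 is ten times
# worse than confusing it with class 1.
C = np.array([[0.0, 1.0, 10.0],
              [1.0, 0.0,  1.0],
              [1.0, 1.0,  0.0]])
probs = np.array([0.7, 0.2, 0.1])  # model's predicted distribution, true label 0
loss = cost_sensitive_loss(probs, 0, C)  # 0.2*1 + 0.1*10
```

The paper applies this kind of weighting inside a certified robust training objective, so the certificate bounds the cost-weighted adversarial error rather than the plain robust error.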

Class 3: Privacy

This (free) book provides an introduction to secure multi-party computation:

David Evans, Vladimir Kolesnikov and Mike Rosulek. A Pragmatic Introduction to Secure Multi-Party Computation. NOW Publishers, December 2018. [PDF]

Obliv-C is an open-source tool for building secure multi-party computations from high-level (extended C) code.
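One of the simplest building blocks the book covers is additive secret sharing: a secret is split into random shares that individually reveal nothing, yet parties can compute on the shares and reconstruct only the result. A minimal sketch (toy ring size and three parties chosen arbitrarily; real protocols add communication and security against misbehaving parties):

```python
import random

MOD = 2 ** 32  # all arithmetic happens in a fixed ring

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it mod MOD.
    Any subset of fewer than n shares is uniformly random."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Two inputs are shared among the parties; each party locally adds the
# shares it holds, and only the combined sum is ever reconstructed.
a_shares = share(25)
b_shares = share(17)
sum_shares = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]
result = reconstruct(sum_shares)  # 42, without revealing 25 or 17
```

Addition comes for free with this scheme; multiplication requires interaction between the parties, which is where the protocols in the book come in.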

This paper describes our work on integrating differential privacy and multi-party computation:

Bargav Jayaraman, Lingxiao Wang, David Evans and Quanquan Gu. Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization. In 32nd Conference on Neural Information Processing Systems (NeurIPS). Montreal, Canada. December 2018. [PDF] [Video Summary]

This paper summarizes our work on evaluating the privacy-utility tradeoffs for machine learning:

Bargav Jayaraman and David Evans. Evaluating Differentially Private Machine Learning in Practice. In 28th USENIX Security Symposium. Santa Clara. August 2019. [PDF] [arXiv] [code]
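A building block behind differentially private machine learning is releasing a bounded-sensitivity statistic with calibrated noise. A minimal sketch using the standard Gaussian mechanism (the calibration below is the textbook bound for epsilon < 1; the data and parameter values are illustrative, not from the paper):

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng):
    """Release value with Gaussian noise calibrated to (epsilon, delta)-DP
    for a query with the given L2 sensitivity (standard bound, epsilon < 1)."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0, sigma)

rng = np.random.default_rng(0)
data = np.clip(np.array([0.2, 0.9, 0.4, 0.7]), 0, 1)  # bound each record to [0, 1]
# The mean over n records each bounded in [0, 1] has sensitivity 1/n.
private_mean = gaussian_mechanism(data.mean(), sensitivity=1 / len(data),
                                  epsilon=1.0, delta=1e-5, rng=rng)
```

The paper's point is about the other side of this tradeoff: at the epsilon values needed for acceptable model accuracy, the noise is often too small to prevent inference attacks in practice.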