Can Machine Learning Ever Be Trustworthy?

I gave the Booz Allen Hamilton Distinguished Colloquium at the University of Maryland on _Can Machine Learning Ever Be Trustworthy?_

[Video](https://vid.umd.edu/detsmediasite/Play/e8009558850944bfb2cac477f8d741711d?catalog=74740199-303c-49a2-9025-2dee0a195650) · [SpeakerDeck](https://speakerdeck.com/evansuva/can-machine-learning-ever-be-trustworthy)

Abstract
Machine learning has produced extraordinary results over the past few years, and machine learning systems are rapidly being deployed for critical tasks, even in adversarial environments. This talk will survey some of the reasons why building trustworthy machine learning systems is inherently impossible, and dive into some recent research on adversarial examples. Adversarial examples are inputs crafted deliberately to fool a machine learning system, often by making small but targeted perturbations to a natural seed example. Over the past few years, there has been an explosion of research on adversarial examples, but we are only beginning to understand their mysteries and are just taking the first steps toward principled and effective defenses. The general problem of adversarial examples, however, has been at the core of information security for thousands of years. In this talk, I'll look at some of the long-forgotten lessons from that quest, unravel the huge gulf between theory and practice in adversarial machine learning, and speculate on paths toward trustworthy machine learning systems.
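
As a rough illustration of the perturbation process the abstract describes, here is a minimal sketch of one common way to craft adversarial examples, the fast gradient sign method (FGSM) of Goodfellow et al. This is not the specific method from the talk; the tiny untrained model, the random "seed" image, and the epsilon value are all placeholder assumptions for illustration.

```python
import torch
import torch.nn as nn

# Placeholder classifier; in practice this would be a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm(model, x, label, epsilon):
    """Fast gradient sign method: perturb x by epsilon in the
    direction that increases the model's loss on the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # A small but targeted perturbation of the natural seed x.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range

# Stand-in for a "natural seed example": one random 28x28 image.
seed = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
adversarial = fgsm(model, seed, label, epsilon=0.1)
print((adversarial - seed).abs().max())  # perturbation bounded by epsilon
```

The key point the sketch captures is that each pixel moves by at most epsilon, so the adversarial input stays visually close to the seed while the loss is pushed upward.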