Computer Science Colloquia
Monday, December 14, 2015
Advisor: Yanjun (Jane) Qi
Attending Faculty: Westley Weimer
10:00 AM, Rice Hall, Rm. 504
Master's Project Presentation
Robust classifiers against adversarial attacks using model diversity
Machine learning models are widely used to detect malware in technology such as antivirus software and email attachment scanners. However, learning models usually assume a stationary data distribution, an assumption that is violated in the presence of an attacker who can manipulate test samples. To mitigate such attacks, we propose a defense strategy that randomly generates many different models as a diversity tactic. Our technique can generate models quickly, which allows it to be deployed to millions of users. We provide experimental results in two different attack scenarios and show that our technique can prevent attacks on both image classifiers and PDF malware classifiers.
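The abstract does not give implementation details, but the core idea of defending through many randomly generated models can be sketched as a randomized ensemble: each member is trained on a random feature subset, and predictions are taken by majority vote, so an evasion crafted against any single model is unlikely to sway the whole ensemble. The class name, nearest-centroid members, and all parameters below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

class RandomizedEnsemble:
    """Illustrative sketch (not the presented system): an ensemble of
    simple nearest-centroid classifiers, each fit on a random subset of
    features, combined by majority vote for robustness to evasion."""

    def __init__(self, n_models=25, subset_frac=0.5, seed=0):
        self.n_models = n_models        # how many diverse models to generate
        self.subset_frac = subset_frac  # fraction of features each model sees
        self.rng = np.random.default_rng(seed)
        self.members = []               # list of (feature_indices, centroids)

    def fit(self, X, y):
        self.classes = np.unique(y)
        k = max(1, int(self.subset_frac * X.shape[1]))
        for _ in range(self.n_models):
            # Each member gets its own random view of the feature space.
            idx = self.rng.choice(X.shape[1], size=k, replace=False)
            centroids = np.stack(
                [X[y == c][:, idx].mean(axis=0) for c in self.classes]
            )
            self.members.append((idx, centroids))
        return self

    def predict(self, X):
        votes = np.zeros((X.shape[0], len(self.classes)), dtype=int)
        for idx, centroids in self.members:
            # Distance from each sample to each class centroid -> (n, C).
            d = np.linalg.norm(X[:, None, idx] - centroids[None], axis=2)
            votes[np.arange(X.shape[0]), d.argmin(axis=1)] += 1
        # Majority vote across the randomly generated models.
        return self.classes[votes.argmax(axis=1)]
```

Because each member is cheap to train and depends only on a random seed, new model variants can be generated quickly, which matches the abstract's claim about scaling the defense to many users.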