Left to right
: Jonah Weissman, Yonghwi Kwon, Bargav Jayaraman, Aihua Chen, Hannah Chen, Weilin Xu, Riley Spahn, David Evans, Fnu Suya, Yuan Tian, Mainuddin Jonas, Tu Le, Faysal Hossain, Xiao Zhang, Jack Verrier
I had the privilege of speaking at the JASON Spring Meeting, undoubtedly one of the most diverse meetings I’ve been part of, with talks on hypersonic signatures (from my DSSG 2008-2009 colleague, Ian Boyd), FBI DNA, nuclear proliferation in Iran, engineering biological materials, and the 2020 census (including a very interesting presentation from John Abowd on the differential privacy mechanisms they have developed and evaluated). (Unfortunately, my lack of security clearance kept me out of the SCIF used for the talks on quantum computing and more sensitive topics.)
Congratulations to Weilin Xu for successfully defending his PhD Thesis!
Weilin’s Committee: Homa Alemzadeh, Yanjun Qi, Patrick McDaniel (on screen), David Evans, Vicente Ordóñez Román
Improving Robustness of Machine Learning Models using Domain Knowledge
Although machine learning techniques have achieved great success in many areas, such as computer vision, natural language processing, and computer security, recent studies have shown that they are not robust under attack.
Sam Havron (BSCS 2017) is quoted in an article in Wired on eradicating stalkerware:
The full extent of that stalkerware crackdown will only prove out with time and testing, says Sam Havron, a Cornell researcher who worked on last year’s spyware study. Much more work remains. He notes that domestic abuse victims can also be tracked with dual-use apps often overlooked by antivirus firms, like antitheft software Cerberus. Even innocent tools like Apple’s Find My Friends and Google Maps’ location-sharing features can be abused if they don’t better communicate to users that they may have been secretly configured to share their location.
Samin Yasar presented our paper on Context-aware Monitoring in Robotic Surgery at the 2019 International Symposium on Medical Robotics (ISMR) in Atlanta, Georgia.
Robotic-assisted minimally invasive surgery (MIS) has enabled procedures with increased precision and dexterity, but surgical robots are still open-loop and require surgeons to work with a tele-operation console that provides only limited visual feedback. In this setting, mechanical failures, software faults, or human errors might lead to adverse events resulting in patient complications or fatalities.
We have posted a paper by Bargav Jayaraman and me on When Relaxations Go Bad: “Differentially-Private” Machine Learning (code available at https://github.com/bargavj/EvaluatingDPML).
Differential privacy is becoming a standard notion for performing privacy-preserving machine learning over sensitive data. It provides formal guarantees, in terms of the privacy budget, ε, on how much information about individual training records is leaked by the model.
While the privacy budget is directly correlated to the privacy leakage, the calibration of the privacy budget is not well understood.
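To make the role of ε concrete, here is a minimal sketch of the standard Laplace mechanism (not the mechanisms evaluated in the paper), which achieves ε-differential privacy for a numeric query by adding noise scaled to the query’s sensitivity divided by ε; the function and variable names are illustrative:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise calibrated for epsilon-DP.

    A query with L1 sensitivity `sensitivity` satisfies epsilon-DP when
    perturbed with Laplace noise of scale sensitivity / epsilon.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# A counting query has sensitivity 1 (one record changes the count by 1).
rng = np.random.default_rng(0)
count = 100.0
weak_privacy = laplace_mechanism(count, sensitivity=1.0, epsilon=10.0, rng=rng)
strong_privacy = laplace_mechanism(count, sensitivity=1.0, epsilon=0.1, rng=rng)
```

The sketch shows the correlation mentioned above: a smaller ε (stronger guarantee) forces a larger noise scale, so the released value is less accurate; how to calibrate ε so the guarantee is meaningful in practice is exactly what is not well understood.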
New Electronics has an article that includes my Deep Learning and Security Workshop talk: Deep fools, 21 January 2019.
A better version of the image Mainuddin Jonas produced (they use a screenshot of it from the talk video) is below: