How AI could save lives without spilling medical secrets

I’m quoted in this article by Will Knight on the work Oasis Labs (Dawn Song’s company) is doing on privacy-preserving medical data analysis: How AI could save lives without spilling medical secrets, MIT Technology Review, 14 May 2019.

“The whole notion of doing computation while keeping data secret is an incredibly powerful one,” says David Evans, who specializes in machine learning and security at the University of Virginia. When applied across hospitals and patient populations, for instance, machine learning might unlock completely new ways of tying disease to genomics, test results, and other patient information.

Read More…

Cost-Sensitive Adversarial Robustness at ICLR 2019

Xiao Zhang will present Cost-Sensitive Robustness against Adversarial Examples on May 7 (4:30-6:30pm) at ICLR 2019 in New Orleans.

Paper: [PDF] [OpenReview](https://openreview.net/forum?id=BygANhA9tQ&noteId=BJe7cKRWeN) [ArXiv]

Empirically Measuring Concentration

Xiao Zhang and Saeed Mahloujifar will present our work on Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness at two workshops on May 6 at ICLR 2019 in New Orleans: Debugging Machine Learning Models and Safe Machine Learning: Specification, Robustness and Assurance.

Paper: [PDF]

SRG Lunch

Some photos from our lunch to celebrate the end of the semester and the beginning of summer, and to congratulate Weilin Xu on his PhD:


Left to right: Jonah Weissman, Yonghwi Kwon, Bargav Jayaraman, Aihua Chen, Hannah Chen, Weilin Xu, Riley Spahn, David Evans, Fnu Suya, Yuan Tian, Mainuddin Jonas, Tu Le, Faysal Hossain, Xiao Zhang, Jack Verrier

JASON Spring Meeting: Adversarial Machine Learning

I had the privilege of speaking at the JASON Spring Meeting, undoubtedly one of the most diverse meetings I’ve been part of, with talks on hypersonic signatures (from my DSSG 2008-2009 colleague, Ian Boyd), FBI DNA, nuclear proliferation in Iran, engineering biological materials, and the 2020 census (including a very interesting presentation from John Abowd on the differential privacy mechanisms they have developed and evaluated). (Unfortunately, my lack of security clearance kept me out of the SCIF used for the talks on quantum computing and more sensitive topics.)

Read More…

Congratulations Dr. Xu!

Congratulations to Weilin Xu for successfully defending his PhD thesis!

Weilin’s Committee: Homa Alemzadeh, Yanjun Qi, Patrick McDaniel (on screen), David Evans, Vicente Ordóñez Román

Improving Robustness of Machine Learning Models using Domain Knowledge

Although machine learning techniques have achieved great success in many areas, such as computer vision, natural language processing, and computer security, recent studies have shown that they are not robust under attack.

Read More…

A Plan to Eradicate Stalkerware

Sam Havron (BSCS 2017) is quoted in an article in Wired on eradicating stalkerware: The full extent of that stalkerware crackdown will only prove out with time and testing, says Sam Havron, a Cornell researcher who worked on last year’s spyware study. Much more work remains. He notes that domestic abuse victims can also be tracked with dual-use apps often overlooked by antivirus firms, like antitheft software Cerberus. Even innocent tools like Apple’s Find My Friends and Google Maps’ location-sharing features can be abused if they don’t better communicate to users that they may have been secretly configured to share their location.

Read More…

ISMR 2019: Context-aware Monitoring in Robotic Surgery

Samin Yasar presented our paper on Context-aware Monitoring in Robotic Surgery at the 2019 International Symposium on Medical Robotics (ISMR) in Atlanta, Georgia. Robotic-assisted minimally invasive surgery (MIS) has enabled procedures with increased precision and dexterity, but surgical robots are still open loop and require surgeons to work with a tele-operation console providing only limited visual feedback. In this setting, mechanical failures, software faults, or human errors might lead to adverse events resulting in patient complications or fatalities.

Read More…

When Relaxations Go Bad: "Differentially-Private" Machine Learning

We have posted a paper by Bargav Jayaraman and myself on When Relaxations Go Bad: “Differentially-Private” Machine Learning (code available at https://github.com/bargavj/EvaluatingDPML). Differential privacy is becoming a standard notion for performing privacy-preserving machine learning over sensitive data. It provides formal guarantees, in terms of the privacy budget, ε, on how much information about individual training records is leaked by the model. While the privacy budget is directly correlated to the privacy leakage, the calibration of the privacy budget is not well understood.
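For reference, the guarantee referred to here is the standard definition of ε-differential privacy: a randomized mechanism $\mathcal{M}$ is ε-differentially private if, for every pair of datasets $D$ and $D'$ differing in a single record and every set of outputs $S$,

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S].$$

Smaller values of ε correspond to stronger guarantees about what can be learned regarding any individual training record, which is why how the budget is calibrated in practice matters.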

Read More…

Deep Fools

New Electronics has an article that includes my Deep Learning and Security Workshop talk: Deep fools, 21 January 2019.

A better version of the image Mainuddin Jonas produced that they use (which they screenshotted from the talk video) is below:
