When Relaxations Go Bad: "Differentially-Private" Machine Learning

We have posted a paper by Bargav Jayaraman and me, When Relaxations Go Bad: “Differentially-Private” Machine Learning (code available at https://github.com/bargavj/EvaluatingDPML).

Differential privacy is becoming a standard notion for performing privacy-preserving machine learning over sensitive data. It provides formal guarantees, in terms of the privacy budget, ε, on how much information about individual training records is leaked by the model.
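For reference, the standard (ε, δ) formulation of differential privacy (the general definition, not anything specific to our paper) requires that for any two datasets D and D′ differing in a single record, and any set of outputs S:

```latex
% (\varepsilon, \delta)-differential privacy for a randomized mechanism M:
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta
```

Smaller ε means the output distributions with and without any one record are closer, so less is revealed about that record; the relaxations discussed below differ in how they account for ε across many training steps.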

While the privacy budget is directly correlated with the privacy leakage, how to calibrate the privacy budget is not well understood. As a result, many existing works on privacy-preserving machine learning select large values of ε to obtain acceptable model utility, with little understanding of the concrete impact of such choices on meaningful privacy. Moreover, when iterative learning procedures that require a privacy guarantee for each iteration are used, relaxed definitions of differential privacy are often adopted, which further trade off privacy for better utility.
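To make the composition issue concrete, here is a minimal sketch (not taken from the EvaluatingDPML code; the per-iteration ε, the slack δ′, and the iteration count are illustrative assumptions) comparing how naïve composition and advanced composition account for the total privacy budget over many training iterations:

```python
import math

def naive_composition(eps_step, k):
    """Basic composition: per-step epsilons simply add up over k iterations
    (the per-step deltas, omitted here for brevity, add up the same way)."""
    return eps_step * k

def advanced_composition(eps_step, k, delta_prime):
    """Advanced composition (Dwork et al.): a tighter total epsilon for
    k-fold composition of eps_step-DP steps, at the cost of an extra
    delta_prime added to the total delta."""
    return (eps_step * math.sqrt(2 * k * math.log(1 / delta_prime))
            + k * eps_step * (math.exp(eps_step) - 1))

# Illustrative numbers only (not from the paper): 1000 training iterations,
# each satisfying 0.01-DP, with composition slack delta_prime = 1e-5.
k, eps_step, delta_prime = 1000, 0.01, 1e-5
print(f"naive total epsilon:    {naive_composition(eps_step, k):.2f}")
print(f"advanced total epsilon: {advanced_composition(eps_step, k, delta_prime):.2f}")
```

For these illustrative numbers, naïve composition charges a total ε of 10 while advanced composition charges about 1.6 (plus the extra δ′), which is why relaxed accounting is so attractive when training runs for many iterations.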

We evaluated the impact of these choices on privacy in experiments with logistic regression and neural network models, quantifying privacy leakage in terms of the advantage of an adversary performing inference attacks and the number of training-set members at risk of exposure.
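As a rough sketch of the first metric (the paper's evaluation code is more involved), the membership advantage in the style of Yeom et al. is the attacker's true positive rate on training-set members minus its false positive rate on non-members:

```python
import numpy as np

def membership_advantage(attack_guesses, is_member):
    """Membership advantage: the attack's true positive rate on members
    minus its false positive rate on non-members.  Both arrays are
    boolean, one entry per record."""
    attack_guesses = np.asarray(attack_guesses, dtype=bool)
    is_member = np.asarray(is_member, dtype=bool)
    tpr = attack_guesses[is_member].mean()
    fpr = attack_guesses[~is_member].mean()
    return tpr - fpr

# Toy example (hypothetical data): an attack that flags half the members
# and a tenth of the non-members has advantage 0.5 - 0.1 = 0.4.
guesses = [True] * 50 + [False] * 50 + [True] * 10 + [False] * 90
members = [True] * 100 + [False] * 100
print(membership_advantage(guesses, members))  # 0.4
```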


[Figure: Accuracy Loss as Privacy Decreases (CIFAR-100, neural network model)]

[Figure: Privacy Leakage (Yeom et al.’s Membership Inference Attack)]

Our main finding is that current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy tradeoffs: settings that provide limited accuracy loss provide little effective privacy, and settings that provide strong privacy result in useless models.

The table below shows the number of individuals, out of 10,000 members in the training set, exposed by a membership inference attack, given a tolerance for false positives of 1% or 5% (and assuming an a priori prevalence of 50% members). The key observation is that all the relaxations provide lower utility (more accuracy loss) than naïve composition for comparable privacy leakage, as measured by the number of actual members exposed in a test dataset. Further, none of the methods provide both acceptable utility and meaningful privacy: at a high level, either nothing is learned from the training data, or some sensitive data is exposed. (See the paper for more details and results.)

| ε    | NC Loss | NC 1% | NC 5% | AC Loss | AC 1% | AC 5% | zCDP Loss | zCDP 1% | zCDP 5% | RDP Loss | RDP 1% | RDP 5% |
|------|---------|-------|-------|---------|-------|-------|-----------|---------|---------|----------|--------|--------|
| 0.1  | 0.95    | 0     | 0     | 0.95    | 0     | 0     | 0.94      | 0       | 0       | 0.93     | 0      | 0      |
| 1    | 0.94    | 0     | 0     | 0.94    | 0     | 0     | 0.92      | 0       | 6       | 0.91     | 0      | 94     |
| 10   | 0.94    | 0     | 0     | 0.87    | 0     | 1     | 0.81      | 0       | 20      | 0.80     | 0      | 109    |
| 100  | 0.93    | 0     | 0     | 0.61    | 1     | 32    | 0.49      | 30      | 281     | 0.48     | 11     | 202    |
| 1000 | 0.59    | 0     | 11    | 0.06    | 13    | 359   | 0.00      | 28      | 416     | 0.07     | 22     | 383    |

NC = naïve composition, AC = advanced composition, zCDP = zero concentrated differential privacy, RDP = Rényi differential privacy. "Loss" is the model's accuracy loss, and the "1%" and "5%" columns give the number of members exposed at that false positive tolerance. With no privacy noise added, the accuracy loss is 0.00, with 155 members exposed at 1% and 2,667 at 5%.
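For intuition about how the 1% and 5% columns above are computed, here is a hedged sketch using synthetic attack scores (see the paper and code for the actual procedure): the attack's decision threshold is set so that at most the given fraction of non-members is falsely flagged, and we count the members above that threshold.

```python
import numpy as np

def members_exposed(member_scores, nonmember_scores, fpr_tolerance):
    """Count training-set members flagged by a score-based membership
    inference attack when its decision threshold is chosen so that at
    most `fpr_tolerance` of non-members are (falsely) flagged."""
    member_scores = np.asarray(member_scores)
    nonmember_scores = np.asarray(nonmember_scores)
    # Lowest threshold whose false positive rate stays within tolerance.
    threshold = np.quantile(nonmember_scores, 1.0 - fpr_tolerance)
    return int((member_scores > threshold).sum())

# Hypothetical attack scores: members' scores drawn slightly higher than
# non-members', 10,000 of each, mirroring the 50% prior in the table.
rng = np.random.default_rng(0)
member_scores = rng.normal(0.2, 1.0, 10_000)
nonmember_scores = rng.normal(0.0, 1.0, 10_000)
for tol in (0.01, 0.05):
    print(tol, members_exposed(member_scores, nonmember_scores, tol))
```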

Bargav Jayaraman talked about this work at the DC-Area Anonymity, Privacy, and Security Seminar (25 February 2019) at the University of Maryland.

Paper: When Relaxations Go Bad: “Differentially-Private” Machine Learning
Code: https://github.com/bargavj/EvaluatingDPML