When Relaxations Go Bad: "Differentially-Private" Machine Learning

We have posted a paper by Bargav Jayaraman and me, When Relaxations Go Bad: “Differentially-Private” Machine Learning (code available at https://github.com/bargavj/EvaluatingDPML). Differential privacy is becoming a standard notion for privacy-preserving machine learning over sensitive data. It provides formal guarantees, in terms of a privacy budget ε, on how much information about individual training records a model may leak. While privacy leakage is directly correlated with the privacy budget, how to calibrate that budget in practice is not well understood.
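
For reference, the guarantee takes the standard form of ε-differential privacy: a randomized mechanism M is ε-differentially private if, for every pair of datasets D and D′ differing in a single record and every set of outputs S,

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S]
```

Smaller values of ε mean any single training record can have only a small effect on the mechanism's output distribution; the relaxations examined in the paper weaken this guarantee in exchange for utility.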

Read More…

NeurIPS 2018: Distributed Learning without Distress

Bargav Jayaraman presented our work on privacy-preserving machine learning at the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018) in Montreal. Distributed learning (sometimes known as federated learning) allows a group of independent data owners to collaboratively learn a model over their data sets without exposing their private data. Our approach combines differential privacy with secure multi-party computation to both protect the data during training and produce a model that provides privacy against inference attacks.
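
As a rough illustration of how the two pieces fit together, here is a minimal Python sketch of differentially private gradient aggregation. It is not the protocol from the paper: the function names, clipping bound, and noise calibration are illustrative assumptions, and the plain summation stands in for the secure multi-party computation that, in our approach, keeps individual parties' gradients hidden during aggregation.

```python
import numpy as np

def clip_l1(grad, bound):
    """Scale a gradient so its L1 norm is at most `bound`,
    capping each party's influence on the aggregate."""
    norm = np.linalg.norm(grad, ord=1)
    return grad * min(1.0, bound / norm) if norm > 0 else grad

def private_aggregate(gradients, clip_bound=1.0, epsilon=0.5):
    """Sum clipped gradients and perturb the sum with Laplace noise.

    Illustrative only: the noise scale assumes the L1 sensitivity of
    the sum is `clip_bound`; a real deployment must derive sensitivity
    from its precise neighboring-dataset definition, and would run this
    step inside secure multi-party computation so that no party ever
    sees another's gradient in the clear.
    """
    total = sum(clip_l1(g, clip_bound) for g in gradients)
    noise = np.random.laplace(scale=clip_bound / epsilon, size=total.shape)
    return total + noise

# Toy usage: three data owners contribute 4-dimensional gradients.
rng = np.random.default_rng(seed=0)
grads = [rng.normal(size=4) for _ in range(3)]
print(private_aggregate(grads))
```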

Read More…

All Posts by Category or Tag