no code implementations • 21 Aug 2023 • Leonard Berrada, Soham De, Judy Hanwen Shen, Jamie Hayes, Robert Stanforth, David Stutz, Pushmeet Kohli, Samuel L. Smith, Borja Balle
The poor performance of classifiers trained with DP has prevented the widespread adoption of privacy-preserving machine learning in industry.
no code implementations • 27 Feb 2023 • Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Samuel L. Smith, Olivia Wiles, Borja Balle
By privately fine-tuning ImageNet pre-trained diffusion models with more than 80M parameters, we obtain SOTA results on CIFAR-10 and Camelyon17 in terms of both FID and the accuracy of downstream classifiers trained on synthetic data.
no code implementations • 15 Feb 2023 • Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis
Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy.
1 code implementation • 30 Jan 2023 • Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images.
1 code implementation • 28 Apr 2022 • Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, Borja Balle
Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points.
2 code implementations • 13 Jan 2022 • Borja Balle, Giovanni Cherubin, Jamie Hayes
Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
no code implementations • 6 Jan 2022 • Jamie Hayes, Borja Balle, M. Pawan Kumar
We study the difficulties in learning that arise from robust and differentially private optimization.
no code implementations • 8 Dec 2021 • Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, Iason Gabriel
We discuss the points of origin of different risks and point to potential mitigation approaches.
no code implementations • 16 Feb 2021 • Hisham Husain, Borja Balle
Our result coincides with that conjectured in (Bubeck et al., 2020) for two-layer networks under the assumption of bounded weights.
no code implementations • 18 Sep 2020 • Giuseppe Vietri, Borja Balle, Akshay Krishnamurthy, Zhiwei Steven Wu
Motivated by high-stakes decision-making domains like personalized medicine where user information is inherently sensitive, we design privacy-preserving exploration policies for episodic reinforcement learning (RL).
no code implementations • NeurIPS 2020 • Borja Balle, Peter Kairouz, H. Brendan McMahan, Om Thakkar, Abhradeep Thakurta
It has privacy/accuracy trade-offs similar to privacy amplification by subsampling/shuffling.
no code implementations • ICLR 2020 • Krishnamurthy (Dj) Dvijotham, Jamie Hayes, Borja Balle, Zico Kolter, Chongli Qin, Andras Gyorgy, Kai Xiao, Sven Gowal, Pushmeet Kohli
Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results.
1 code implementation • 20 Oct 2019 • Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, Tom Diethe
We conduct privacy audit experiments against 2 baseline models and utility experiments on 3 datasets to demonstrate the tradeoff between privacy and utility for varying values of epsilon on different task types.
no code implementations • 14 Oct 2019 • Jonathan Lebensold, William Hamilton, Borja Balle, Doina Precup
Reinforcement learning algorithms are known to be sample inefficient, and often performance on one task can be substantially improved by leveraging information (e.g., via pre-training) on other related tasks.
1 code implementation • 20 Jun 2019 • Borja Balle, James Bell, Adria Gascon, Kobbi Nissim
In recent work, Cheu et al. (Eurocrypt 2019) proposed a protocol for $n$-party real summation in the shuffle model of differential privacy with $O_{\epsilon, \delta}(1)$ error and $\Theta(\epsilon\sqrt{n})$ one-bit messages per party.
no code implementations • NeurIPS 2019 • Borja Balle, Gilles Barthe, Marco Gaboardi, Joseph Geumlek
A fundamental result in differential privacy states that the privacy guarantees of a mechanism are preserved by any post-processing of its output.
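The post-processing property can be made concrete with a minimal sketch (a standard textbook illustration, not this paper's contribution, which concerns divergence-based relaxations): any data-independent transformation of a differentially private output, such as rounding and clamping a noisy count, retains the same privacy parameters.

```python
import numpy as np

def noisy_count(count, epsilon, rng=None):
    """Release an integer count with epsilon-DP Laplace noise (sensitivity 1)."""
    rng = rng or np.random.default_rng()
    return count + rng.laplace(0.0, 1.0 / epsilon)

# Post-processing: rounding to an integer and clamping to the valid range
# [0, inf) is a data-independent function of the DP output, so the released
# value is still epsilon-DP with the same epsilon.
released = max(0, round(noisy_count(42, epsilon=1.0)))
```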
1 code implementation • 27 May 2019 • Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera
Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval.
1 code implementation • 26 May 2019 • Brendan Avent, Javier Gonzalez, Tom Diethe, Andrei Paleyes, Borja Balle
Differential privacy is a mathematical framework for privacy-preserving data analysis.
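As a concrete illustration of the framework (standard material, distinct from the hyperparameter-selection method this entry describes), the classic Laplace mechanism achieves pure epsilon-DP for a numeric query by adding noise scaled to the query's L1 sensitivity divided by epsilon:

```python
import numpy as np

def laplace_mechanism(true_answer, l1_sensitivity, epsilon, rng=None):
    """epsilon-DP release of a numeric query via Laplace noise.

    The noise scale is l1_sensitivity / epsilon, so smaller epsilon
    (stronger privacy) means larger noise.
    """
    rng = rng or np.random.default_rng()
    scale = l1_sensitivity / epsilon
    return true_answer + rng.laplace(0.0, scale)

# Counting query: adding or removing one record changes the count by at
# most 1, so the L1 sensitivity is 1.
true_count = 42
noisy = laplace_mechanism(true_count, l1_sensitivity=1.0, epsilon=1.0)
```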
no code implementations • 24 May 2019 • Borja Balle, Gilles Barthe, Marco Gaboardi, Justin Hsu, Tetsuya Sato
These conditions are useful to analyze the distinguishability power of divergences and we use them to study the hypothesis testing interpretation of some relaxations of differential privacy based on Renyi divergence.
no code implementations • 26 Mar 2019 • Oluwaseyi Feyisetan, Thomas Drake, Borja Balle, Tom Diethe
Active learning holds promise of significantly reducing data annotation costs while maintaining reasonable model performance.
no code implementations • 12 Mar 2019 • Tom Diethe, Tom Borchert, Eno Thereska, Borja Balle, Neil Lawrence
This paper describes a reference architecture for self-maintaining systems that can learn continually, as data arrives.
1 code implementation • 7 Mar 2019 • Borja Balle, James Bell, Adria Gascon, Kobbi Nissim
Additionally, Erlingsson et al. (SODA 2019) provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user.
1 code implementation • NeurIPS 2017 • Matteo Ruffini, Guillaume Rabusseau, Borja Balle
Spectral methods of moments provide a powerful tool for learning the parameters of latent variable models.
1 code implementation • 31 Jul 2018 • Yu-Xiang Wang, Borja Balle, Shiva Kasiviswanathan
We study the problem of subsampling in differential privacy (DP), a question that is the centerpiece behind many successful differentially private machine learning algorithms.
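A hedged sketch of the basic idea (the classical amplification bound, not the tighter analyses this paper develops): running an epsilon-DP mechanism on a Poisson subsample that includes each record with probability q yields roughly ln(1 + q(e^epsilon - 1))-DP overall.

```python
import numpy as np

def amplified_epsilon(epsilon, q):
    """Classical privacy amplification by subsampling:
    epsilon' = ln(1 + q * (e^epsilon - 1)), which is <= epsilon for q <= 1."""
    return np.log1p(q * np.expm1(epsilon))

def poisson_subsample(data, q, rng=None):
    """Include each record independently with probability q."""
    rng = rng or np.random.default_rng()
    mask = rng.random(len(data)) < q
    return data[mask]

# With a 1% sampling rate, a 1.0-DP base mechanism gives ~0.017-DP overall.
data = np.arange(1000)
sample = poisson_subsample(data, q=0.01)
eps_prime = amplified_epsilon(1.0, 0.01)
```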
no code implementations • NeurIPS 2018 • Borja Balle, Gilles Barthe, Marco Gaboardi
Differential privacy comes equipped with multiple analytical tools for the design of private data analyses.
1 code implementation • ICML 2018 • Borja Balle, Yu-Xiang Wang
The Gaussian mechanism is an essential building block used in a multitude of differentially private data analysis algorithms.
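For context, here is a minimal sketch of the classical Gaussian mechanism (the textbook calibration valid for epsilon < 1, not the improved analytic calibration this paper proposes), which adds noise scaled to the query's L2 sensitivity:

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """Classical (epsilon, delta)-DP Gaussian mechanism (requires epsilon < 1).

    Adds N(0, sigma^2) noise with
    sigma = sqrt(2 * ln(1.25 / delta)) * l2_sensitivity / epsilon.
    """
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privately release the mean of n values in [0, 1]; replacing one
# record changes the mean by at most 1/n, so the L2 sensitivity is 1/n.
n = 1000
data = np.clip(np.random.default_rng(0).random(n), 0.0, 1.0)
private_mean = gaussian_mechanism(data.mean(), l2_sensitivity=1.0 / n,
                                  epsilon=0.5, delta=1e-5)
```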
no code implementations • NeurIPS 2017 • Guillaume Rabusseau, Borja Balle, Joelle Pineau
We first present a natural notion of relatedness between WFAs by considering the extent to which several WFAs can share a common underlying representation.
no code implementations • ICML 2017 • Borja Balle, Odalric-Ambrym Maillard
We present spectral methods of moments for learning sequential models from a single trajectory, in stark contrast with the classical literature that assumes the availability of multiple i.i.d. trajectories.
no code implementations • 25 Oct 2016 • Borja Balle, Mehryar Mohri
We present new data-dependent generalization guarantees for learning weighted automata expressed in terms of the Rademacher complexity of these families.
no code implementations • 7 Mar 2016 • Borja Balle, Maziar Gomrokchi, Doina Precup
We present the first differentially private algorithms for reinforcement learning, which apply to the task of evaluating a fixed policy.
no code implementations • 4 Nov 2015 • Guillaume Rabusseau, Borja Balle, Shay B. Cohen
We describe a technique to minimize weighted tree automata (WTA), a powerful formalism that subsumes probabilistic context-free grammars (PCFGs) and latent-variable PCFGs.