Search Results for author: Borja Balle

Found 32 papers, 11 papers with code

Unlocking Accuracy and Fairness in Differentially Private Image Classification

no code implementations 21 Aug 2023 Leonard Berrada, Soham De, Judy Hanwen Shen, Jamie Hayes, Robert Stanforth, David Stutz, Pushmeet Kohli, Samuel L. Smith, Borja Balle

The poor performance of classifiers trained with DP has prevented the widespread adoption of privacy preserving machine learning in industry.

Classification Fairness +2

Differentially Private Diffusion Models Generate Useful Synthetic Images

no code implementations 27 Feb 2023 Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Samuel L. Smith, Olivia Wiles, Borja Balle

By privately fine-tuning ImageNet pre-trained diffusion models with more than 80M parameters, we obtain SOTA results on CIFAR-10 and Camelyon17 in terms of both FID and the accuracy of downstream classifiers trained on synthetic data.

Image Generation Privacy Preserving

Tight Auditing of Differentially Private Machine Learning

no code implementations 15 Feb 2023 Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis

Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy.

Federated Learning

Extracting Training Data from Diffusion Models

1 code implementation 30 Jan 2023 Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace

Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images.

Privacy Preserving

Unlocking High-Accuracy Differentially Private Image Classification through Scale

1 code implementation 28 Apr 2022 Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, Borja Balle

Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points.

Classification Image Classification with Differential Privacy +1
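The abstract above appeals to the formal guarantee of Differential Privacy. For reference, the standard textbook definition (not specific to this paper) says a randomized mechanism $M$ is $(\varepsilon, \delta)$-differentially private if, for all datasets $D, D'$ differing in one record and all measurable output sets $S$:

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta
```

Intuitively, no adversary observing the model's output can confidently decide whether any individual training point was present.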

Reconstructing Training Data with Informed Adversaries

2 code implementations 13 Jan 2022 Borja Balle, Giovanni Cherubin, Jamie Hayes

Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g. generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.

Memorization Reconstruction Attack

Learning to be adversarially robust and differentially private

no code implementations 6 Jan 2022 Jamie Hayes, Borja Balle, M. Pawan Kumar

We study the difficulties in learning that arise from robust and differentially private optimization.

Binary Classification

A Law of Robustness for Weight-bounded Neural Networks

no code implementations 16 Feb 2021 Hisham Husain, Borja Balle

Our result coincides with that conjectured in (Bubeck et al., 2020) for two-layer networks under the assumption of bounded weights.

Private Reinforcement Learning with PAC and Regret Guarantees

no code implementations 18 Sep 2020 Giuseppe Vietri, Borja Balle, Akshay Krishnamurthy, Zhiwei Steven Wu

Motivated by high-stakes decision-making domains like personalized medicine where user information is inherently sensitive, we design privacy preserving exploration policies for episodic reinforcement learning (RL).

Decision Making Privacy Preserving +2


A Framework for Robustness Certification of Smoothed Classifiers using F-Divergences

no code implementations ICLR 2020 Krishnamurthy (Dj) Dvijotham, Jamie Hayes, Borja Balle, Zico Kolter, Chongli Qin, Andras Gyorgy, Kai Xiao, Sven Gowal, Pushmeet Kohli

Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results.

Audio Classification BIG-bench Machine Learning +1

Privacy- and Utility-Preserving Textual Analysis via Calibrated Multivariate Perturbations

1 code implementation 20 Oct 2019 Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, Tom Diethe

We conduct privacy audit experiments against 2 baseline models and utility experiments on 3 datasets to demonstrate the tradeoff between privacy and utility for varying values of epsilon on different task types.

Privacy Preserving
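Calibrated multivariate perturbation of this kind is commonly realized by adding noise with density proportional to $\exp(-\epsilon\|z\|_2)$ to a word's embedding and releasing the nearest vocabulary word. The sketch below illustrates that general metric-privacy idea under stated assumptions; the function names and the toy embedding table are hypothetical, not the authors' implementation.

```python
import numpy as np

def perturb_embedding(vec, epsilon, rng):
    """Sample noise with density proportional to exp(-epsilon * ||z||_2):
    direction uniform on the sphere, radius drawn from Gamma(d, 1/epsilon)."""
    d = len(vec)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=1.0 / epsilon)
    return vec + radius * direction

def privatize_word(word, emb, epsilon, rng):
    # Replace a word by the vocabulary item nearest to its noised embedding;
    # smaller epsilon means more noise, hence more frequent replacement.
    noisy = perturb_embedding(emb[word], epsilon, rng)
    dists = {w: np.linalg.norm(noisy - v) for w, v in emb.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
emb = {"cat": np.array([0.0, 0.0]),
       "dog": np.array([10.0, 0.0]),
       "car": np.array([0.0, 10.0])}
w = privatize_word("cat", emb, epsilon=100.0, rng=rng)  # large epsilon: little noise
```

With a large epsilon the word almost always maps back to itself, which is the utility end of the tradeoff the experiments measure.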

Actor Critic with Differentially Private Critic

no code implementations 14 Oct 2019 Jonathan Lebensold, William Hamilton, Borja Balle, Doina Precup

Reinforcement learning algorithms are known to be sample inefficient, and often performance on one task can be substantially improved by leveraging information (e.g., via pre-training) on other related tasks.

reinforcement-learning Reinforcement Learning (RL) +1

Differentially Private Summation with Multi-Message Shuffling

1 code implementation 20 Jun 2019 Borja Balle, James Bell, Adria Gascon, Kobbi Nissim

In recent work, Cheu et al. (Eurocrypt 2019) proposed a protocol for $n$-party real summation in the shuffle model of differential privacy with $O_{\epsilon, \delta}(1)$ error and $\Theta(\epsilon\sqrt{n})$ one-bit messages per party.
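In the shuffle model, each user locally randomizes their input, a trusted shuffler applies a uniformly random permutation, and the analyzer only sees the anonymous multiset of messages. The following is a minimal single-message sketch of that pipeline for summation (plain randomized response with debiasing), purely to illustrate the model; it is not the multi-message protocol of this paper, and the error rates differ.

```python
import numpy as np

def shuffle_sum_protocol(x, flip_prob, rng):
    """One-bit-per-user summation sketch in the shuffle model (illustrative)."""
    n = len(x)
    # Local randomizer: with prob 1 - flip_prob send Bernoulli(x_i),
    # otherwise send a uniformly random bit.
    honest = rng.random(n) >= flip_prob
    bits = np.where(honest, rng.random(n) < x, rng.random(n) < 0.5)
    # Shuffler: random permutation; the sum is unchanged, but the analyzer
    # can no longer link any bit to the user who sent it.
    bits = rng.permutation(bits.astype(float))
    # Analyzer: debias, since E[bit_i] = (1 - flip_prob) * x_i + flip_prob / 2.
    return (bits.sum() - n * flip_prob / 2) / (1 - flip_prob)

rng = np.random.default_rng(0)
x = rng.random(10_000)  # each user holds a value in [0, 1]
est = shuffle_sum_protocol(x, flip_prob=0.25, rng=rng)
```

The estimator is unbiased, but its error grows with $\sqrt{n}$, which is exactly the regime the multi-message protocols in this line of work improve to $O_{\epsilon,\delta}(1)$.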

Privacy Amplification by Mixing and Diffusion Mechanisms

no code implementations NeurIPS 2019 Borja Balle, Gilles Barthe, Marco Gaboardi, Joseph Geumlek

A fundamental result in differential privacy states that the privacy guarantees of a mechanism are preserved by any post-processing of its output.

Model-Agnostic Counterfactual Explanations for Consequential Decisions

1 code implementation 27 May 2019 Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera

Predictive models are being increasingly used to support consequential decision making at the individual level in contexts such as pretrial bail and loan approval.

counterfactual Decision Making

Hypothesis Testing Interpretations and Renyi Differential Privacy

no code implementations 24 May 2019 Borja Balle, Gilles Barthe, Marco Gaboardi, Justin Hsu, Tetsuya Sato

These conditions are useful to analyze the distinguishability power of divergences and we use them to study the hypothesis testing interpretation of some relaxations of differential privacy based on Renyi divergence.

Test Two-sample testing
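For context, the Renyi divergence of order $\alpha > 1$ that these relaxations are built on is the standard quantity

```latex
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],
```

and a mechanism $M$ satisfies $(\alpha, \epsilon)$-Renyi DP when $D_{\alpha}(M(D) \,\|\, M(D')) \le \epsilon$ for all neighboring datasets $D, D'$ (these are the textbook definitions, not notation introduced by this paper).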

Privacy-preserving Active Learning on Sensitive Data for User Intent Classification

no code implementations 26 Mar 2019 Oluwaseyi Feyisetan, Thomas Drake, Borja Balle, Tom Diethe

Active learning holds promise of significantly reducing data annotation costs while maintaining reasonable model performance.

Active Learning Binary Classification +4

Continual Learning in Practice

no code implementations 12 Mar 2019 Tom Diethe, Tom Borchert, Eno Thereska, Borja Balle, Neil Lawrence

This paper describes a reference architecture for self-maintaining systems that can learn continually, as data arrives.

AutoML BIG-bench Machine Learning +1

The Privacy Blanket of the Shuffle Model

1 code implementation 7 Mar 2019 Borja Balle, James Bell, Adria Gascon, Kobbi Nissim

Additionally, Erlingsson et al. (SODA 2019) provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user.

Hierarchical Methods of Moments

1 code implementation NeurIPS 2017 Matteo Ruffini, Guillaume Rabusseau, Borja Balle

Spectral methods of moments provide a powerful tool for learning the parameters of latent variable models.

Tensor Decomposition

Subsampled Rényi Differential Privacy and Analytical Moments Accountant

1 code implementation 31 Jul 2018 Yu-Xiang Wang, Borja Balle, Shiva Kasiviswanathan

We study the problem of subsampling in differential privacy (DP), a question that is the centerpiece behind many successful differentially private machine learning algorithms.

BIG-bench Machine Learning
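A moments accountant tracks Renyi-DP guarantees across compositions and converts the best order to an $(\epsilon, \delta)$ statement. The sketch below shows that conversion for plain (unsubsampled) composition of the Gaussian mechanism, using the standard facts that the Gaussian mechanism with sensitivity 1 satisfies $\epsilon_\alpha = \alpha/(2\sigma^2)$ and that RDP composes additively; the subsampling amplification that is this paper's contribution is deliberately omitted.

```python
import math

def rdp_gaussian(alpha, sigma):
    # RDP of the Gaussian mechanism with L2 sensitivity 1 at order alpha.
    return alpha / (2 * sigma ** 2)

def eps_from_rdp(sigma, steps, delta):
    """Compose `steps` Gaussian mechanisms in RDP, then convert to
    (eps, delta)-DP by searching over orders alpha > 1 (standard conversion:
    eps = rdp(alpha) + log(1/delta) / (alpha - 1))."""
    best = float("inf")
    for alpha in (1 + k / 10 for k in range(1, 1000)):
        eps = steps * rdp_gaussian(alpha, sigma) + math.log(1 / delta) / (alpha - 1)
        best = min(best, eps)
    return best

eps = eps_from_rdp(sigma=5.0, steps=100, delta=1e-5)
```

More noise (larger sigma) or fewer steps yields a smaller epsilon, which is the knob DP-SGD-style training turns.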

Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences

no code implementations NeurIPS 2018 Borja Balle, Gilles Barthe, Marco Gaboardi

Differential privacy comes equipped with multiple analytical tools for the design of private data analyses.

Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising

1 code implementation ICML 2018 Borja Balle, Yu-Xiang Wang

The Gaussian mechanism is an essential building block used in a multitude of differentially private data analysis algorithms.
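For comparison, the classical calibration this paper improves on (from the Dwork-Roth textbook analysis, valid only for $\epsilon < 1$) sets $\sigma = \Delta\sqrt{2\ln(1.25/\delta)}/\epsilon$. A minimal sketch of that baseline, not of the paper's tighter analytic calibration:

```python
import math
import random

def classical_gaussian_sigma(epsilon, delta, sensitivity=1.0):
    """Classical Gaussian-mechanism calibration; requires epsilon < 1.
    The paper's analytic calibration is tighter and also covers epsilon >= 1."""
    assert 0 < epsilon < 1 and 0 < delta < 1
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def gaussian_mechanism(value, epsilon, delta, sensitivity=1.0):
    # Release value + N(0, sigma^2) with sigma calibrated to (eps, delta)-DP.
    sigma = classical_gaussian_sigma(epsilon, delta, sensitivity)
    return value + random.gauss(0.0, sigma)

sigma = classical_gaussian_sigma(epsilon=0.5, delta=1e-5)
```

Note how sigma blows up as epsilon shrinks; the analytic calibration also removes the slack that makes this formula loose at moderate epsilon.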


Multitask Spectral Learning of Weighted Automata

no code implementations NeurIPS 2017 Guillaume Rabusseau, Borja Balle, Joelle Pineau

We first present a natural notion of relatedness between WFAs by considering to which extent several WFAs can share a common underlying representation.

Spectral Learning from a Single Trajectory under Finite-State Policies

no code implementations ICML 2017 Borja Balle, Odalric-Ambrym Maillard

We present spectral methods of moments for learning sequential models from a single trajectory, in stark contrast with the classical literature that assumes the availability of multiple i.i.d. trajectories.

Generalization Bounds for Weighted Automata

no code implementations 25 Oct 2016 Borja Balle, Mehryar Mohri

We present new data-dependent generalization guarantees for learning weighted automata expressed in terms of the Rademacher complexity of these families.

Generalization Bounds

Differentially Private Policy Evaluation

no code implementations 7 Mar 2016 Borja Balle, Maziar Gomrokchi, Doina Precup

We present the first differentially private algorithms for reinforcement learning, which apply to the task of evaluating a fixed policy.

reinforcement-learning Reinforcement Learning (RL)

Low-Rank Approximation of Weighted Tree Automata

no code implementations 4 Nov 2015 Guillaume Rabusseau, Borja Balle, Shay B. Cohen

We describe a technique to minimize weighted tree automata (WTA), a powerful formalism that subsumes probabilistic context-free grammars (PCFGs) and latent-variable PCFGs.
