Search Results for author: Jamie Hayes

Found 25 papers, 9 papers with code

Buffer Overflow in Mixture of Experts

no code implementations 8 Feb 2024 Jamie Hayes, Ilia Shumailov, Itay Yona

Mixture of Experts (MoE) has become a key ingredient for scaling large foundation models while keeping inference costs steady.
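For orientation only (not code from the paper): a minimal sketch of generic top-k MoE routing with a fixed per-expert token buffer, the "buffer" whose overflow behaviour the title refers to. The function names, the drop-on-overflow policy, and the sequential processing order are illustrative assumptions.

```python
import numpy as np

def route_tokens(logits, capacity, top_k=1):
    """Toy top-k MoE routing with a fixed per-expert buffer.

    logits: (num_tokens, num_experts) router scores for one batch.
    capacity: max tokens each expert's buffer accepts per batch.
    Tokens arriving after an expert's buffer is full are dropped,
    a common capacity policy; because buffers are shared across the
    batch, one user's tokens can affect how another's are routed.
    """
    num_tokens, num_experts = logits.shape
    buffer_fill = np.zeros(num_experts, dtype=int)
    assignment = [[] for _ in range(num_tokens)]

    for t in range(num_tokens):
        preferred = np.argsort(-logits[t])[:top_k]
        for e in preferred:
            if buffer_fill[e] < capacity:
                buffer_fill[e] += 1
                assignment[t].append(int(e))
            # else: token dropped for this expert (buffer overflow)
    return assignment, buffer_fill

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(8, 4))   # 8 tokens, 4 experts
    print(route_tokens(logits, capacity=2))
```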

Unlocking Accuracy and Fairness in Differentially Private Image Classification

2 code implementations 21 Aug 2023 Leonard Berrada, Soham De, Judy Hanwen Shen, Jamie Hayes, Robert Stanforth, David Stutz, Pushmeet Kohli, Samuel L. Smith, Borja Balle

The poor performance of classifiers trained with DP has prevented the widespread adoption of privacy preserving machine learning in industry.

Classification Fairness +2

Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy

no code implementations 8 Jul 2023 Georgios Kaissis, Jamie Hayes, Alexander Ziller, Daniel Rueckert

We explore Reconstruction Robustness (ReRo), which was recently proposed as an upper bound on the success of data reconstruction attacks against machine learning models.
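For context, the hypothesis-testing interpretation named in the title is the standard characterisation of (ε, δ)-DP as a constraint on an adversary's trade-off between type-I and type-II errors when distinguishing neighbouring datasets (Kairouz et al.); ReRo-style bounds build on this region. This is background, not the paper's new bound:

```latex
% For any test with type-I error \alpha against an
% (\varepsilon, \delta)-DP mechanism, the type-II error \beta satisfies
\beta \;\ge\; \max\Bigl(0,\; 1 - \delta - e^{\varepsilon}\alpha,\; e^{-\varepsilon}\bigl(1 - \delta - \alpha\bigr)\Bigr).
```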

Differentially Private Diffusion Models Generate Useful Synthetic Images

no code implementations 27 Feb 2023 Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Samuel L. Smith, Olivia Wiles, Borja Balle

By privately fine-tuning ImageNet pre-trained diffusion models with more than 80M parameters, we obtain SOTA results on CIFAR-10 and Camelyon17 in terms of both FID and the accuracy of downstream classifiers trained on synthetic data.

Image Generation Privacy Preserving
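"Privately fine-tuning" here means DP-SGD-style training. Below is a generic per-example clipping plus Gaussian noise update, a sketch of the mechanism rather than the paper's exact recipe (batch sizes, augmentation multiplicity and other details are omitted; all names are illustrative):

```python
import torch

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier):
    """One generic DP-SGD update: clip each example's gradient to clip_norm,
    sum, add Gaussian noise scaled to the clip norm, then step.

    per_example_grads: list over examples; each entry is a list of tensors
    matching params. Hyperparameter names are illustrative."""
    batch_size = len(per_example_grads)
    summed = [torch.zeros_like(p) for p in params]

    for grads in per_example_grads:
        # Per-example clipping bounds each example's influence on the update.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    with torch.no_grad():
        for p, s in zip(params, summed):
            # Noise calibrated to the clipping norm gives the DP guarantee.
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / batch_size) * (s + noise))
```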

Towards Unbounded Machine Unlearning

1 code implementation NeurIPS 2023 Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, Eleni Triantafillou

This paper is the first, to our knowledge, to study unlearning for different applications (removing biases (RB), resolving confusion (RC), and user privacy (UP)), with the view that each has its own desiderata, definitions of 'forgetting', and associated metrics for forget quality.

Inference Attack Machine Unlearning +1

Tight Auditing of Differentially Private Machine Learning

no code implementations 15 Feb 2023 Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis

Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy.

Federated Learning
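For background, auditing typically converts a membership-inference attack's error rates into an empirical lower bound on ε via the (ε, δ)-DP constraint on false-positive/false-negative rates. The sketch below is that classic multi-run computation; it does not reproduce the paper's two-run scheme, and the numbers in the example are arbitrary:

```python
import math

def empirical_epsilon(fpr, fnr, delta=1e-5):
    """Lower bound on epsilon implied by an attack's observed error rates.

    Any (eps, delta)-DP mechanism forces fnr >= 1 - delta - exp(eps) * fpr
    (and symmetrically with fpr and fnr swapped), so the observed rates
    imply eps >= the bound below. In practice the rates should carry
    confidence intervals before being plugged in."""
    candidates = [0.0]
    for num, den in ((1 - delta - fpr, fnr), (1 - delta - fnr, fpr)):
        if num > 0 and den > 0:
            candidates.append(math.log(num / den))
    return max(candidates)

print(empirical_epsilon(fpr=0.01, fnr=0.05))  # illustrative values only
```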

Extracting Training Data from Diffusion Models

no code implementations 30 Jan 2023 Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace

Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images.

Privacy Preserving

Unlocking High-Accuracy Differentially Private Image Classification through Scale

2 code implementations 28 Apr 2022 Soham De, Leonard Berrada, Jamie Hayes, Samuel L. Smith, Borja Balle

Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points.

Classification Image Classification with Differential Privacy +1
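For reference, the formal guarantee referred to in the abstract is the standard (ε, δ)-differential-privacy definition:

```latex
% A randomised mechanism M is (\varepsilon, \delta)-differentially private if,
% for all neighbouring datasets D, D' (differing in one record) and all
% measurable output sets S,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta.
```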

Reconstructing Training Data with Informed Adversaries

2 code implementations 13 Jan 2022 Borja Balle, Giovanni Cherubin, Jamie Hayes

Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.

Memorization Reconstruction Attack

Learning to be adversarially robust and differentially private

no code implementations 6 Jan 2022 Jamie Hayes, Borja Balle, M. Pawan Kumar

We study the difficulties in learning that arise from robust and differentially private optimization.

Binary Classification

Towards transformation-resilient provenance detection of digital media

no code implementations 14 Nov 2020 Jamie Hayes, Krishnamurthy Dvijotham, Yutian Chen, Sander Dieleman, Pushmeet Kohli, Norman Casagrande

In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations.
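A rough sketch of the idea as described in the abstract: train a detector so that the watermark is still recognised after post-processing. The additive embedding, transformation set, loss and models below are placeholders, and the sketch uses randomly sampled transformations rather than ReSWAT's adversarial (worst-case) training:

```python
import random
import torch
import torch.nn.functional as F

def train_step(detector, optimizer, signal, watermark, transforms):
    """One illustrative step: embed a watermark, push both the watermarked
    and clean signals through a random post-processing transformation, and
    train the detector to label them 1 and 0 respectively."""
    watermarked = signal + watermark            # placeholder additive embedding
    t = random.choice(transforms)               # e.g. compression, cropping, noise
    batch = torch.stack([t(watermarked), t(signal)])
    labels = torch.tensor([1.0, 0.0])

    logits = detector(batch).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```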

Adaptive Webpage Fingerprinting from TLS Traces

no code implementations 19 Oct 2020 Vasilios Mavroudis, Jamie Hayes

In webpage fingerprinting, an on-path adversary infers the specific webpage loaded by a victim user by analysing the patterns in the encrypted TLS traffic exchanged between the user's browser and the website's servers.
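To make the setting concrete, here is a generic webpage-fingerprinting pipeline (not the paper's adaptive method): the adversary only sees record sizes and directions, and trains a classifier over those sequences. Feature choices and model are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trace_features(trace, max_len=200):
    """trace: list of signed TLS record sizes (+ outgoing, - incoming).
    Pads/truncates to a fixed-length vector plus simple aggregates."""
    sizes = np.zeros(max_len)
    sizes[:min(len(trace), max_len)] = trace[:max_len]
    outgoing = sum(1 for s in trace if s > 0)
    total_bytes = sum(abs(s) for s in trace)
    return np.concatenate([sizes, [len(trace), outgoing, total_bytes]])

def fit_fingerprinter(traces, labels):
    """traces: per-pageload record-size sequences; labels: which webpage."""
    X = np.stack([trace_features(t) for t in traces])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)
```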

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning

no code implementations 8 Sep 2020 Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro

This paper investigates whether and to what extent one can use differential privacy (DP) to protect both privacy and robustness in FL.

Federated Learning
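For orientation, the local/central distinction in the title roughly corresponds to where noise is added. A schematic FedAvg-style round with both options follows; the clipping, noise scales and function names are illustrative, not the paper's configuration:

```python
import numpy as np

def clip(update, clip_norm):
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def fl_round(client_updates, clip_norm=1.0, sigma=1.0, mode="central"):
    """Schematic aggregation of client model updates with DP.

    local  DP: each client clips and noises its own update before sending.
    central DP: a trusted server clips client updates and adds noise once,
                after aggregation."""
    if mode == "local":
        noisy = [clip(u, clip_norm) + np.random.normal(0, sigma * clip_norm, u.shape)
                 for u in client_updates]
        return np.mean(noisy, axis=0)
    clipped = [clip(u, clip_norm) for u in client_updates]
    agg = np.mean(clipped, axis=0)
    return agg + np.random.normal(0, sigma * clip_norm / len(client_updates), agg.shape)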

Trade-offs between membership privacy & adversarially robust learning

no code implementations 8 Jun 2020 Jamie Hayes

Consequently, an abundance of research has been devoted to designing machine learning methods that are robust to adversarial examples.

BIG-bench Machine Learning Fairness +3

Extensions and limitations of randomized smoothing for robustness guarantees

no code implementations 7 Jun 2020 Jamie Hayes

Randomized smoothing, a method to certify that a classifier's decision on an input is invariant under adversarial noise, offers attractive advantages over other certification methods.
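For reference, the construction being certified is the standard Gaussian-smoothed classifier with the certified l2 radius of Cohen et al. (2019); this is background rather than the paper's extension:

```latex
% Smoothed classifier and its certified radius:
g(x) = \arg\max_{c}\; \Pr_{\eta \sim \mathcal{N}(0, \sigma^2 I)}\bigl[f(x + \eta) = c\bigr],
\qquad
R = \tfrac{\sigma}{2}\Bigl(\Phi^{-1}\bigl(\underline{p_A}\bigr) - \Phi^{-1}\bigl(\overline{p_B}\bigr)\Bigr),
% where \underline{p_A} lower-bounds the top-class probability,
% \overline{p_B} upper-bounds the runner-up probability, and g is
% guaranteed constant within \ell_2 distance R of x.
```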

Unique properties of adversarially trained linear classifiers on Gaussian data

no code implementations 6 Jun 2020 Jamie Hayes

Machine learning models are vulnerable to adversarial perturbations that, when added to an input, can cause high-confidence misclassifications.

BIG-bench Machine Learning Binary Classification +1

A Framework for Robustness Certification of Smoothed Classifiers Using F-Divergences

no code implementations ICLR 2020 Krishnamurthy (Dj) Dvijotham, Jamie Hayes, Borja Balle, Zico Kolter, Chongli Qin, Andras Gyorgy, Kai Xiao, Sven Gowal, Pushmeet Kohli

Formal verification techniques that compute provable guarantees on properties of machine learning models, like robustness to norm-bounded adversarial perturbations, have yielded impressive results.

Audio Classification BIG-bench Machine Learning +1

Provenance detection through learning transformation-resilient watermarking

no code implementations 25 Sep 2019 Jamie Hayes, Krishnamurthy Dvijotham, Yutian Chen, Sander Dieleman, Pushmeet Kohli, Norman Casagrande

In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations.

Contamination Attacks and Mitigation in Multi-Party Machine Learning

no code implementations NeurIPS 2018 Jamie Hayes, Olga Ohrimenko

Machine learning is data hungry; the more data a model has access to in training, the more likely it is to perform well at inference time.

BIG-bench Machine Learning

A note on hyperparameters in black-box adversarial examples

1 code implementation 15 Nov 2018 Jamie Hayes

Black-box attacks assume no knowledge of the model weights or architecture.

Evading classifiers in discrete domains with provable optimality guarantees

2 code implementations 25 Oct 2018 Bogdan Kulynych, Jamie Hayes, Nikita Samarin, Carmela Troncoso

We introduce a graphical framework that (1) generalizes existing attacks in discrete domains, (2) can accommodate complex cost functions beyond $p$-norms, including financial cost incurred when attacking a classifier, and (3) efficiently produces valid adversarial examples with guarantees of minimal adversarial cost.

Adversarial Robustness Spam detection +2
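The "graphical framework" above casts attack construction as search over a graph whose nodes are candidate inputs and whose edge weights are transformation costs, so a cheapest adversarial example is a shortest-path problem. A minimal Dijkstra-style sketch of that idea, with placeholder expand/is_adversarial callbacks:

```python
import heapq

def min_cost_adversarial(x0, expand, is_adversarial):
    """Find the cheapest sequence of discrete transformations that evades
    the classifier, in the spirit of the paper's graph formulation.

    expand(x)         -> iterable of (neighbour, transformation_cost)
    is_adversarial(x) -> True once the classifier's decision is flipped
    Nodes must be hashable and comparable (e.g. tuples of features).
    Returns (total_cost, adversarial_example) or None."""
    frontier = [(0.0, x0)]
    best = {x0: 0.0}
    while frontier:
        cost, x = heapq.heappop(frontier)
        if is_adversarial(x):
            return cost, x
        if cost > best.get(x, float("inf")):
            continue  # stale queue entry
        for y, step_cost in expand(x):
            new_cost = cost + step_cost
            if new_cost < best.get(y, float("inf")):
                best[y] = new_cost
                heapq.heappush(frontier, (new_cost, y))
    return None
```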

Learning Universal Adversarial Perturbations with Generative Models

1 code implementation 17 Aug 2017 Jamie Hayes, George Danezis

Neural networks are known to be vulnerable to adversarial examples: inputs that have been intentionally perturbed to remain visually similar to the source input but cause a misclassification.

LOGAN: Membership Inference Attacks Against Generative Models

1 code implementation 22 May 2017 Jamie Hayes, Luca Melis, George Danezis, Emiliano De Cristofaro

Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution.
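A rough sketch of the white-box attack idea the paper studies: a GAN's discriminator tends to assign higher confidence to records it was trained on, so ranking candidates by discriminator score yields membership guesses. The function names and thresholding-by-top-k are illustrative assumptions:

```python
import numpy as np

def membership_scores(discriminator, candidates):
    """Score candidate records with the trained discriminator; higher scores
    are taken as evidence of training-set membership."""
    return np.array([float(discriminator(x)) for x in candidates])

def predict_members(discriminator, candidates, k):
    """Flag the k highest-scoring candidates as predicted members."""
    scores = membership_scores(discriminator, candidates)
    flags = np.zeros(len(candidates), dtype=bool)
    flags[np.argsort(-scores)[:k]] = True
    return flags
```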

Generating Steganographic Images via Adversarial Training

1 code implementation NeurIPS 2017 Jamie Hayes, George Danezis

In this paper, we apply adversarial training techniques to the discriminative task of learning a steganographic algorithm.

Image Generation
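The setup described in the abstract trains an embedding network and a decoding network against an adversary that tries to tell stego images from covers. The sketch below shows one such adversarial step with placeholder models, losses and shapes (it assumes bob outputs values in [0, 1] and eve outputs one logit per image):

```python
import torch
import torch.nn.functional as F

def steg_step(alice, bob, eve, opt_ab, opt_eve, cover, message):
    """alice embeds `message` in `cover`; bob recovers it; eve tries to
    distinguish stego images from covers (cover = 0, stego = 1)."""
    n = len(cover)
    stego = alice(cover, message)
    recovered = bob(stego)

    # Alice and Bob: recover the message and push Eve toward the "cover" label.
    ab_loss = F.binary_cross_entropy(recovered, message) \
        + F.binary_cross_entropy_with_logits(eve(stego), torch.zeros(n, 1))
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()

    # Eve: classify covers vs. (detached) stego images.
    eve_logits = torch.cat([eve(cover), eve(stego.detach())])
    eve_labels = torch.cat([torch.zeros(n, 1), torch.ones(n, 1)])
    eve_loss = F.binary_cross_entropy_with_logits(eve_logits, eve_labels)
    opt_eve.zero_grad(); eve_loss.backward(); opt_eve.step()
    return ab_loss.item(), eve_loss.item()
```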
