Search Results for author: Liam Fowl

Found 24 papers, 18 papers with code

Exploring Sequence-to-Sequence Transformer-Transducer Models for Keyword Spotting

no code implementations 11 Nov 2022 Beltrán Labrador, Guanlong Zhao, Ignacio López Moreno, Angelo Scorza Scarpati, Liam Fowl, Quan Wang

In this paper, we present a novel approach to adapt a sequence-to-sequence Transformer-Transducer ASR system to the keyword spotting (KWS) task.

Keyword Spotting

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning

1 code implementation 17 Oct 2022 Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein

Federated learning is particularly susceptible to model poisoning and backdoor attacks because individual users have direct control over the training data and model updates.
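As a rough illustration of that control (a generic model-replacement sketch, not this paper's anticipation-based method), a malicious client can train on trigger-stamped data and scale its update so it survives the server's plain averaging; all names, shapes, and hyperparameters below are illustrative.

```python
# Generic model-replacement sketch (not this paper's exact method): a malicious
# client trains on trigger-stamped data and scales its update so that it
# dominates the server's average of client updates.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def add_trigger(x, value=1.0):
    """Stamp a small square trigger patch into the corner of each image."""
    x = x.clone()
    x[:, :, :3, :3] = value
    return x

def malicious_update(global_model, data, labels, target_class, n_clients, lr=0.1, steps=20):
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    poisoned_x = add_trigger(data)
    poisoned_y = torch.full_like(labels, target_class)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(local(poisoned_x), poisoned_y).backward()
        opt.step()
    # Scale the update by the number of clients so it survives averaging.
    global_params = dict(global_model.named_parameters())
    return {name: n_clients * (p.detach() - global_params[name].detach())
            for name, p in local.named_parameters()}

# Toy usage
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
update = malicious_update(model, x, y, target_class=0, n_clients=10)
```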

Federated Learning Image Classification +2

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification

1 code implementation 1 Feb 2022 Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein

Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency.

Federated Learning

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning

no code implementations 3 Jan 2022 Harrison Foley, Liam Fowl, Tom Goldstein, Gavin Taylor

Data poisoning for reinforcement learning has historically focused on general performance degradation, and targeted attacks have been successful via perturbations that involve control of the victim's policy and rewards.

Atari Games Data Poisoning +2

Adversarial Examples Make Strong Poisons

2 code implementations NeurIPS 2021 Liam Fowl, Micah Goldblum, Ping-Yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data.

Data Poisoning

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

1 code implementation 16 Jun 2021 Hossein Souri, Liam Fowl, Rama Chellappa, Micah Goldblum, Tom Goldstein

In contrast, the Hidden Trigger Backdoor Attack achieves poisoning without placing a trigger into the training data at all.
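A minimal sketch of the hidden-trigger threat model referenced here: the released training data carries no visible trigger, the patch is only stamped on at test time, and attack success is the rate at which patched inputs flip to the attacker's target class. The helper names and patch placement are hypothetical.

```python
# Hidden-trigger threat model: no trigger appears in the training set; the patch
# is applied only at evaluation time to measure attack success.
import torch

def apply_patch(x, patch, loc=(0, 0)):
    """Paste a small trigger patch into each image at the given top-left location."""
    x = x.clone()
    r, c = loc
    ph, pw = patch.shape[-2:]
    x[:, :, r:r + ph, c:c + pw] = patch
    return x

@torch.no_grad()
def attack_success_rate(model, clean_x, patch, target_class):
    """Fraction of test inputs classified as the target class once the trigger is applied."""
    preds = model(apply_patch(clean_x, patch)).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```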

Backdoor Attack

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations

1 code implementation 2 Mar 2021 Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein

The InstaHide method has recently been proposed as an alternative to DP training that leverages supposed privacy properties of the mixup augmentation, although without rigorous guarantees.
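For reference, a minimal sketch of the kind of augmentation under discussion: k-way mixup of images and labels plus additive noise. The exact mixing distribution and noise mechanism used by DP-InstaHide may differ; the function below is illustrative only.

```python
# Illustrative k-way mixup with additive noise; not DP-InstaHide's exact mechanism.
import torch

def kway_mixup_with_noise(x, y_onehot, k=4, noise_scale=0.1):
    """Mix each image with k-1 random partners using Dirichlet weights,
    add noise, and mix the one-hot labels with the same weights."""
    n = x.size(0)
    weights = torch.distributions.Dirichlet(torch.ones(k)).sample((n,))  # (n, k)
    mixed_x = torch.zeros_like(x)
    mixed_y = torch.zeros_like(y_onehot)
    for j in range(k):
        idx = torch.randperm(n) if j > 0 else torch.arange(n)
        w = weights[:, j].view(n, *([1] * (x.dim() - 1)))
        mixed_x += w * x[idx]
        mixed_y += weights[:, j:j + 1] * y_onehot[idx]
    mixed_x += noise_scale * torch.randn_like(mixed_x)
    return mixed_x, mixed_y
```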

Data Poisoning

What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning

1 code implementation 26 Feb 2021 Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein

Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time.
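A generic sketch of the defensive loop the title refers to: perturb each batch against the current model before every update and train on the perturbed batch. The paper tailors the inner perturbation to poisoning objectives rather than the plain cross-entropy ascent shown here; step sizes are illustrative.

```python
# Generic adversarial-training step; the inner objective here is standard PGD-style
# cross-entropy ascent, not the paper's poisoning-specific perturbation.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=3):
    # Inner maximization: find a worst-case perturbation of the (possibly poisoned) batch.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer minimization: update the model on the perturbed batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x + delta.detach()), y).backward()
    optimizer.step()
```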

Data Poisoning

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching

2 code implementations ICLR 2021 Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein

We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.
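A sketch of the gradient-matching objective named in the title: perturb a small set of training images, keeping their clean labels, so that the gradient they induce aligns with the gradient that would misclassify a chosen target image. The surrounding crafting loop and hyperparameters are omitted.

```python
# Gradient-matching objective: align the poisons' training gradient with the
# attacker's desired gradient on the target image (clean labels throughout).
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, poison_x, poison_y, target_x, adversarial_label):
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradient the attacker wants training to follow: label the target as the adversarial class.
    target_loss = F.cross_entropy(model(target_x), adversarial_label)
    target_grad = torch.autograd.grad(target_loss, params)
    # Gradient the (perturbed) poisons actually induce under their clean labels.
    poison_loss = F.cross_entropy(model(poison_x), poison_y)
    poison_grad = torch.autograd.grad(poison_loss, params, create_graph=True)
    # Minimize one minus cosine similarity between the flattened gradients.
    a = torch.cat([g.flatten() for g in target_grad]).detach()
    b = torch.cat([g.flatten() for g in poison_grad])
    return 1 - F.cosine_similarity(a, b, dim=0)
```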

Data Poisoning

MetaPoison: Practical General-purpose Clean-label Data Poisoning

2 code implementations NeurIPS 2020 W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein

Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable for deep models.
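A hedged sketch of that bilevel formulation in an unrolled, MetaPoison-like spirit: run a few differentiable inner SGD steps on the poisoned data, evaluate an outer adversarial loss on the target, and backpropagate through the unroll to the poison perturbation. The tiny linear model and all sizes below are illustrative, not the paper's setup.

```python
# Unrolled bilevel sketch: differentiable inner training steps, then an outer
# adversarial loss whose gradient flows back to the poison perturbation.
import torch
import torch.nn.functional as F

def unrolled_outer_loss(w, b, poison_x, poison_y, target_x, adv_label,
                        inner_lr=0.1, inner_steps=2):
    for _ in range(inner_steps):
        inner_loss = F.cross_entropy(poison_x.flatten(1) @ w + b, poison_y)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w, b = w - inner_lr * gw, b - inner_lr * gb   # differentiable SGD step
    # Outer objective: after training on the poisons, the target gets the adversarial label.
    return F.cross_entropy(target_x.flatten(1) @ w + b, adv_label)

# Toy usage: delta.grad tells us how to perturb the poison images.
d, c = 3 * 32 * 32, 10
w = torch.zeros(d, c, requires_grad=True)
b = torch.zeros(c, requires_grad=True)
delta = torch.zeros(8, 3, 32, 32, requires_grad=True)
poison_x, poison_y = torch.rand(8, 3, 32, 32), torch.randint(0, c, (8,))
target_x, adv_label = torch.rand(1, 3, 32, 32), torch.tensor([0])
unrolled_outer_loss(w, b, poison_x + delta, poison_y, target_x, adv_label).backward()
```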

AutoML Bilevel Optimization +2

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach

1 code implementation NeurIPS 2020 Micah Goldblum, Liam Fowl, Tom Goldstein

Previous work on adversarially robust neural networks for image classification requires large training sets and computationally expensive training procedures.

Classification Few-Shot Image Classification +3

Deep k-NN Defense against Clean-label Data Poisoning Attacks

1 code implementation 29 Sep 2019 Neehar Peri, Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson

Targeted clean-label data poisoning is a type of adversarial attack on machine learning systems in which an adversary injects a few correctly-labeled, minimally-perturbed samples into the training data, causing a model to misclassify a particular test sample during inference.
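A minimal sketch of a feature-space k-NN filtering defense of the kind studied here: embed the training set with the network's penultimate layer and drop any point whose label disagrees with the majority label of its k nearest neighbors. The function name and choice of k are illustrative.

```python
# Feature-space k-NN filter: keep only training points whose label matches the
# majority label of their k nearest neighbors in embedding space.
import torch

@torch.no_grad()
def knn_filter(features, labels, k=5):
    """features: (n, d) penultimate-layer embeddings; labels: (n,). Returns a keep-mask."""
    dists = torch.cdist(features, features)                 # (n, n) pairwise distances
    dists.fill_diagonal_(float("inf"))                      # exclude each point itself
    neighbor_idx = dists.topk(k, largest=False).indices     # (n, k) nearest neighbors
    neighbor_labels = labels[neighbor_idx]                  # (n, k)
    majority = neighbor_labels.mode(dim=1).values           # most common neighbor label
    return majority == labels                               # keep points that agree
```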

Adversarial Attack Data Poisoning

Understanding Generalization through Visualizations

2 code implementations NeurIPS Workshop ICBINB 2020 W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein

The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive.

Adversarially Robust Distillation

2 code implementations 23 May 2019 Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein

In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student.
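A hedged sketch of a distillation loss in ARD's spirit: craft a perturbation against the student, then train the student to match the teacher's clean predictions on the perturbed input while also fitting the true labels. Temperature, mixing weight, and step sizes below are illustrative, not the paper's settings.

```python
# Robust-distillation-style loss: inner maximization against the student, outer
# minimization toward the teacher's clean predictions plus a label term.
import torch
import torch.nn.functional as F

def robust_distillation_loss(student, teacher, x, y, eps=8/255, alpha=2/255,
                             steps=5, temp=4.0, mix=0.9):
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temp, dim=1)
    # Inner maximization: perturb x so the student drifts from the teacher's clean output.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        kl = F.kl_div(F.log_softmax(student(x + delta) / temp, dim=1), teacher_probs,
                      reduction="batchmean")
        grad, = torch.autograd.grad(kl, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    # Outer minimization: distill the teacher's clean predictions onto the perturbed input.
    kl = F.kl_div(F.log_softmax(student(x + delta.detach()) / temp, dim=1), teacher_probs,
                  reduction="batchmean")
    return mix * (temp ** 2) * kl + (1 - mix) * F.cross_entropy(student(x), y)
```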

Adversarial Robustness Knowledge Distillation
