Search Results for author: Gavin Taylor

Found 17 papers, 10 papers with code

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning

no code implementations • 3 Jan 2022 • Harrison Foley, Liam Fowl, Tom Goldstein, Gavin Taylor

Data poisoning for reinforcement learning has historically focused on general performance degradation; targeted attacks have succeeded only via perturbations that require control of the victim's policy and rewards.

Atari Games • Data Poisoning +1

Probabilistic Deep Learning to Quantify Uncertainty in Air Quality Forecasting

1 code implementation • 5 Dec 2021 • Abdulmajid Murad, Frank Alexander Kraemer, Kerstin Bach, Gavin Taylor

Through extensive experiments, we describe training probabilistic models and evaluate their predictive uncertainties based on empirical performance, reliability of confidence estimate, and practical applicability.

Probabilistic Deep Learning
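Two of the evaluation criteria named in the abstract, empirical performance and reliability of the confidence estimate, are commonly measured with negative log-likelihood and empirical interval coverage. A minimal numpy sketch (the function names and the synthetic Gaussian forecasts are illustrative, not the paper's code):

```python
import numpy as np

def gaussian_nll(y, mu, sigma):
    """Average negative log-likelihood of observations under N(mu, sigma^2)."""
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - mu)**2 / (2 * sigma**2))

def coverage(y, mu, sigma, z=1.645):
    """Fraction of observations inside the central ~90% predictive interval."""
    return np.mean(np.abs(y - mu) <= z * sigma)

# Synthetic, perfectly calibrated forecasts: coverage should land near 0.90.
rng = np.random.default_rng(0)
mu, sigma = np.zeros(20000), np.ones(20000)
y = rng.normal(mu, sigma)
```

A well-calibrated probabilistic forecaster scores low NLL and coverage close to the nominal level; miscalibrated uncertainty shows up as coverage drifting away from 0.90.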

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

no code implementations • ICLR 2021 • Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John Dickerson, Gavin Taylor, Tom Goldstein

Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike.

Face Detection • Face Recognition

Robust Optimization as Data Augmentation for Large-scale Graphs

3 code implementations • CVPR 2022 • Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, Tom Goldstein

Data augmentation helps neural networks generalize better by enlarging the training set, but it remains an open question how to effectively augment graph data to enhance the performance of GNNs (Graph Neural Networks).

Data Augmentation • Graph Classification +3
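The robust-optimization view of augmentation amounts to perturbing input features with a few bounded gradient-ascent steps during training. A toy numpy sketch of that inner loop, using a linear model with squared loss rather than a GNN (function name, step sizes, and bound are illustrative):

```python
import numpy as np

def adversarial_feature_perturb(X, w, y, step=0.01, ascent_steps=3, bound=0.05):
    """Sketch of gradient-ascent feature augmentation: climb the loss for a
    few sign-gradient steps while staying inside an L-infinity ball."""
    delta = np.zeros_like(X)
    for _ in range(ascent_steps):
        resid = (X + delta) @ w - y              # d(loss)/d(prediction)
        grad = resid[:, None] * w[None, :]       # d(loss)/d(delta)
        delta = np.clip(delta + step * np.sign(grad), -bound, bound)
    return X + delta

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))
w = rng.normal(size=8)
y = X @ w + 0.1 * rng.normal(size=32)
loss = lambda Z: np.mean(((Z @ w) - y) ** 2)
X_adv = adversarial_feature_perturb(X, w, y)     # loss(X_adv) > loss(X)
```

Training on such worst-case-perturbed features, instead of (or alongside) the clean ones, is what turns robust optimization into a data-augmentation scheme.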

Information-Driven Adaptive Sensing Based on Deep Reinforcement Learning

1 code implementation • 8 Oct 2020 • Abdulmajid Murad, Frank Alexander Kraemer, Kerstin Bach, Gavin Taylor

In order to make better use of deep reinforcement learning in the creation of sensing policies for resource-constrained IoT devices, we present and study a novel reward function based on the Fisher information value.
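A reward built on Fisher information trades the information gained by a measurement against its energy cost. A hypothetical numpy sketch, not the paper's exact formulation (the log-det information gain, the measurement model `H`, and the weight `lam` are all assumptions for illustration):

```python
import numpy as np

def fisher_reward(F_prior, H, noise_var, energy_cost, lam=0.1):
    """Illustrative information-driven reward: log-det gain of the Fisher
    information matrix from one linear-Gaussian measurement, minus a
    weighted energy cost."""
    F_post = F_prior + H.T @ H / noise_var       # Fisher info update
    _, logdet_post = np.linalg.slogdet(F_post)
    _, logdet_prior = np.linalg.slogdet(F_prior)
    return (logdet_post - logdet_prior) - lam * energy_cost
```

Under such a reward, an RL agent on a constrained IoT node learns to sense only when the expected information gain justifies the energy spent.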


Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching

1 code implementation • ICLR 2021 • Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein

We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.

Data Poisoning
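The gradient-matching idea in the title can be summarized as aligning the training gradient induced by the poisoned data with the adversarial gradient on the target. A minimal sketch of such an alignment loss on flattened gradient vectors (a simplification of the paper's objective):

```python
import numpy as np

def gradient_matching_loss(poison_grad, target_grad):
    """One minus cosine similarity between the gradient produced by poisoned
    training points and the attacker's target gradient; zero when the two
    gradients are perfectly aligned."""
    cos = (poison_grad @ target_grad) / (
        np.linalg.norm(poison_grad) * np.linalg.norm(target_grad))
    return 1.0 - cos
```

Minimizing this loss over small image perturbations steers ordinary training updates toward the attacker's goal without the attacker ever touching labels.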

MetaPoison: Practical General-purpose Clean-label Data Poisoning

2 code implementations • NeurIPS 2020 • W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein

Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable for deep models.

AutoML • Bilevel Optimization +2

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

1 code implementation • 15 May 2019 • Chen Zhu, W. Ronny Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein

Clean-label poisoning attacks inject innocuous looking (and "correctly" labeled) poison images into training data, causing a model to misclassify a targeted image after being trained on this data.

Transfer Learning
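The simplest instance of a clean-label poison is the feature-collision objective of Shafahi et al., the precursor this paper generalizes to a transferable convex-polytope attack. A sketch of that earlier objective (variable names and the weight `beta` are illustrative):

```python
import numpy as np

def feature_collision_objective(poison_feat, target_feat,
                                poison_img, base_img, beta=0.25):
    """Feature-collision loss: the poison should match the target in feature
    space while staying visually close to a benign base image."""
    return (np.sum((poison_feat - target_feat) ** 2)
            + beta * np.sum((poison_img - base_img) ** 2))
```

Minimizing the first term pulls the poison toward the target in the network's feature space; the second term keeps it looking like its "correctly" labeled base image.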

Autonomous Management of Energy-Harvesting IoT Nodes Using Deep Reinforcement Learning

1 code implementation • 10 May 2019 • Abdulmajid Murad, Frank Alexander Kraemer, Kerstin Bach, Gavin Taylor

Reinforcement learning (RL) is capable of managing wireless, energy-harvesting IoT nodes by solving the problem of autonomous management in non-stationary, resource-constrained settings.


Visualizing the Loss Landscape of Neural Nets

7 code implementations • ICLR 2018 • Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein

Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions.
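A key ingredient of this paper's visualizations is filter normalization: a random direction is rescaled, filter by filter, to match the trained weights before plotting loss along it. A small numpy sketch with a toy convex loss standing in for a network (the 2-D weight array and loss are illustrative):

```python
import numpy as np

def filter_normalize(direction, weights):
    """Rescale each row ("filter") of a random direction to the norm of the
    corresponding row of the trained weights."""
    d = direction.copy()
    for i in range(d.shape[0]):
        d[i] *= np.linalg.norm(weights[i]) / (np.linalg.norm(d[i]) + 1e-10)
    return d

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                        # "trained" weights
d = filter_normalize(rng.normal(size=W.shape), W)  # normalized direction
alphas = np.linspace(-1.0, 1.0, 21)
curve = [float(np.sum((W + a * d) ** 2)) for a in alphas]  # 1-D loss slice
```

Without this per-filter rescaling, slices along raw random directions are not comparable across architectures, because weight scales differ layer to layer.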

Adaptive Consensus ADMM for Distributed Optimization

no code implementations • ICML 2017 • Zheng Xu, Gavin Taylor, Hao Li, Mario Figueiredo, Xiaoming Yuan, Tom Goldstein

The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters.

Distributed Optimization
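The user-defined penalty parameter the abstract refers to is usually tuned online. The classic baseline is residual balancing, which the paper's more refined adaptive scheme improves on; the standard heuristic looks like this (default `mu` and `tau` follow common practice):

```python
def residual_balancing(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    """Standard ADMM penalty update: grow rho when the primal residual
    dominates, shrink it when the dual residual dominates."""
    if primal_res > mu * dual_res:
        return rho * tau    # primal residual too large: tighten consensus
    if dual_res > mu * primal_res:
        return rho / tau    # dual residual too large: relax the penalty
    return rho
```

Keeping the two residuals within a factor of `mu` of each other is what makes ADMM's convergence far less sensitive to the initial choice of `rho`.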

Training Neural Networks Without Gradients: A Scalable ADMM Approach

2 code implementations • 6 May 2016 • Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, Tom Goldstein

With the growing importance of large network models and enormous training datasets, GPUs have become increasingly necessary to train neural networks.

Variance Reduction for Distributed Stochastic Gradient Descent

no code implementations • 5 Dec 2015 • Soham De, Gavin Taylor, Tom Goldstein

Variance reduction (VR) methods boost the performance of stochastic gradient descent (SGD) by enabling the use of larger, constant stepsizes and preserving linear convergence rates.

Stochastic Optimization
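The canonical VR construction is the SVRG estimator: correct each stochastic gradient with the same example's gradient at a stored snapshot, plus the full gradient at that snapshot. A minimal sketch (this is the generic SVRG estimator, not necessarily the exact distributed variant studied in the paper):

```python
import numpy as np

def svrg_gradient(grad_i, w, w_snap, full_grad_snap):
    """SVRG estimate for one sampled example i:
        g_i(w) - g_i(w_snap) + full_grad(w_snap).
    Unbiased for the full gradient, with variance shrinking as w
    approaches the snapshot w_snap."""
    return grad_i(w) - grad_i(w_snap) + full_grad_snap
```

Because the correction terms cancel at `w == w_snap`, the estimator's variance vanishes near the snapshot, which is what permits the larger, constant stepsizes the abstract mentions.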

Layer-Specific Adaptive Learning Rates for Deep Networks

no code implementations • 15 Oct 2015 • Bharat Singh, Soham De, Yangmuzi Zhang, Thomas Goldstein, Gavin Taylor

In this paper, we attempt to overcome the two problems above by proposing an optimization method for training deep neural networks that uses learning rates specific to each layer of the network and adaptive to the curvature of the loss function, increasing the learning rate at low-curvature points.

Image Classification
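One simple way to make a layer's learning rate curvature-adaptive is a Barzilai-Borwein-style secant estimate from that layer's successive weights and gradients. This is an illustrative rule, not the paper's exact method:

```python
import numpy as np

def per_layer_lr(base_lr, w_prev, w_curr, g_prev, g_curr, eps=1e-12):
    """Illustrative per-layer rate: estimate local curvature from the secant
    condition and step larger where the gradient changes slowly."""
    s = (w_curr - w_prev).ravel()          # weight change for this layer
    y = (g_curr - g_prev).ravel()          # gradient change for this layer
    curvature = abs(float(s @ y)) / (float(s @ s) + eps)
    if curvature < eps:
        return base_lr                     # flat region: keep the base rate
    return base_lr / curvature
```

Applied independently per layer, such a rule gives each layer a step size matched to its own local geometry instead of one global rate.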

Unwrapping ADMM: Efficient Distributed Computing via Transpose Reduction

no code implementations • 8 Apr 2015 • Tom Goldstein, Gavin Taylor, Kawika Barabin, Kent Sayre

Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data.

Distributed Computing

An Analysis of State-Relevance Weights and Sampling Distributions on L1-Regularized Approximate Linear Programming Approximation Accuracy

no code implementations • 16 Apr 2014 • Gavin Taylor, Connor Geer, David Piekut

Recent interest in the use of $L_1$ regularization in value function approximation includes Petrik et al.'s introduction of $L_1$-Regularized Approximate Linear Programming (RALP).
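For context, the RALP program roughly takes the following form (notation loosely follows Petrik et al.; constraint sampling and other details are omitted, so treat this as a sketch rather than the exact formulation):

```latex
\begin{aligned}
\min_{w} \quad & \rho^{\top} \Phi w \\
\text{s.t.} \quad & (\Phi w)(s) \;\ge\; (T_a \Phi w)(s) \quad \text{for sampled } (s, a), \\
& \| w \|_{1,e} \;\le\; \psi ,
\end{aligned}
```

where $\Phi$ is the feature matrix, $\rho$ the state-relevance weights, $T_a$ the Bellman operator for action $a$, and $\psi$ the $L_1$ budget; the state-relevance weights and sampling distributions in these two roles are precisely what this paper analyzes.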
