no code implementations • 20 Oct 2024 • Abdul-Kazeem Shamba, Kerstin Bach, Gavin Taylor
We demonstrate that DynaCL embeds instances from time series into semantically meaningful clusters, which allows superior performance on downstream tasks on a variety of public time series datasets.
no code implementations • 3 Jan 2022 • Harrison Foley, Liam Fowl, Tom Goldstein, Gavin Taylor
Data poisoning for reinforcement learning has historically focused on general performance degradation, and targeted attacks have been successful via perturbations that involve control of the victim's policy and rewards.
1 code implementation • 5 Dec 2021 • Abdulmajid Murad, Frank Alexander Kraemer, Kerstin Bach, Gavin Taylor
Through extensive experiments, we describe training probabilistic models and evaluate their predictive uncertainties based on empirical performance, reliability of confidence estimates, and practical applicability.
no code implementations • ICLR 2021 • Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John Dickerson, Gavin Taylor, Tom Goldstein
Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike.
3 code implementations • CVPR 2022 • Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, Tom Goldstein
Data augmentation helps neural networks generalize better by enlarging the training set, but it remains an open question how to effectively augment graph data to enhance the performance of graph neural networks (GNNs).
Ranked #1 on Graph Property Prediction on ogbg-ppa
1 code implementation • 8 Oct 2020 • Abdulmajid Murad, Frank Alexander Kraemer, Kerstin Bach, Gavin Taylor
To make better use of deep reinforcement learning for creating sensing policies for resource-constrained IoT devices, we present and study a novel reward function based on the Fisher information value.
2 code implementations • ICLR 2021 • Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.
2 code implementations • NeurIPS 2020 • W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein
Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable for deep models.
1 code implementation • 15 May 2019 • Chen Zhu, W. Ronny Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
Clean-label poisoning attacks inject innocuous looking (and "correctly" labeled) poison images into training data, causing a model to misclassify a targeted image after being trained on this data.
1 code implementation • 10 May 2019 • Abdulmajid Murad, Frank Alexander Kraemer, Kerstin Bach, Gavin Taylor
Reinforcement learning (RL) is capable of managing wireless, energy-harvesting IoT nodes by solving the problem of autonomous management in non-stationary, resource-constrained settings.
6 code implementations • NeurIPS 2019 • Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, Tom Goldstein
Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks.
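The following is a minimal, illustrative sketch of adversarial training in its plainest form (FGSM-perturbed examples fed back into training), not the accelerated "free" variant this paper proposes; the logistic-regression stand-in, hyperparameters, and all names are assumptions for the sake of a runnable toy:

```python
import numpy as np

# Toy adversarial training: at each step the model is trained on
# FGSM-perturbed inputs x + eps * sign(grad_x loss) rather than clean ones.
# Logistic regression stands in for a neural network; illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)
eps, lr = 0.1, 0.5
for _ in range(300):
    # gradient of the loss w.r.t. the inputs gives the attack direction
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)           # per-example d(loss)/dx
    X_adv = X + eps * np.sign(grad_x)     # FGSM perturbation
    # standard gradient step, but computed on the adversarial batch
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

acc = ((sigmoid(X @ w) > 0.5) == y).mean()  # clean accuracy after training
```

The expensive part this paper targets is visible even here: every training step pays for an extra gradient computation to craft the perturbation.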
11 code implementations • ICLR 2018 • Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein
Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions.
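A hedged sketch of the slicing idea behind this line of work: evaluate loss(w + alpha * d) along a norm-matched random direction d from a trained point. The paper's actual contribution is filter-wise normalization for convolutional networks; the tiny regularized logistic regression below only illustrates the 1-D slice, and all details are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
lam = 0.1                                   # L2 term keeps the minimum finite

def loss(w):
    p = np.clip(1 / (1 + np.exp(-(X @ w))), 1e-9, 1 - 1e-9)
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return ce + 0.5 * lam * w @ w

# train to (approximately) the regularized optimum
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (p - y) / len(y) + lam * w)

# slice the loss surface along a random direction scaled to the weights
d = rng.normal(size=3)
d *= np.linalg.norm(w) / np.linalg.norm(d)
alphas = np.linspace(-1.0, 1.0, 21)
profile = [loss(w + a * d) for a in alphas]
# near a minimizer, the center of the slice sits at the bottom of the curve
```

Plotting `profile` against `alphas` gives the kind of 1-D loss curve such visualizations are built from; without the direction rescaling, slices from differently-scaled networks are not comparable, which is the problem filter normalization addresses.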
no code implementations • ICML 2017 • Zheng Xu, Gavin Taylor, Hao Li, Mario Figueiredo, Xiaoming Yuan, Tom Goldstein
The alternating direction method of multipliers (ADMM) is commonly used for distributed model fitting problems, but its performance and reliability depend strongly on user-defined penalty parameters.
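To make the role of the penalty parameter concrete, here is a hedged sketch of ADMM for the lasso with the classic residual-balancing rho update (Boyd et al.'s heuristic) standing in for this paper's adaptive scheme; the problem instance and all constants are illustrative assumptions:

```python
import numpy as np

# ADMM for: min 0.5*||A x - b||^2 + lam*||z||_1  subject to  x = z,
# with the penalty rho adapted by balancing primal/dual residuals.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
lam, rho = 0.1, 1.0

x, z, u = np.zeros(10), np.zeros(10), np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    x = np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * (z - u))
    z_old = z
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
    u = u + x - z
    r = np.linalg.norm(x - z)             # primal residual
    s = rho * np.linalg.norm(z - z_old)   # dual residual
    if r > 10 * s:
        rho *= 2.0
        u /= 2.0                          # scaled dual must be rescaled
    elif s > 10 * r:
        rho /= 2.0
        u *= 2.0
```

A fixed, badly chosen rho can slow this loop dramatically; the residual-balancing rule removes the worst of that sensitivity, which is the user-tuning burden the sentence above refers to.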
2 code implementations • 6 May 2016 • Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, Tom Goldstein
With the growing importance of large network models and enormous training datasets, GPUs have become increasingly necessary to train neural networks.
no code implementations • 5 Dec 2015 • Soham De, Gavin Taylor, Tom Goldstein
Variance reduction (VR) methods boost the performance of stochastic gradient descent (SGD) by enabling the use of larger, constant stepsizes and preserving linear convergence rates.
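As one concrete instance of a VR method, here is a hedged SVRG-style sketch (the paper studies VR methods more generally): each inner step uses the estimate g_i(w) - g_i(w_snap) + full_grad(w_snap), whose variance shrinks as w approaches the snapshot, which is what permits the large constant stepsize. The least-squares problem and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 5))
w_star = rng.normal(size=5)
b = A @ w_star

def grad_i(w, i):
    # stochastic gradient of the i-th term 0.5*(a_i . w - b_i)^2
    return A[i] * (A[i] @ w - b[i])

w = np.zeros(5)
step = 0.02                                  # constant stepsize
for epoch in range(20):
    w_snap = w.copy()
    full = A.T @ (A @ w_snap - b) / len(b)   # full gradient at the snapshot
    for _ in range(100):
        i = rng.integers(len(b))
        # variance-reduced gradient estimate
        g = grad_i(w, i) - grad_i(w_snap, i) + full
        w -= step * g
```

Plain SGD with the same constant stepsize would stall at a noise floor; the correction term drives the estimator's variance to zero at the optimum, recovering linear convergence.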
no code implementations • 15 Oct 2015 • Bharat Singh, Soham De, Yangmuzi Zhang, Thomas Goldstein, Gavin Taylor
In this paper, we attempt to overcome the two problems above by proposing an optimization method for training deep neural networks that uses learning rates both specific to each layer of the network and adaptive to the curvature of the loss function, increasing the learning rate at points of low curvature.
no code implementations • 8 Apr 2015 • Tom Goldstein, Gavin Taylor, Kawika Barabin, Kent Sayre
Recent approaches to distributed model fitting rely heavily on consensus ADMM, where each node solves small sub-problems using only local data.
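For reference, a hedged sketch of the consensus ADMM baseline the sentence describes: each "node" holds only a local slice (A_k, b_k) of a least-squares problem, solves a small local sub-problem, and agreement is enforced through the averaged consensus variable z. The toy setup and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
x_true = rng.normal(size=4)
nodes = []
for _ in range(5):                 # 5 nodes, each seeing only local data
    A_k = rng.normal(size=(20, 4))
    nodes.append((A_k, A_k @ x_true))

rho = 1.0
z = np.zeros(4)
u = [np.zeros(4) for _ in nodes]   # one scaled dual variable per node
for _ in range(100):
    # local sub-problems: ridge-regularized least squares on local data
    xs = [np.linalg.solve(A.T @ A + rho * np.eye(4),
                          A.T @ b + rho * (z - ui))
          for (A, b), ui in zip(nodes, u)]
    # consensus step: average the nodes' proposals
    z = np.mean([x + ui for x, ui in zip(xs, u)], axis=0)
    # dual updates push each node toward the consensus
    u = [ui + x - z for ui, x in zip(u, xs)]
```

Only `xs` (local solves) touch private data; the averaging and dual updates are the cheap communication rounds, which is why consensus ADMM is a natural fit for distributed model fitting.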
no code implementations • 16 Apr 2014 • Gavin Taylor, Connor Geer, David Piekut
Recent interest in the use of $L_1$ regularization in value function approximation includes Petrik et al.'s introduction of $L_1$-Regularized Approximate Linear Programming (RALP).