no code implementations • 8 Aug 2022 • Neil Fendley, Cash Costello, Eric Nguyen, Gino Perrotta, Corey Lowman
Training reinforcement learning agents that continually learn across multiple environments is a challenging problem.
no code implementations • 11 Dec 2020 • Nathan Drenkow, Neil Fendley, Philippe Burlina
We present a technique that utilizes properties of random projections to characterize the behavior of clean and adversarial examples across a diverse set of subspaces.
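The core idea — comparing how an input and its perturbed copy land in several random low-dimensional subspaces — can be sketched as below. This is an illustrative sketch only: the Gaussian projection matrices, the subspace count/dimension, and the Euclidean gap statistic are assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_projections(d, n_subspaces=8, dim=32):
    """Draw Gaussian random projection matrices, one per subspace.

    Assumption: plain Gaussian matrices scaled so norms are roughly
    preserved (Johnson-Lindenstrauss style); the paper may differ.
    """
    return [rng.normal(size=(dim, d)) / np.sqrt(dim) for _ in range(n_subspaces)]

def subspace_gaps(x_clean, x_perturbed, mats):
    """Per-subspace distance between a clean input and its perturbed copy."""
    return [float(np.linalg.norm(P @ x_clean - P @ x_perturbed)) for P in mats]

# Toy example: a clean feature vector and a noisy stand-in for an adversarial one
d = 512
clean = rng.normal(size=d)
perturbed = clean + 0.3 * rng.normal(size=d)
mats = make_projections(d)
gaps = subspace_gaps(clean, perturbed, mats)
```

The same matrices must be applied to both inputs, so the per-subspace gaps reflect the perturbation rather than the randomness of the projections.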
no code implementations • 11 Dec 2020 • Nathan Drenkow, Philippe Burlina, Neil Fendley, Onyekachi Odoemene, Jared Markowitz
We interpret both detection problems through a probabilistic, Bayesian lens, whereby the objectness maps produced by our method serve as priors in a maximum-a-posteriori approach to the detection step.
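A minimal sketch of the fusion step described here, under simplifying assumptions (per-pixel independence, maps already normalized to [0, 1]); the map names and the simple elementwise product are illustrative, not the paper's exact model:

```python
import numpy as np

def map_detection(likelihood, objectness_prior, eps=1e-8):
    """Fuse a detector likelihood map with an objectness prior map
    and return the maximum-a-posteriori location.

    Sketch: posterior ∝ likelihood × prior, then argmax over the grid.
    """
    posterior = likelihood * objectness_prior
    posterior = posterior / (posterior.sum() + eps)  # normalize to a distribution
    return np.unravel_index(np.argmax(posterior), posterior.shape)

# Two detector peaks; the objectness prior favors one of them
likelihood = np.zeros((4, 4))
likelihood[1, 2] = 0.9
likelihood[3, 3] = 0.8
prior = np.full((4, 4), 0.1)
prior[1, 2] = 0.9
location = map_detection(likelihood, prior)  # prior breaks the near-tie
```

Here the prior downweights the spurious peak at (3, 3), so the MAP estimate lands on the supported detection.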
no code implementations • 1 May 2020 • Neil Fendley, Max Lennon, I-Jeng Wang, Philippe Burlina, Nathan Drenkow
We focus on the development of effective adversarial patch attacks and -- for the first time -- jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
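Semi-transparency in a patch attack amounts to alpha-blending the patch into the scene rather than pasting it opaquely. The sketch below shows only that compositing step, with a single global alpha as an assumption; the actual attack optimizes patch content (and its visibility trade-off) against the target model.

```python
import numpy as np

def apply_patch(image, patch, top, left, alpha=0.5):
    """Alpha-blend a semi-transparent patch onto an image region.

    alpha=1.0 recovers an ordinary opaque patch; lower alpha makes the
    patch less obtrusive at the cost of attack strength.
    """
    h, w = patch.shape[:2]
    out = image.copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * patch + (1 - alpha) * region
    return out

# Toy example: blend a white 3x3 patch onto a black 8x8 image at 40% opacity
img = np.zeros((8, 8, 3))
patch = np.ones((3, 3, 3))
blended = apply_patch(img, patch, top=2, left=2, alpha=0.4)
```

In a full attack, the patch pixels (and possibly alpha itself) would be the optimization variables, updated by gradients through this compositing.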
1 code implementation • 13 Mar 2020 • Kiran Karra, Chace Ashcraft, Neil Fendley
In this paper, we introduce the TrojAI software framework, an open source set of Python tools capable of generating triggered (poisoned) datasets and associated deep learning (DL) models with trojans at scale.
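The kind of triggered-dataset generation the framework automates can be illustrated with a generic poisoning sketch: stamp a small trigger on a fraction of the training images and relabel them to an attacker-chosen class. This is a hedged illustration of the general technique, not the TrojAI API; all names and the square-corner trigger are assumptions.

```python
import numpy as np

def poison(images, labels, target_label, rate=0.1, seed=0):
    """Stamp a bright square trigger on a random fraction of images and
    relabel them to the target class (generic data-poisoning sketch).

    Returns poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0  # 4x4 trigger in the bottom-right corner
        labels[i] = target_label
    return images, labels, idx

# Toy usage: poison 10% of 20 blank grayscale images toward class 7
imgs = np.zeros((20, 8, 8))
labs = np.zeros(20, dtype=int)
p_imgs, p_labs, idx = poison(imgs, labs, target_label=7, rate=0.1)
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger is present, which is the trojan behavior the framework's tooling is built to generate and study at scale.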
no code implementations • 28 May 2018 • Wojciech Czaja, Neil Fendley, Michael Pekala, Christopher Ratto, I-Jeng Wang
This paper considers attacks against machine learning algorithms used in remote sensing applications, a domain that presents a suite of challenges that are not fully addressed by current research focused on natural image data such as ImageNet.
7 code implementations • CVPR 2018 • Gordon Christie, Neil Fendley, James Wilson, Ryan Mukherjee
We present an analysis of the dataset along with baseline approaches that reason about metadata and temporal views.