Search Results for author: Nathan Drenkow

Found 8 papers, 1 paper with code

Context-Adaptive Deep Neural Networks via Bridge-Mode Connectivity

no code implementations • 28 Nov 2022 • Nathan Drenkow, Alvin Tan, Chace Ashcraft, Kiran Karra

The deployment of machine learning models in safety-critical applications comes with the expectation that such models will perform well over a range of contexts (e.g., a vision model for classifying street signs should work in rural, city, and highway settings under varying lighting/weather conditions).

Image Classification

A Systematic Review of Robustness in Deep Learning for Computer Vision: Mind the gap?

no code implementations • 1 Dec 2021 • Nathan Drenkow, Numair Sani, Ilya Shpitser, Mathias Unberath

We find this area of research has received disproportionately less attention relative to adversarial machine learning, yet a significant robustness gap exists that manifests in performance degradation similar in magnitude to adversarial conditions.

Adversarial Robustness • Data Augmentation

On the Sins of Image Synthesis Loss for Self-supervised Depth Estimation

no code implementations • 13 Sep 2021 • Zhaoshuo Li, Nathan Drenkow, Hao Ding, Andy S. Ding, Alexander Lu, Francis X. Creighton, Russell H. Taylor, Mathias Unberath

It is based on the idea that observed frames can be synthesized from neighboring frames if accurate depth of the scene is known - or in this case, estimated.
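The idea sketched above is often realized as a photometric reconstruction loss: compare an observed frame against a frame synthesized from a neighboring view using the estimated depth. The sketch below shows only the loss computation (the depth-and-pose warping that produces the synthesized frame is elided); it is an illustrative assumption, not the paper's exact formulation, and the paper in fact critiques losses of this kind.

```python
import numpy as np

def photometric_loss(observed, synthesized):
    """Mean L1 photometric error between an observed frame and a frame
    synthesized from a neighboring view via estimated depth.
    (Illustrative sketch only; the warping step is not shown.)"""
    return float(np.mean(np.abs(observed.astype(np.float32)
                                - synthesized.astype(np.float32))))

# Toy check: identical frames incur zero photometric loss.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
print(photometric_loss(frame, frame))  # 0.0
```

A zero loss for identical frames is the degenerate best case; the paper's point is that low image-synthesis loss does not guarantee accurate depth.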

Depth Estimation • Image Generation +2

Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?

no code implementations • 16 Aug 2021 • Max Lennon, Nathan Drenkow, Philippe Burlina

To this end, several contributions are made here: A) we develop a new metric, mean Attack Success over Transformations (mAST), to evaluate patch attack robustness and invariance; B) we systematically assess the robustness of patch attacks to 3D position and orientation under various conditions, conducting a sensitivity analysis that provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera (rotation, translation) and sets forth some properties for patch attack 3D invariance; and C) we draw novel qualitative conclusions, including that for some 3D transformations, namely rotation and loom, increasing the training distribution support yields an increase in patch success over the full range at test time.
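Reading mAST by its name, it averages per-transformation attack success rates. The sketch below is a hypothetical reading of that name with made-up numbers; the paper's exact definition (e.g., how transformations are sampled and weighted) may differ.

```python
def mean_attack_success_over_transformations(success_rates):
    """Hypothetical sketch of mAST: average the patch-attack success
    rate over a set of 3D pose transformations (e.g., rotations at
    several angles). Illustrative only, not the paper's definition."""
    return sum(success_rates) / len(success_rates)

# Success rates measured at several yaw angles (illustrative numbers):
rates = [0.9, 0.7, 0.4, 0.2]
print(mean_attack_success_over_transformations(rates))  # ≈ 0.55
```

A single averaged score makes it easy to compare how invariant different patches are across the same pose sweep.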

Addressing Visual Search in Open and Closed Set Settings

no code implementations • 11 Dec 2020 • Nathan Drenkow, Philippe Burlina, Neil Fendley, Onyekachi Odoemene, Jared Markowitz

We interpret both detection problems through a probabilistic, Bayesian lens, whereby the objectness maps produced by our method serve as priors in a maximum-a-posteriori approach to the detection step.
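The MAP step described above can be illustrated as multiplying an objectness prior map by a detector likelihood map and taking the argmax. This is a minimal sketch of the general Bayesian pattern, assuming unnormalized per-location scores; the paper's actual formulation may differ.

```python
import numpy as np

def map_detection(objectness_prior, likelihood):
    """Maximum-a-posteriori detection sketch: combine an objectness map
    (prior) with a detector's likelihood map elementwise and return the
    location of the highest unnormalized posterior.
    (Hypothetical illustration of the Bayesian framing.)"""
    posterior = objectness_prior * likelihood
    return np.unravel_index(np.argmax(posterior), posterior.shape)

# Toy 2x2 maps (made-up values): the prior shifts the decision.
prior = np.array([[0.1, 0.8], [0.3, 0.2]])
like = np.array([[0.9, 0.5], [0.4, 0.6]])
print(map_detection(prior, like))  # (0, 1)
```

Note how the likelihood alone would favor location (0, 0), but the objectness prior moves the MAP estimate to (0, 1).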

Object Detection

Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis

no code implementations • 11 Dec 2020 • Nathan Drenkow, Neil Fendley, Philippe Burlina

We present a technique that utilizes properties of random projections to characterize the behavior of clean and adversarial examples across a diverse set of subspaces.
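The core mechanism named above, projecting inputs into many random low-dimensional subspaces, can be sketched with Gaussian random projection matrices. This shows only the projection step under assumed dimensions; the detection statistics the paper computes over these projections are not reproduced here.

```python
import numpy as np

def random_subspace_projections(x, dim, k, rng):
    """Project a flattened input into k random dim-dimensional subspaces
    using Gaussian random matrices (scaled for rough norm preservation).
    Sketch of the general idea only, not the paper's detector."""
    return [(rng.standard_normal((dim, x.size)) / np.sqrt(dim)) @ x
            for _ in range(k)]

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # stand-in for a flattened input
projs = random_subspace_projections(x, dim=8, k=3, rng=rng)
print(len(projs), projs[0].shape)    # 3 (8,)
```

The intuition is that statistics of clean inputs remain stable across many such subspaces, while adversarial perturbations behave differently, which is what makes the approach attack-agnostic.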

Adversarial Attack Detection

Jacks of All Trades, Masters Of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks

no code implementations • 1 May 2020 • Neil Fendley, Max Lennon, I-Jeng Wang, Philippe Burlina, Nathan Drenkow

We focus on the development of effective adversarial patch attacks and -- for the first time -- jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
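A semi-transparent patch, as described above, can be illustrated by alpha-blending a patch onto an image. This is a generic compositing sketch (the function name, shapes, and alpha value are assumptions, not the paper's method); the blending exposes the trade-off the paper studies: lower alpha is less obtrusive but typically weakens the attack.

```python
import numpy as np

def apply_transparent_patch(image, patch, alpha, top_left):
    """Alpha-blend a semi-transparent patch onto an image region.
    Illustrative compositing only; optimizing the patch contents for
    attack success is the (elided) adversarial part."""
    out = image.astype(np.float32).copy()
    r, c = top_left
    h, w = patch.shape[:2]
    region = out[r:r + h, c:c + w]
    out[r:r + h, c:c + w] = (1 - alpha) * region + alpha * patch
    return out.astype(np.uint8)

# Toy example: a white patch at 50% opacity on a black image.
img = np.zeros((8, 8, 3), dtype=np.uint8)
patch = np.full((4, 4, 3), 255, dtype=np.uint8)
blended = apply_transparent_patch(img, patch, alpha=0.5, top_left=(2, 2))
print(blended[3, 3])  # [127 127 127]
```

Sweeping alpha from 0 to 1 traces out the obtrusiveness/attack-success curve that the title's "jacks of all trades" framing refers to.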
