no code implementations • 28 Nov 2022 • Nathan Drenkow, Alvin Tan, Chace Ashcraft, Kiran Karra
The deployment of machine learning models in safety-critical applications comes with the expectation that such models will perform well over a range of contexts (e.g., a vision model for classifying street signs should work in rural, city, and highway settings under varying lighting/weather conditions).
no code implementations • 1 Dec 2021 • Nathan Drenkow, Numair Sani, Ilya Shpitser, Mathias Unberath
We find this area of research has received disproportionately less attention than adversarial machine learning, yet a significant robustness gap exists, manifesting as performance degradation comparable in magnitude to that under adversarial conditions.
no code implementations • 13 Sep 2021 • Zhaoshuo Li, Nathan Drenkow, Hao Ding, Andy S. Ding, Alexander Lu, Francis X. Creighton, Russell H. Taylor, Mathias Unberath
It is based on the idea that observed frames can be synthesized from neighboring frames if accurate depth of the scene is known or, in this case, estimated.
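The idea above can be sketched in a toy one-dimensional form (this is an illustrative reconstruction, not the paper's implementation; the function names, nearest-neighbor sampling, and pure-horizontal-motion assumption are my own simplifications): a scanline of the observed frame is synthesized by sampling the neighboring frame at positions shifted by the depth-induced disparity, and the photometric error between observed and synthesized frames supplies the supervision signal.

```python
import numpy as np

def synthesize_from_neighbor(neighbor_row, depth_row, focal_px, baseline_m):
    """Synthesize one scanline of the observed frame by sampling the
    neighboring frame at positions shifted by the depth-induced disparity
    (disparity = focal * baseline / depth), assuming pure horizontal motion."""
    n = len(neighbor_row)
    disparity = focal_px * baseline_m / depth_row
    # Nearest-neighbor sampling for simplicity (real methods interpolate).
    src = np.clip(np.arange(n) - np.round(disparity).astype(int), 0, n - 1)
    return neighbor_row[src]

def photometric_error(observed, synthesized):
    """Mean absolute reconstruction error; minimizing this over the
    predicted depth is the self-supervision signal."""
    return float(np.mean(np.abs(observed - synthesized)))

neighbor = np.arange(8, dtype=float)          # stand-in neighboring scanline
depth = np.full(8, 2.0)                       # constant depth of 2 m
synth = synthesize_from_neighbor(neighbor, depth, focal_px=10.0, baseline_m=0.2)
```

If the predicted depth is accurate, the synthesized scanline matches the observed one and the photometric error vanishes; inaccurate depth produces misaligned samples and a larger error.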
no code implementations • 16 Aug 2021 • Max Lennon, Nathan Drenkow, Philippe Burlina
To this end, we make several contributions: (A) we develop a new metric, mean Attack Success over Transformations (mAST), to evaluate patch attack robustness and invariance; (B) we systematically assess the robustness of patch attacks to 3D position and orientation under various conditions; in particular, we conduct a sensitivity analysis that provides important qualitative insights into attack effectiveness as a function of the patch's 3D pose relative to the camera (rotation, translation) and sets forth properties for 3D invariance of patch attacks; and (C) we draw novel qualitative conclusions, including that for some 3D transformations, namely rotation and loom, increasing the support of the training distribution yields an increase in patch success over the full range at test time.
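Reading mAST literally as "attack success averaged over a set of transformations," a minimal sketch might look as follows (the function names and toy data are hypothetical, not the authors' implementation):

```python
import numpy as np

def attack_success(predictions, target_class):
    """Fraction of model predictions matching the attacker's target class."""
    return float(np.mean(np.asarray(predictions) == target_class))

def mean_attack_success_over_transformations(results_by_transform, target_class):
    """mAST sketch: average the per-transformation attack success rates.

    results_by_transform maps a transformation parameter (e.g. a rotation
    angle) to the model's predictions on patched images under that
    transformation.
    """
    rates = [attack_success(preds, target_class)
             for preds in results_by_transform.values()]
    return float(np.mean(rates))

# Toy example: predictions under three patch rotations, target class 7.
results = {
    0:  [7, 7, 7, 2],   # 3/4 success at 0 degrees
    30: [7, 7, 1, 2],   # 2/4 success at 30 degrees
    60: [7, 0, 1, 2],   # 1/4 success at 60 degrees
}
mast = mean_attack_success_over_transformations(results, target_class=7)
```

Averaging over the transformation range is what makes the metric sensitive to invariance: a patch that only works at one pose scores poorly even if it succeeds perfectly there.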
no code implementations • 11 Dec 2020 • Nathan Drenkow, Philippe Burlina, Neil Fendley, Onyekachi Odoemene, Jared Markowitz
We interpret both detection problems through a probabilistic, Bayesian lens, whereby the objectness maps produced by our method serve as priors in a maximum-a-posteriori approach to the detection step.
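The Bayesian reading above can be sketched concretely (a hypothetical illustration under my own simplifications, not the paper's method): treat the objectness map as a prior over image locations, multiply it by a detector likelihood map, and take the argmax of the resulting posterior as the MAP detection.

```python
import numpy as np

def map_detection(objectness_prior, detector_likelihood):
    """Combine an objectness prior map with a detector likelihood map and
    return the maximum-a-posteriori location.

    Both inputs are 2D arrays over image locations; the unnormalized
    posterior is prior * likelihood, and the MAP estimate is its argmax.
    """
    posterior = objectness_prior * detector_likelihood
    posterior = posterior / posterior.sum()                  # normalize
    loc = np.unravel_index(np.argmax(posterior), posterior.shape)
    return loc, posterior

prior = np.array([[0.1, 0.1],
                  [0.1, 0.7]])      # objectness map acting as the prior
likelihood = np.array([[0.3, 0.3],
                       [0.3, 0.1]]) # raw detector scores
loc, post = map_detection(prior, likelihood)
```

Even though the detector's raw score is lowest at the bottom-right cell, the strong objectness prior there dominates the posterior, illustrating how the prior reshapes the detection step.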
no code implementations • 11 Dec 2020 • Nathan Drenkow, Neil Fendley, Philippe Burlina
We present a technique that utilizes properties of random projections to characterize the behavior of clean and adversarial examples across a diverse set of subspaces.
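A minimal sketch of the random-projection idea (illustrative only; the helper names and statistics are my assumptions, not the paper's procedure): draw a set of Gaussian random projection matrices, project an example into each low-dimensional subspace, and compare per-subspace statistics of clean versus perturbed inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_projections(d, n_subspaces, dim, rng):
    """Draw Gaussian random projection matrices; by the Johnson-Lindenstrauss
    lemma each approximately preserves pairwise distances."""
    return [rng.normal(size=(dim, d)) / np.sqrt(dim)
            for _ in range(n_subspaces)]

def project(x, mats):
    """Coordinates of x in each random subspace, stacked into one array."""
    return np.stack([m @ x for m in mats])

d = 256
mats = make_projections(d, n_subspaces=8, dim=16, rng=rng)
clean = rng.normal(size=d)
adversarial = clean + 0.5 * rng.normal(size=d)   # stand-in for an attack

p_clean = project(clean, mats)
p_adv = project(adversarial, mats)
# Per-subspace statistics of the projected difference can then serve to
# characterize how clean and adversarial examples behave across subspaces.
diffs = np.linalg.norm(p_clean - p_adv, axis=1)
```

Reusing the same matrices for both inputs is essential; otherwise the subspaces differ and the per-subspace comparison is meaningless.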
1 code implementation • ICCV 2021 • Zhaoshuo Li, Xingtong Liu, Nathan Drenkow, Andy Ding, Francis X. Creighton, Russell H. Taylor, Mathias Unberath
Stereo depth estimation relies on optimal correspondence matching between pixels on epipolar lines in the left and right images to infer depth.
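The geometry behind this can be made concrete with the standard rectified-stereo relation depth = focal * baseline / disparity (a generic illustration of the triangulation step, not this paper's transformer-based matcher):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a disparity map (pixels) into depth (meters) for a rectified
    stereo pair via depth = focal * baseline / disparity; zero disparity
    (no match / point at infinity) maps to infinite depth."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disp = np.array([[16.0, 32.0],
                 [64.0,  0.0]])   # 0 = no correspondence found
depth = depth_from_disparity(disp, focal_px=640.0, baseline_m=0.1)
```

The inverse relationship is why correspondence quality matters most for distant points: at small disparities, a one-pixel matching error shifts the estimated depth dramatically.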
no code implementations • 1 May 2020 • Neil Fendley, Max Lennon, I-Jeng Wang, Philippe Burlina, Nathan Drenkow
We focus on the development of effective adversarial patch attacks and, for the first time, jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
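Semi-transparency here amounts to alpha-blending the patch into the scene, which is where the attack/obtrusiveness trade-off lives; a minimal sketch (my own illustration, not the paper's patch-generation pipeline):

```python
import numpy as np

def apply_semitransparent_patch(image, patch, alpha, top_left):
    """Alpha-blend a patch into an image region:
    out = alpha * patch + (1 - alpha) * image.

    Lower alpha makes the patch less obtrusive (closer to the underlying
    scene) but typically also weakens its effect as an attack.
    """
    out = image.copy()
    r, c = top_left
    h, w = patch.shape[:2]
    out[r:r+h, c:c+w] = alpha * patch + (1.0 - alpha) * out[r:r+h, c:c+w]
    return out

image = np.zeros((4, 4))           # stand-in grayscale scene
patch = np.ones((2, 2))            # stand-in adversarial patch
blended = apply_semitransparent_patch(image, patch, alpha=0.5, top_left=(1, 1))
```

Tuning alpha is then a direct knob on the two antagonistic objectives: alpha = 1 recovers a conventional opaque patch, while alpha near 0 leaves the scene nearly unchanged.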