Search Results for author: Nathan Drenkow

Found 12 papers, 1 paper with code

From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments

no code implementations28 Feb 2024 Kanyifeechukwu J. Oguine, Roger D. Soberanis-Mukul, Nathan Drenkow, Mathias Unberath

We argue that SAM drastically over-segments images with high corruption levels, resulting in degraded performance when only a single segmentation mask is considered, while combining the masks that overlap the object of interest generates an accurate prediction.

Segmentation, Zero Shot Segmentation
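
The mask-combination idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the overlap measure, the threshold, and the function names are assumptions.

```python
import numpy as np

def combine_overlapping_masks(masks, roi_mask, min_overlap=0.1):
    """Union all candidate masks that sufficiently overlap a region of
    interest, folding SAM's over-segmented fragments into one prediction.
    `min_overlap` and the overlap ratio are illustrative assumptions.

    masks:    list of (H, W) boolean arrays, e.g. SAM's per-mask output
    roi_mask: (H, W) boolean array marking the object of interest
    """
    combined = np.zeros_like(roi_mask, dtype=bool)
    for m in masks:
        inter = np.logical_and(m, roi_mask).sum()
        if m.sum() > 0 and inter / m.sum() >= min_overlap:
            combined |= m  # fold this fragment into the prediction
    return combined
```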

RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in Object-centric Learning

no code implementations28 Aug 2023 Nathan Drenkow, Mathias Unberath

Lastly, while conventional robustness evaluations view corruptions as out-of-distribution, we use our causal framework to show that even training on in-distribution image corruptions does not guarantee increased model robustness.

Image Generation, Object, +1

Data AUDIT: Identifying Attribute Utility- and Detectability-Induced Bias in Task Models

no code implementations6 Apr 2023 Mitchell Pavlak, Nathan Drenkow, Nicholas Petrick, Mohammad Mehdi Farhangi, Mathias Unberath

To safely deploy deep learning-based computer vision models for computer-aided detection and diagnosis, we must ensure that they are robust and reliable.

Attribute, Causal Inference, +1

Context-Adaptive Deep Neural Networks via Bridge-Mode Connectivity

no code implementations28 Nov 2022 Nathan Drenkow, Alvin Tan, Chace Ashcraft, Kiran Karra

The deployment of machine learning models in safety-critical applications comes with the expectation that such models will perform well over a range of contexts (e.g., a vision model for classifying street signs should work in rural, city, and highway settings under varying lighting/weather conditions).

Image Classification

A Systematic Review of Robustness in Deep Learning for Computer Vision: Mind the gap?

no code implementations1 Dec 2021 Nathan Drenkow, Numair Sani, Ilya Shpitser, Mathias Unberath

We find this area of research has received disproportionately less attention relative to adversarial machine learning, yet a significant robustness gap exists that manifests in performance degradation similar in magnitude to adversarial conditions.

Adversarial Robustness, Data Augmentation, +1

On the Sins of Image Synthesis Loss for Self-supervised Depth Estimation

no code implementations13 Sep 2021 Zhaoshuo Li, Nathan Drenkow, Hao Ding, Andy S. Ding, Alexander Lu, Francis X. Creighton, Russell H. Taylor, Mathias Unberath

It is based on the idea that observed frames can be synthesized from neighboring frames if accurate depth of the scene is known - or in this case, estimated.

Attribute Depth Estimation +3
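
The view-synthesis idea the abstract describes, synthesizing an observed frame from a neighbor given estimated depth, reduces to the standard pinhole reprojection. Below is a minimal sketch under generic assumptions (pinhole intrinsics, a known relative pose); variable names are not the paper's.

```python
import numpy as np

def reproject(depth, K, T_src_tgt):
    """Map target-frame pixels into a source frame given estimated depth.
    A sketch of the reprojection underlying image synthesis losses;
    sampling the source image at the returned coordinates synthesizes
    the target frame.

    depth:     (H, W) per-pixel depth in the target frame
    K:         (3, 3) camera intrinsics
    T_src_tgt: (4, 4) relative pose taking target coords to source coords
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # back-project
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src = (T_src_tgt @ cam_h)[:3]                         # move to source frame
    proj = K @ src
    uv = proj[:2] / np.clip(proj[2:], 1e-6, None)         # perspective divide
    return uv.reshape(2, H, W)
```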

Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?

no code implementations16 Aug 2021 Max Lennon, Nathan Drenkow, Philippe Burlina

To this end, several contributions are made here: (A) we develop a new metric, mean Attack Success over Transformations (mAST), to evaluate patch attack robustness and invariance; (B) we systematically assess the robustness of patch attacks to 3D position and orientation under various conditions; in particular, we conduct a sensitivity analysis that provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera (rotation, translation) and sets forth some properties of patch attack 3D invariance; and (C) we draw novel qualitative conclusions, including demonstrating that for some 3D transformations, namely rotation and loom, increasing the support of the training distribution yields an increase in patch attack success over the full range at test time.
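
From the abstract's definition, mAST is an average of attack success over a sampled set of transformations. A minimal sketch, where `attack_success` is a hypothetical callable returning a success rate in [0, 1] for a given pose:

```python
import numpy as np

def mast(attack_success, transforms):
    """mean Attack Success over Transformations (mAST): average the
    patch's attack success rate over a sampled set of 3D poses.
    `attack_success(t)` is assumed to evaluate the patch under pose t.
    """
    return float(np.mean([attack_success(t) for t in transforms]))

# Example: success rates over patch rotations (illustrative numbers only).
rates = {0: 0.9, 30: 0.7, 60: 0.4, 90: 0.2}
print(mast(lambda angle: rates[angle], transforms=list(rates)))  # 0.55
```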

Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis

no code implementations11 Dec 2020 Nathan Drenkow, Neil Fendley, Philippe Burlina

We present a technique that utilizes properties of random projections to characterize the behavior of clean and adversarial examples across a diverse set of subspaces.

Adversarial Attack Detection
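
The random-projection idea above can be sketched as follows: project activations onto many random subspaces, collect a per-subspace statistic, and hand the statistics to any off-the-shelf detector. The choice of statistic (norm) and the detector are assumptions, not the paper's method.

```python
import numpy as np

def subspace_statistics(features, n_subspaces=32, dim=16, seed=0):
    """Characterize inputs by their behavior across random subspaces.
    Clean and adversarial examples are expected to yield separable
    statistics; a downstream classifier (not shown) does the detection.

    features: (N, D) array of model activations for N inputs
    """
    rng = np.random.default_rng(seed)
    N, D = features.shape
    stats = np.empty((N, n_subspaces))
    for k in range(n_subspaces):
        P = rng.standard_normal((D, dim)) / np.sqrt(dim)  # random projection
        stats[:, k] = np.linalg.norm(features @ P, axis=1)
    return stats
```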

Addressing Visual Search in Open and Closed Set Settings

no code implementations11 Dec 2020 Nathan Drenkow, Philippe Burlina, Neil Fendley, Onyekachi Odoemene, Jared Markowitz

We interpret both detection problems through a probabilistic, Bayesian lens, whereby the objectness maps produced by our method serve as priors in a maximum-a-posteriori approach to the detection step.

Object, object-detection, +1
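
The maximum-a-posteriori step the abstract describes combines a detector's per-location likelihood with the objectness map acting as a prior. A minimal sketch, assuming both are given as normalized score maps; working in log space is an implementation choice, not the paper's:

```python
import numpy as np

def map_scores(likelihood, objectness_prior, eps=1e-9):
    """Posterior ∝ likelihood × prior, computed per spatial location.
    Thresholding or taking the argmax of the result yields MAP detections.

    likelihood:       (H, W) detection scores, treated as p(obs | object)
    objectness_prior: (H, W) objectness map, treated as p(object)
    """
    return np.log(likelihood + eps) + np.log(objectness_prior + eps)
```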

Jacks of All Trades, Masters of None: Addressing Distributional Shift and Obtrusiveness via Transparent Patch Attacks

no code implementations1 May 2020 Neil Fendley, Max Lennon, I-Jeng Wang, Philippe Burlina, Nathan Drenkow

We focus on the development of effective adversarial patch attacks and -- for the first time -- jointly address the antagonistic objectives of attack success and obtrusiveness via the design of novel semi-transparent patches.
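
The obtrusiveness knob in a semi-transparent patch amounts to alpha-blending the patch into the scene. A minimal sketch of that compositing step; a real attack would additionally optimize the patch (and possibly the alpha) against a target model, which this omits:

```python
import numpy as np

def apply_transparent_patch(image, patch, top_left, alpha=0.5):
    """Composite a semi-transparent patch onto an image. Lower alpha is
    less obtrusive but typically weakens the attack, the trade-off the
    abstract refers to. Shapes are assumed to keep the patch in bounds.

    image, patch: float arrays in [0, 1], shapes (H, W, C) and (h, w, C)
    top_left:     (row, col) placement of the patch
    """
    out = image.copy()
    r, c = top_left
    h, w = patch.shape[:2]
    region = out[r:r + h, c:c + w]
    out[r:r + h, c:c + w] = alpha * patch + (1 - alpha) * region
    return out
```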
