Search Results for author: Nathan Inkawhich

Found 19 papers, 4 papers with code

Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers

no code implementations ICLR 2019 Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li

The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.

Action Recognition · Adversarial Attack +3

Feature Space Perturbations Yield More Transferable Adversarial Examples

1 code implementation CVPR 2019 Nathan Inkawhich, Wei Wen, Hai (Helen) Li, Yiran Chen

Many recent works have shown that deep learning models are vulnerable to quasi-imperceptible input perturbations, yet practitioners cannot fully explain this behavior.

Adversarial Attack

Transferable Perturbations of Deep Feature Distributions

no code implementations ICLR 2020 Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen

Almost all current adversarial attacks of CNN classifiers rely on information derived from the output layer of the network.

Adversarial Attack

Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap?

no code implementations 17 Mar 2021 Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen

During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.
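The online phase described above can be sketched as a feature-space targeted attack: perturb the input so that a whitebox feature extractor maps it near the representation of a highly related proxy class. The sketch below is illustrative only and assumes a linear map `f(x) = W x` as a stand-in for a CNN feature layer; the function name, step sizes, and the use of a signed-gradient PGD-style update are my assumptions, not the paper's exact procedure.

```python
import numpy as np

def proxy_feature_attack(x, W, proxy_mean, eps=0.5, steps=100, lr=0.01):
    """Hypothetical sketch: push the whitebox features W @ x toward the
    mean feature of a proxy class, staying in an L_inf ball of radius eps.
    W stands in for a real feature extractor; all values are illustrative."""
    x_adv = x.copy()
    for _ in range(steps):
        # gradient of ||W x - mu||^2 with respect to x is 2 W^T (W x - mu)
        grad = 2.0 * W.T @ (W @ x_adv - proxy_mean)
        x_adv = x_adv - lr * np.sign(grad)        # signed gradient step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto L_inf ball
    return x_adv
```

The hope, per the abstract, is that an input whose whitebox features resemble the proxy class will also be misclassified as the desired target class by the blackbox model.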

The Untapped Potential of Off-the-Shelf Convolutional Neural Networks

no code implementations 17 Mar 2021 Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen

Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.

Neural Architecture Search

Mixture Outlier Exposure: Towards Out-of-Distribution Detection in Fine-grained Environments

1 code implementation 7 Jun 2021 Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li

We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.

Medical Image Classification · Out-of-Distribution Detection +1
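The mixing scheme described in the abstract can be sketched directly: interpolate an in-distribution (ID) sample with a training outlier, and build a soft label whose confidence in the true class decays linearly with the mixing weight. The helper below is a minimal illustration under my own assumptions about tensor shapes and the uniform-label form of the outlier target; it is not the authors' released implementation.

```python
import numpy as np

def mixoe_pair(x_id, y_id, x_oe, num_classes, lam):
    """Hypothetical MixOE-style sketch: mix ID input x_id (true class y_id)
    with outlier input x_oe using weight lam in [0, 1]. The soft target
    gives the true class confidence that decays linearly as lam -> 0,
    blending toward a uniform distribution over classes."""
    x_mix = lam * x_id + (1.0 - lam) * x_oe
    y_soft = np.full(num_classes, (1.0 - lam) / num_classes)  # uniform part
    y_soft[y_id] += lam                                       # linear ID part
    return x_mix, y_soft
```

Training on such pairs is what, per the abstract, encourages prediction confidence to fall off smoothly as inputs transition from ID to OOD.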

Tunable Hybrid Proposal Networks for the Open World

no code implementations 23 Aug 2022 Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen

Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to only detect objects of the training classes.

Object Detection +1

Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification

no code implementations 9 Sep 2022 Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen

Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.

Anomaly Detection · Out of Distribution (OOD) Detection

A Global Model Approach to Robust Few-Shot SAR Automatic Target Recognition

no code implementations 20 Mar 2023 Nathan Inkawhich

In the first, a global representation model is trained via self-supervised learning on a large pool of diverse and unlabeled SAR data.

Meta-Learning · Out-of-Distribution Detection +1

SIO: Synthetic In-Distribution Data Benefits Out-of-Distribution Detection

1 code implementation 25 Mar 2023 Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley, Yiran Chen, Hai Li

Building reliable Out-of-Distribution (OOD) detectors is challenging, often requiring the use of OOD data during training.

Out-of-Distribution Detection

Establishing baselines and introducing TernaryMixOE for fine-grained out-of-distribution detection

no code implementations 30 Mar 2023 Noah Fleischmann, Walter Bennette, Nathan Inkawhich

Machine learning models deployed in the open world may encounter observations that they were not trained to recognize, and they risk misclassifying such observations with high confidence.

Out-of-Distribution Detection

Adversarial Attacks on Foundational Vision Models

no code implementations 28 Aug 2023 Nathan Inkawhich, Gwendolyn McDonald, Ryan Luley

We show our attacks to be potent in whitebox and blackbox settings, as well as when transferred across foundational model types (e.g., attack DINOv2 with CLIP)!

Comprehensive OOD Detection Improvements

no code implementations 18 Jan 2024 Anish Lakkapragada, Amol Khanna, Edward Raff, Nathan Inkawhich

As machine learning becomes increasingly prevalent in impactful decisions, recognizing when inference data is outside the model's expected input distribution is paramount for giving context to predictions.

Dimensionality Reduction · Out of Distribution (OOD) Detection

Out-of-Distribution Detection via Deep Multi-Comprehension Ensemble

no code implementations 24 Mar 2024 Chenhui Xu, Fuxun Yu, Zirui Xu, Nathan Inkawhich, Xiang Chen

Our experimental results demonstrate the superior performance of the MC Ensemble strategy in OOD detection compared to both the naive Deep Ensemble method and a standalone model of comparable size.

Out-of-Distribution Detection

SoK: A Review of Differentially Private Linear Models For High-Dimensional Data

no code implementations 1 Apr 2024 Amol Khanna, Edward Raff, Nathan Inkawhich

Linear models are ubiquitous in data science, but are particularly prone to overfitting and data memorization in high dimensions.

Memorization
