Search Results for author: Nathan Inkawhich

Found 11 papers, 3 papers with code

Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification

no code implementations · 9 Sep 2022 · Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen

Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.
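The per-level diagnosis described above can be sketched as follows. This is a minimal, hypothetical illustration (the function name, the `parents` mapping structure, and the two-level example are assumptions, not the paper's actual code): leaf-level predictions and labels are mapped upward through a label hierarchy, and accuracy is scored separately at each level.

```python
def per_level_accuracy(preds, labels, parents):
    """Hypothetical sketch: score accuracy at each level of a label
    hierarchy. `parents` is a list of dicts, one per coarser level,
    each mapping a label at the finer level to its parent label."""
    accs = {}
    cur_p, cur_l = list(preds), list(labels)
    # Level 0: fine-grained (leaf) accuracy.
    accs[0] = sum(p == t for p, t in zip(cur_p, cur_l)) / len(cur_l)
    # Coarser levels: lift both predictions and labels, then re-score.
    for lvl, pmap in enumerate(parents, start=1):
        cur_p = [pmap[p] for p in cur_p]
        cur_l = [pmap[t] for t in cur_l]
        accs[lvl] = sum(p == t for p, t in zip(cur_p, cur_l)) / len(cur_l)
    return accs
```

A mistake at the leaf level (e.g. predicting "dog" for "wolf") may still be correct at a coarser level (both map to "animal"), which is exactly the kind of gap this diagnosis exposes.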

Anomaly Detection · OOD Detection

Self-Trained Proposal Networks for the Open World

no code implementations · 23 Aug 2022 · Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen

Current state-of-the-art object proposal networks are trained under a closed-world assumption, meaning they learn to detect only objects of the training classes.

Object Detection +1

Mixture Outlier Exposure: Towards Out-of-Distribution Detection in Fine-grained Environments

1 code implementation · 7 Jun 2021 · Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li

We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.
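The mixing scheme described above can be sketched in a few lines. This is a minimal illustration of the idea as stated in the excerpt, not the authors' implementation; the function name and signature are assumptions. An ID sample is linearly blended with a training outlier, and the soft target's confidence on the true class decays linearly toward a uniform distribution as the input moves from ID (lam = 1) to OOD (lam = 0).

```python
import numpy as np

def mixoe_pair(x_id, y_id, x_out, num_classes, lam):
    """Hypothetical MixOE-style sketch: blend an in-distribution (ID)
    sample with a training outlier, and build a soft target whose
    confidence decays linearly with the mixing ratio `lam`."""
    # Linearly interpolate between the ID input and the outlier input.
    x_mix = lam * x_id + (1.0 - lam) * x_out
    # Target: lam * one-hot(true class) + (1 - lam) * uniform.
    one_hot = np.eye(num_classes)[y_id]
    uniform = np.full(num_classes, 1.0 / num_classes)
    y_mix = lam * one_hot + (1.0 - lam) * uniform
    return x_mix, y_mix
```

Training against such soft targets (e.g. with a cross-entropy loss) pushes the model's confidence to fall off smoothly across the ID-to-OOD transition rather than only on fully out-of-distribution inputs.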

Medical Image Classification · OOD Detection +1

Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap?

no code implementations · 17 Mar 2021 · Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen

During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.

The Untapped Potential of Off-the-Shelf Convolutional Neural Networks

no code implementations · 17 Mar 2021 · Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen

Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.

Neural Architecture Search

Transferable Perturbations of Deep Feature Distributions

no code implementations · ICLR 2020 · Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen

Almost all current adversarial attacks of CNN classifiers rely on information derived from the output layer of the network.

Adversarial Attack

Feature Space Perturbations Yield More Transferable Adversarial Examples

1 code implementation · CVPR 2019 · Nathan Inkawhich, Wei Wen, Hai (Helen) Li, Yiran Chen

Many recent works have shown that deep learning models are vulnerable to quasi-imperceptible input perturbations, yet practitioners cannot fully explain this behavior.

Adversarial Attack

Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers

no code implementations · ICLR 2019 · Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li

The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.

Action Recognition · Adversarial Attack +2
