no code implementations • 9 Sep 2022 • Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen
Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.
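As a rough illustration of what per-level diagnosis can look like, the sketch below scores accuracy at every level of a label hierarchy given leaf-level predictions. The helper `level_of` (mapping a leaf class to its ancestor at a given level) is a hypothetical stand-in, not the paper's actual evaluation code.

```python
# Minimal sketch: accuracy at each level of a class hierarchy.
# `level_of(c, lvl)` is a hypothetical mapping from a leaf class to its
# ancestor node at hierarchy level `lvl`.
from collections import defaultdict

def per_level_accuracy(preds, labels, level_of, num_levels):
    """Diagnose a classifier's performance at every hierarchy level."""
    correct, total = defaultdict(int), defaultdict(int)
    for p, y in zip(preds, labels):
        for lvl in range(num_levels):
            total[lvl] += 1
            # A prediction is correct at this level if it shares the ancestor.
            if level_of(p, lvl) == level_of(y, lvl):
                correct[lvl] += 1
    return {lvl: correct[lvl] / total[lvl] for lvl in range(num_levels)}
```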
no code implementations • 23 Aug 2022 • Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen
Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to only detect objects of the training classes.
no code implementations • 2 Jul 2021 • Jerrick Liu, Nathan Inkawhich, Oliver Nina, Radu Timofte, Sahil Jain, Bob Lee, Yuru Duan, Wei Wei, Lei Zhang, Songzheng Xu, Yuxuan Sun, Jiaqi Tang, Mengru Ma, Gongzhe Li, Xueli Geng, Huanqia Cai, Chengxue Cai, Sol Cummings, Casian Miron, Alexandru Pasarica, Cheng-Yen Yang, Hung-Min Hsu, Jiarui Cai, Jie Mei, Chia-Ying Yeh, Jenq-Neng Hwang, Michael Xin, Zhongkai Shangguan, Zihe Zheng, Xu Yifei, Lehan Yang, Kele Xu, Min Feng
In this paper, we introduce the first Challenge on Multi-modal Aerial View Object Classification (MAVOC) in conjunction with the NTIRE 2021 workshop at CVPR.
1 code implementation • 7 Jun 2021 • Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li
We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.
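A minimal sketch of a MixOE-style objective is shown below, assuming a PyTorch classifier and aligned batches of ID and outlier images; the mixing distribution, helper names, and loss weighting are illustrative rather than the paper's exact implementation.

```python
# Sketch of a MixOE-style loss: mix ID inputs with training outliers and
# supervise with soft labels whose confidence decays linearly from ID to OOD.
import torch
import torch.nn.functional as F

def mixoe_loss(model, x_id, y_id, x_oe, num_classes, alpha=1.0, weight=1.0):
    # Standard cross-entropy on the clean in-distribution batch.
    ce = F.cross_entropy(model(x_id), y_id)

    # Blend ID inputs with outlier inputs (batches assumed the same shape).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_id + (1.0 - lam) * x_oe

    # Soft target: true-class confidence decays linearly toward uniform
    # as the input transitions from ID (lam = 1) to OOD (lam = 0).
    one_hot = F.one_hot(y_id, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    soft_target = lam * one_hot + (1.0 - lam) * uniform

    log_probs = F.log_softmax(model(x_mix), dim=1)
    mix_loss = -(soft_target * log_probs).sum(dim=1).mean()
    return ce + weight * mix_loss
```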
no code implementations • 17 Mar 2021 • Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen
During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.
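To make the idea concrete, here is a hedged sketch of an online feature-space attack that pushes an input toward a proxy-class representation extracted from the whitebox model. The feature extractor `whitebox_feats`, the precomputed `proxy_centroid`, and the optimization settings are assumptions for illustration; the paper's attack and proxy-selection procedure are more involved.

```python
# Sketch: perturb x so its whitebox features approach a proxy-class centroid,
# in the hope that the blackbox model then predicts the desired target class.
import torch

def proxy_feature_attack(whitebox_feats, x, proxy_centroid,
                         eps=8 / 255, steps=50, lr=0.01):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        feats = whitebox_feats(torch.clamp(x + delta, 0, 1))
        # Distance between current features and the proxy-class centroid.
        loss = torch.norm(feats - proxy_centroid, dim=1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # step toward the proxy class
            delta.clamp_(-eps, eps)           # stay within the L-inf budget
        delta.grad.zero_()
    return torch.clamp(x + delta, 0, 1).detach()
```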
no code implementations • 17 Mar 2021 • Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen
Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.
3 code implementations • NeurIPS 2020 • Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, Hai Li
The process is hard, often requires models with large capacity, and suffers from significant loss on clean data accuracy.
no code implementations • NeurIPS 2020 • Nathan Inkawhich, Kevin J Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, Yiran Chen
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
no code implementations • ICLR 2020 • Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen
Almost all current adversarial attacks on CNN classifiers rely on information derived from the output layer of the network.
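For contrast, a standard output-layer attack such as FGSM is sketched below: the perturbation is driven entirely by the gradient of the classification loss computed at the network's output logits. This is illustrative background, not the paper's method.

```python
# FGSM: a one-step attack whose signal comes only from the output-layer loss.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # loss on the output logits
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # step in the loss-increasing direction
    return torch.clamp(x_adv, 0, 1).detach()
```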
1 code implementation • CVPR 2019 • Nathan Inkawhich, Wei Wen, Hai (Helen) Li, Yiran Chen
Many recent works have shown that deep learning models are vulnerable to quasi-imperceptible input perturbations, yet practitioners cannot fully explain this behavior.
no code implementations • ICLR 2019 • Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li
The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.