Search Results for author: Haoqi Wang

Found 6 papers, 5 papers with code

Get the Best of Both Worlds: Improving Accuracy and Transferability by Grassmann Class Representation

1 code implementation ICCV 2023 Haoqi Wang, Zhizhong Li, Wayne Zhang

We generalize the class vectors found in neural networks to linear subspaces (i.e., points on the Grassmann manifold) and show that the Grassmann Class Representation (GCR) enables simultaneous improvements in accuracy and feature transferability.
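In other words, each class is represented by a low-dimensional subspace rather than a single weight vector, and a feature is scored by the size of its projection onto each class subspace. Below is a minimal illustrative sketch of that scoring rule, not the authors' released code; the subspace dimension k and the orthonormal bases are placeholder assumptions.

```python
import torch

def grassmann_class_scores(features, subspace_bases):
    """Score each class by the norm of the feature's projection onto
    that class's subspace (a point on the Grassmann manifold).

    features:       (N, D) batch of feature vectors.
    subspace_bases: (C, D, k) orthonormal basis for each of C class
                    subspaces of dimension k (k=1 roughly recovers
                    ordinary class vectors up to normalization).
    """
    # Coordinates of each feature in every class subspace: B^T x
    coords = torch.einsum('nd,cdk->nck', features, subspace_bases)
    # Projection norm serves as the class logit.
    return coords.norm(dim=-1)  # (N, C)

# Toy usage: 8 features of dim 64, 10 classes, 3-dimensional subspaces.
feats = torch.randn(8, 64)
bases, _ = torch.linalg.qr(torch.randn(10, 64, 3))  # orthonormalize bases
logits = grassmann_class_scores(feats, bases)
print(logits.shape)  # torch.Size([8, 10])
```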

OpenOOD: Benchmarking Generalized Out-of-Distribution Detection

3 code implementations 13 Oct 2022 Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, Ziwei Liu

Out-of-distribution (OOD) detection is vital to safety-critical machine learning applications and has thus been extensively studied, with a plethora of methods developed in the literature.

Anomaly Detection · Benchmarking

ViM: Out-Of-Distribution with Virtual-logit Matching

2 code implementations CVPR 2022 Haoqi Wang, Zhizhong Li, Litong Feng, Wayne Zhang

Most of the existing Out-Of-Distribution (OOD) detection algorithms depend on a single input source: the feature, the logit, or the softmax probability.

Out-of-Distribution Detection
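For reference, each of the three single input sources mentioned in the abstract admits a one-line detection score. The sketch below shows common single-source baselines; the function names are illustrative and not from the ViM codebase. ViM itself combines the sources, deriving a virtual logit from the feature and matching it against the real logits.

```python
import torch
import torch.nn.functional as F

def msp_score(logits):
    """Softmax-based score: maximum softmax probability (MSP)."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def max_logit_score(logits):
    """Logit-based score: maximum raw logit."""
    return logits.max(dim=-1).values

def feature_norm_score(features):
    """Feature-based score: L2 norm of the penultimate-layer feature."""
    return features.norm(dim=-1)

# Higher score = more in-distribution-like under these conventions.
logits, feats = torch.randn(4, 10), torch.randn(4, 512)
print(msp_score(logits), max_logit_score(logits), feature_norm_score(feats))
```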

Semantically Coherent Out-of-Distribution Detection

2 code implementations ICCV 2021 Jingkang Yang, Haoqi Wang, Litong Feng, Xiaopeng Yan, Huabin Zheng, Wayne Zhang, Ziwei Liu

The proposed UDG can not only enrich the semantic knowledge of the model by exploiting unlabeled data in an unsupervised manner, but also distinguish ID/OOD samples to enhance ID classification and OOD detection tasks simultaneously.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection
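The grouping of unlabeled data can be pictured as a clustering step over pooled labeled and unlabeled features, where clusters dominated by labeled ID samples are treated as ID-like. The sketch below is a loose illustration of that idea under stated assumptions, not the paper's UDG procedure; the cluster count and purity threshold are arbitrary placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_unlabeled(id_feats, unlabeled_feats, n_clusters=10, purity=0.5):
    """Cluster labeled ID and unlabeled features jointly; mark unlabeled
    samples in clusters dominated by labeled ID data as ID-like, the rest
    as OOD-like (an illustrative grouping rule, not the paper's exact one).
    """
    all_feats = np.concatenate([id_feats, unlabeled_feats], axis=0)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(all_feats)
    id_labels, unl_labels = labels[:len(id_feats)], labels[len(id_feats):]

    id_like = np.zeros(len(unlabeled_feats), dtype=bool)
    for c in range(n_clusters):
        n_id = (id_labels == c).sum()
        n_unl = (unl_labels == c).sum()
        if n_id + n_unl > 0 and n_id / (n_id + n_unl) >= purity:
            id_like[unl_labels == c] = True
    return id_like  # True = treated as in-distribution-like

# Toy usage with random features of dimension 128.
mask = group_unlabeled(np.random.randn(200, 128), np.random.randn(300, 128))
print(mask.sum(), "of", mask.size, "unlabeled samples grouped as ID-like")
```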

Detect and remove watermark in deep neural networks via generative adversarial networks

no code implementations 15 Jun 2021 Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu

Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that the proposed method can effectively remove about 98% of the watermark in DNN models, with the watermark retention rate dropping from 100% to less than 2% after the attack is applied.
