no code implementations • 28 May 2024 • Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, Xiaolin Huang
In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint from those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs).
1 code implementation • 30 Mar 2024 • Tao Li, Qinghua Tao, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, Mingzhen He, Xiaolin Huang
Improving the generalization ability of modern deep neural networks (DNNs) is a fundamental challenge in machine learning.
no code implementations • 23 Feb 2024 • Xinwen Cheng, Zhehao Huang, WenXin Zhou, Zhengbao He, Ruikai Yang, Yingwen Wu, Xiaolin Huang
We first theoretically discover that a sample's contribution during training is reflected in the learned model's sensitivity to it.
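The claim above suggests a practical probe: if a sample contributed to training, the learned model should be unusually sensitive to it. Below is a minimal sketch of one such sensitivity proxy, the input-gradient norm of the loss, assuming a PyTorch classifier `model`; the function name and the choice of proxy are illustrative, not necessarily the paper's exact measure.

```python
import torch
import torch.nn.functional as F

def input_sensitivity(model, x, y):
    """Proxy for the model's sensitivity to a batch of samples: the
    norm of the loss gradient with respect to each input. Samples the
    model is unusually sensitive to are candidates for having
    contributed strongly during training."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1)   # one score per sample
```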
1 code implementation • 11 Nov 2023 • Zhehao Huang, Tao Li, Chenhe Yuan, Yingwen Wu, Xiaolin Huang
Online continual learning is a challenging problem where models must learn from a non-stationary data stream while avoiding catastrophic forgetting.
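For context on this setting, a common baseline keeps a small replay memory of past stream samples to mitigate forgetting. The sketch below implements reservoir sampling, a standard buffer strategy, purely as an illustration of the setup and not as this paper's method.

```python
import random
import torch

class ReservoirBuffer:
    """Fixed-size replay memory filled by reservoir sampling: each
    stream sample survives with probability capacity/seen, so the
    buffer stays an unbiased sample of the whole stream."""
    def __init__(self, capacity):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        xs, ys = zip(*random.sample(self.data, min(k, len(self.data))))
        return torch.stack(xs), torch.stack(ys)
```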
no code implementations • 26 Oct 2023 • Yingwen Wu, Tao Li, Xinwen Cheng, Jie Yang, Xiaolin Huang
To bridge this gap, in this paper, we conduct a comprehensive investigation into leveraging the entirety of gradient information for OOD detection.
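As background on gradient-based OOD scores, the sketch below computes a score in the spirit of GradNorm (Huang et al., 2021): backpropagate the KL divergence to a uniform distribution and take the L1 norm of the last layer's gradient. This is one well-known way to use gradients for OOD detection, shown only for orientation; the paper investigates exploiting the full gradient rather than this single slice.

```python
import torch
import torch.nn.functional as F

def gradnorm_score(model, x, num_classes):
    """GradNorm-style OOD score for a single input x (batch size 1):
    backprop the KL divergence between the softmax output and a
    uniform distribution, then take the L1 norm of the gradient of
    the last parameter tensor (typically the classifier head).
    Larger scores indicate more ID-like inputs."""
    model.zero_grad()
    log_probs = model(x).log_softmax(dim=1)
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    loss = F.kl_div(log_probs, uniform, reduction="batchmean")
    loss.backward()
    last = list(model.parameters())[-1]
    return last.grad.abs().sum().item()
```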
1 code implementation • 21 Nov 2022 • Tao Li, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, Ming Yang, Xiaolin Huang
To fully unlock the potential of deep neural networks (DNNs), various learning algorithms have been developed to improve their generalization ability.
1 code implementation • 20 Nov 2022 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Xiaolin Huang, Jie Yang
Randomized Smoothing (RS) is a promising technique for certified robustness, and within RS the ensemble of multiple deep neural networks (DNNs) has recently shown state-of-the-art performance.
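As a reminder of how RS and ensembling compose, the sketch below gives a Monte Carlo prediction for a smoothed ensemble: average the base models' class probabilities over Gaussian-perturbed copies of the input. It assumes PyTorch classifiers in `models` and omits the statistical certification step (abstention, confidence bounds) of a full RS pipeline.

```python
import torch

@torch.no_grad()
def smoothed_ensemble_predict(models, x, sigma=0.25, n=100):
    """Monte Carlo prediction of a smoothed ensemble: average the
    class probabilities of all base models over n Gaussian
    perturbations of the input, then vote."""
    total = 0.0
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        total = total + torch.stack(
            [m(noisy).softmax(dim=1) for m in models]).mean(dim=0)
    return total.argmax(dim=1)
```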
1 code implementation • 12 Aug 2022 • Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
The wide application of deep neural networks (DNNs) demands increasing attention to their real-world robustness, i.e., whether a DNN resists black-box adversarial attacks. Among these, score-based query attacks (SQAs) are the most threatening, since they can effectively hurt a victim network with access only to model outputs.
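To make the threat concrete, the sketch below implements a minimal score-based query attack in the style of SimBA (Guo et al., 2019): it queries only the victim's output probabilities and keeps any single-coordinate perturbation that lowers the true-class score. This illustrates what an SQA is; it is not the method of the paper above.

```python
import torch

@torch.no_grad()
def simba_attack(model, x, y, eps=0.2, max_iters=1000):
    """Minimal SimBA-style score-based query attack for a single
    input x of shape (1, ...): perturb one random coordinate at a
    time and keep any step that lowers the victim's probability on
    the true class y. Only output scores are queried."""
    x_adv = x.clone()
    perm = torch.randperm(x_adv.numel())
    p_best = model(x_adv).softmax(dim=1)[0, y]
    for i in range(min(max_iters, x_adv.numel())):
        delta = torch.zeros_like(x_adv).view(-1)
        delta[perm[i]] = eps
        delta = delta.view_as(x_adv)
        for step in (delta, -delta):
            p = model(x_adv + step).softmax(dim=1)[0, y]
            if p < p_best:
                x_adv, p_best = x_adv + step, p
                break
    return x_adv
```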
1 code implementation • 26 May 2022 • Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin
Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.
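The core mechanic of subspace training can be shown in a few lines: constrain the weights to theta = theta0 + P^T c and optimize only the d coordinates c. In the sketch below the basis P is random purely as a placeholder; in practice it could be extracted, e.g., from PCA over training checkpoints, and `loss_fn` is a stand-in for the network loss.

```python
import torch

# Minimal sketch of subspace training: weights are constrained to
# theta = theta0 + P^T c, and only the d coordinates c are trained.
n, d = 10_000, 40                                # full / subspace dims
theta0 = torch.randn(n)                          # anchor weights
P = torch.linalg.qr(torch.randn(n, d))[0].t()    # (d, n), orthonormal rows
c = torch.zeros(d, requires_grad=True)           # subspace coordinates
opt = torch.optim.SGD([c], lr=0.1)

def loss_fn(theta):                              # toy quadratic loss
    return (theta ** 2).mean()

for _ in range(100):
    theta = theta0 + P.t() @ c                   # map back to weight space
    loss = loss_fn(theta)
    opt.zero_grad()
    loss.backward()
    opt.step()
```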
1 code implementation • 24 May 2022 • Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang
Score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, using only the model's output scores.
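One family of defenses against SQAs post-processes the output scores so that the attacker's queries become uninformative while the predicted label stays unchanged. The sketch below shows that general idea with simple noise plus an argmax-preserving correction; it is a simplified illustration of output post-processing, not the paper's actual score-shaping procedure.

```python
import torch

@torch.no_grad()
def perturb_scores(logits, tau=0.1):
    """Post-process output scores so that query-based attackers see
    noisy, misleading values while the predicted label is unchanged:
    add small noise, then bump the original top class back on top."""
    noisy = logits + tau * torch.randn_like(logits)
    orig_top = logits.argmax(dim=1, keepdim=True)        # (N, 1)
    row_max = noisy.max(dim=1, keepdim=True).values      # (N, 1)
    noisy.scatter_(1, orig_top, row_max + tau)           # keep argmax
    return noisy
```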
1 code implementation • CVPR 2022 • Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang
Single-step adversarial training (AT) has received wide attention as it has proven to be both efficient and robust.
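Single-step AT here means crafting the adversarial example with one gradient-sign (FGSM) step and training on it, as in the textbook sketch below; it assumes image inputs in [0, 1] and a PyTorch classifier. This vanilla recipe is known to be prone to catastrophic overfitting, which is what makes stabilizing it an interesting problem.

```python
import torch
import torch.nn.functional as F

def fgsm_at_step(model, opt, x, y, eps=8 / 255):
    """One step of single-step (FGSM-based) adversarial training:
    craft the perturbation with a single gradient-sign step, then
    train on the perturbed batch. Assumes inputs in [0, 1]."""
    x_req = x.clone().detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(x_req), y)
    grad, = torch.autograd.grad(clean_loss, x_req)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
    opt.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    opt.step()
    return adv_loss.item()
```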
2 code implementations • 23 Oct 2020 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang
In this way, the proposed DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by these mutually orthogonal paths.
no code implementations • 28 Sep 2020 • Kun Fang, Xiaolin Huang, Yingwen Wu, Tao Li, Jie Yang
To defend against adversarial attacks, we design a block containing multiple paths to learn robust features, with the parameters of these paths required to be orthogonal to one another.
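The design reads as a multi-branch block whose branch parameters are regularized toward mutual orthogonality. The sketch below is a minimal PyTorch rendering of that idea with linear paths and a pairwise inner-product penalty; the shapes, the averaging merge, and the exact penalty are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiPathBlock(nn.Module):
    """A block with several parallel paths whose outputs are merged
    by averaging; a penalty pushes the flattened path weights to be
    mutually orthogonal."""
    def __init__(self, dim, num_paths=4):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Linear(dim, dim, bias=False) for _ in range(num_paths))

    def forward(self, x):
        return torch.stack([p(x) for p in self.paths]).mean(dim=0)

    def orthogonality_penalty(self):
        # penalize pairwise inner products between path weights
        ws = [p.weight.flatten() for p in self.paths]
        pen = ws[0].new_zeros(())
        for i in range(len(ws)):
            for j in range(i + 1, len(ws)):
                pen = pen + (ws[i] @ ws[j]) ** 2
        return pen

# Training would add the penalty to the task loss:
# loss = task_loss + lam * block.orthogonality_penalty()
```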