Search Results for author: Yingwen Wu

Found 11 papers, 9 papers with code

Revisiting Random Weight Perturbation for Efficiently Improving Generalization

1 code implementation • 30 Mar 2024 • Tao Li, Qinghua Tao, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, Mingzhen He, Xiaolin Huang

Improving the generalization ability of modern deep neural networks (DNNs) is a fundamental challenge in machine learning.
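Random weight perturbation (RWP) improves generalization by computing gradients at randomly perturbed weights, which biases training toward flatter minima. The sketch below illustrates that general idea only, assuming a standard PyTorch training loop; the `rwp_step` helper and the `sigma` noise scale are illustrative names, not the paper's implementation.

```python
import torch

def rwp_step(model, loss_fn, x, y, optimizer, sigma=0.01):
    """One training step with random weight perturbation (illustrative).

    Gradients are computed at randomly perturbed weights w + eps,
    eps ~ N(0, sigma^2), then applied to the original weights.
    """
    # Save the noise and perturb the weights in place.
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            eps = sigma * torch.randn_like(p)
            p.add_(eps)
            noises.append(eps)

    # Gradient at the perturbed point.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()

    # Restore the original weights, then update with the perturbed gradient.
    with torch.no_grad():
        for p, eps in zip(model.parameters(), noises):
            p.sub_(eps)
    optimizer.step()
    return loss.item()
```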

Online Continual Learning via Logit Adjusted Softmax

1 code implementation • 11 Nov 2023 • Zhehao Huang, Tao Li, Chenhe Yuan, Yingwen Wu, Xiaolin Huang

Online continual learning is a challenging problem where models must learn from a non-stationary data stream while avoiding catastrophic forgetting.

Continual Learning
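Logit adjustment is commonly implemented by offsetting each logit with the log of its empirical class frequency before the softmax cross-entropy, so rare (e.g., previously seen) classes are not drowned out by frequent new ones. A minimal sketch, assuming PyTorch and stream-accumulated `class_counts`; the paper's exact adjustment for the online setting may differ.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy on prior-adjusted logits (illustrative sketch).

    Adds tau * log(pi_c) to each logit, where pi_c is the empirical
    class frequency observed so far in the data stream.
    """
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)  # broadcast over batch
    return F.cross_entropy(adjusted, targets)

# Example: 5 classes, counts accumulated from the online stream.
counts = torch.tensor([500, 300, 100, 40, 10])
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
loss = logit_adjusted_loss(logits, targets, counts)
```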

Low-Dimensional Gradient Helps Out-of-Distribution Detection

no code implementations • 26 Oct 2023 • Yingwen Wu, Tao Li, Xinwen Cheng, Jie Yang, Xiaolin Huang

To bridge this gap, in this paper, we conduct a comprehensive investigation into leveraging the entirety of gradient information for OOD detection.

Dimensionality Reduction • Out-of-Distribution Detection
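Gradient-based OOD detection generally scores a sample by how strongly it perturbs the model's parameters. The sketch below shows a GradNorm-style score as one common instance, assuming a PyTorch classifier; it illustrates the family of methods, not this paper's low-dimensional approach.

```python
import torch
import torch.nn.functional as F

def gradient_ood_score(model, x):
    """GradNorm-style OOD score (illustrative, not the paper's exact method).

    Scores a sample by the gradient of the KL divergence between the
    softmax output and a uniform distribution; in-distribution samples
    tend to produce larger gradients.
    """
    model.zero_grad()
    logits = model(x.unsqueeze(0))
    num_classes = logits.shape[-1]
    uniform = torch.full_like(logits, 1.0 / num_classes)
    loss = F.kl_div(logits.log_softmax(dim=-1), uniform, reduction="sum")
    loss.backward()
    # Sum of L1 gradient norms over parameters as the score.
    return sum(p.grad.abs().sum().item()
               for p in model.parameters() if p.grad is not None)
```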

Efficient Generalization Improvement Guided by Random Weight Perturbation

1 code implementation • 21 Nov 2022 • Tao Li, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, Ming Yang, Xiaolin Huang

To fully uncover the great potential of deep neural networks (DNNs), various learning algorithms have been developed to improve the model's generalization ability.

On Multi-head Ensemble of Smoothed Classifiers for Certified Robustness

1 code implementation • 20 Nov 2022 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Xiaolin Huang, Jie Yang

Randomized Smoothing (RS) is a promising technique for certified robustness, and ensembles of multiple deep neural networks (DNNs) have recently shown state-of-the-art performance within RS.
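For context, the base smoothed classifier that such ensembles build on predicts by majority vote over Gaussian-noised copies of the input. A minimal PyTorch sketch of that prediction step (without the statistical certification test, and not the paper's multi-head ensemble):

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n=100):
    """Prediction of the smoothed classifier g(x) = argmax_c P(f(x+noise)=c).

    Monte-Carlo estimate: classify n Gaussian-noised copies of x and
    take the majority vote; certification would add a statistical test.
    """
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
        votes = model(noisy).argmax(dim=1)
    return torch.bincount(votes).argmax().item()
```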

Unifying Gradients to Improve Real-world Robustness for Deep Networks

1 code implementation • 12 Aug 2022 • Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang

The wide application of deep neural networks (DNNs) demands increasing attention to their real-world robustness, i.e., whether a DNN resists black-box adversarial attacks. Among these, score-based query attacks (SQAs) are the most threatening, since they can effectively hurt a victim network with access only to model outputs.
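To make the threat concrete: an SQA only queries output scores and greedily keeps perturbations that lower the true-class score. A minimal SimBA-style sketch, assuming a hypothetical `score_fn` that returns the model's scores; this illustrates SQAs in general, not the paper's defense.

```python
import torch

def simple_score_attack(score_fn, x, y, eps=8/255, steps=1000):
    """A minimal SimBA-style score-based query attack (illustrative).

    The attacker nudges one random input coordinate at a time and keeps
    the change whenever the true-class score drops.
    """
    x_adv = x.clone()
    best = score_fn(x_adv.unsqueeze(0))[0, y].item()
    for _ in range(steps):
        delta = torch.zeros_like(x_adv)
        idx = torch.randint(0, x_adv.numel(), (1,))
        delta.view(-1)[idx] = eps * (1 if torch.rand(1) < 0.5 else -1)
        candidate = (x_adv + delta).clamp(0, 1)
        s = score_fn(candidate.unsqueeze(0))[0, y].item()
        if s < best:  # lower true-class score = progress for the attacker
            x_adv, best = candidate, s
    return x_adv
```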

Trainable Weight Averaging: A General Approach for Subspace Training

1 code implementation • 26 May 2022 • Tao Li, Zhehao Huang, Yingwen Wu, Zhengbao He, Qinghua Tao, Xiaolin Huang, Chih-Jen Lin

Training deep neural networks (DNNs) in low-dimensional subspaces is a promising direction for achieving efficient training and better generalization performance.

Dimensionality Reduction • Efficient Neural Network • +3
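The core idea of trainable weight averaging is to optimize only the mixing coefficients over a set of historical checkpoints, i.e., to train inside the subspace they span. A minimal sketch, assuming flattened checkpoint vectors; the names `twa_weights` and `alphas` are illustrative.

```python
import torch

def twa_weights(checkpoints, alphas):
    """Combine checkpoints with trainable coefficients (illustrative).

    `checkpoints` is a list of flattened parameter vectors w_i; only the
    coefficients `alphas` are updated by the optimizer, so training is
    confined to the low-dimensional subspace spanned by the w_i.
    """
    W = torch.stack(checkpoints)  # (k, d) basis of the subspace
    return alphas @ W             # weighted average, shape (d,)

# Example: 3 checkpoints of a 10-parameter model; only `alphas` require grad.
ckpts = [torch.randn(10) for _ in range(3)]
alphas = torch.full((3,), 1 / 3, requires_grad=True)
w = twa_weights(ckpts, alphas)
loss = w.pow(2).sum()   # stand-in for the actual training loss
loss.backward()         # gradient flows to alphas only
```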

Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks

1 code implementation • 24 May 2022 • Sizhe Chen, Zhehao Huang, Qinghua Tao, Yingwen Wu, Cihang Xie, Xiaolin Huang

The score-based query attacks (SQAs) pose practical threats to deep neural networks by crafting adversarial perturbations within dozens of queries, only using the model's output scores.

Adversarial Attack
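A defense in this spirit post-processes the returned scores so that attackers descend a misleading landscape while the predicted label is preserved. The sketch below is a deliberately simplified stand-in (random score obfuscation with the top-1 class re-imposed); the paper's actual AAA method instead fits the logits to a carefully reversed loss curve.

```python
import torch

def misleading_scores(logits, scale=0.1):
    """Post-process output scores to confound SQAs (simplified stand-in;
    not the paper's AAA method).

    The returned scores keep the same argmax, so clean accuracy is
    untouched, but their fine-grained values no longer reflect the true
    loss landscape that score-based attackers try to descend.
    """
    noisy = logits + scale * torch.randn_like(logits)
    # Re-impose the original top-1 class so the prediction is unchanged.
    top1 = logits.argmax(dim=-1, keepdim=True)
    bump = noisy.max(dim=-1, keepdim=True).values - noisy.gather(-1, top1) + 1e-3
    return noisy.scatter_add(-1, top1, bump)
```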

Subspace Adversarial Training

1 code implementation • CVPR 2022 • Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, Xiaolin Huang

Single-step adversarial training (AT) has received wide attention as it proved to be both efficient and robust.
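Single-step AT crafts the adversarial example with one gradient-sign step (FGSM) and trains on it. A minimal sketch of that baseline step, assuming inputs in [0, 1]; the paper's contribution, constraining AT to a low-dimensional subspace to avoid catastrophic overfitting, is not shown.

```python
import torch

def fgsm_at_step(model, loss_fn, optimizer, x, y, eps=8/255):
    """One single-step (FGSM) adversarial training step (illustrative baseline)."""
    # Craft the adversarial example with a single gradient-sign step.
    x = x.detach().clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

    # Standard training step on the adversarial example.
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```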

Towards Robust Neural Networks via Orthogonal Diversity

2 code implementations • 23 Oct 2020 • Kun Fang, Qinghua Tao, Yingwen Wu, Tao Li, Jia Cai, Feipeng Cai, Xiaolin Huang, Jie Yang

In this way, the proposed DIO augments the model and enhances the robustness of the DNN itself, as the learned features can be corrected by these mutually orthogonal paths.

Adversarial Robustness • Data Augmentation

Learn Robust Features via Orthogonal Multi-Path

no code implementations • 28 Sep 2020 • Kun Fang, Xiaolin Huang, Yingwen Wu, Tao Li, Jie Yang

To defend against adversarial attacks, we design a block containing multiple paths to learn robust features, with the parameters of these paths required to be mutually orthogonal.
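One common way to encourage such orthogonality (in this paper and the DIO follow-up above) is a penalty on the pairwise cosine similarities between the paths' parameter vectors. A minimal sketch, assuming each path's weights can be flattened into a vector; the exact constraint used in the papers may differ.

```python
import torch

def orthogonality_penalty(path_weights):
    """Penalty pushing different paths' parameters toward mutual
    orthogonality (illustrative sketch of the idea).

    The loss is the sum of squared cosine similarities between every
    pair of distinct paths, i.e., the off-diagonal of the Gram matrix.
    """
    W = torch.stack([w.flatten() for w in path_weights])
    W = torch.nn.functional.normalize(W, dim=1)
    gram = W @ W.t()  # pairwise cosine similarities
    off_diag = gram - torch.eye(len(path_weights))
    return off_diag.pow(2).sum()

# Example: three paths whose weights we want mutually orthogonal.
paths = [torch.randn(64, requires_grad=True) for _ in range(3)]
penalty = orthogonality_penalty(paths)  # add to the task loss
```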
