Search Results for author: Ziquan Liu

Found 7 papers, 1 paper with code

Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization

no code implementations • 11 Oct 2022 • Ziquan Liu, Antoni B. Chan

Our empirical study on feedforward DNNs demonstrates that the proposed effective margin regularization (EMR) learns large effective margins and boosts adversarial robustness in both standard and adversarial training.

Adversarial Defense Adversarial Robustness
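The snippet above only names the effective-margin idea, so here is a minimal, hedged sketch for the linear case (not the paper's actual algorithm): the effective margin of a sample is the logit margin normalized by the weight norm, i.e., the signed distance to the decision boundary. Unlike the raw logit margin, it cannot be inflated by simply rescaling the weights, which is the property a regularizer targeting it exploits. The function name `effective_margin` is illustrative.

```python
import numpy as np

def effective_margin(w, b, x, y):
    """Signed distance from x to the decision boundary of f(x) = w.x + b.

    y is the label in {-1, +1}. Unlike the raw margin y * f(x), this value
    is invariant to rescaling (w, b), so it cannot be trivially inflated.
    """
    return y * (w @ x + b) / np.linalg.norm(w)

w, b = np.array([3.0, 4.0]), 1.0
x, y = np.array([1.0, 1.0]), 1
print(effective_margin(w, b, x, y))            # 1.6
print(effective_margin(10 * w, 10 * b, x, y))  # unchanged, though the raw margin grew 10x
```

The second call shows why the raw logit margin is a poor robustness proxy: scaling the classifier grows it arbitrarily while the decision boundary, and hence the effective margin, stays put.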

An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation

no code implementations • 25 May 2022 • Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Rong Jin, Xiangyang Ji, Antoni B. Chan

With our empirical results obtained from 1,330 models, we provide the following main observations: 1) ERM combined with data augmentation can achieve state-of-the-art performance if we choose a proper pre-trained model respecting the data property; 2) specialized algorithms further improve robustness on top of ERM when handling a specific type of distribution shift, e.g., GroupDRO for spurious correlation and CORAL for large-scale out-of-distribution data; 3) comparing different pre-training modes, architectures and data sizes, we provide novel observations about pre-training under distribution shift, which sheds light on designing or selecting a pre-training strategy for different kinds of distribution shifts.

Data Augmentation

Improved Fine-Tuning by Better Leveraging Pre-Training Data

no code implementations • 24 Nov 2021 • Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Xiangyang Ji, Antoni Chan, Rong Jin

The generalization result of using pre-training data shows that the excess risk bound on a target task can be improved when the appropriate pre-training data is included in fine-tuning.

Image Classification Learning Theory

A Generalized Loss Function for Crowd Counting and Localization

no code implementations • CVPR 2021 • Jia Wan, Ziquan Liu, Antoni B. Chan

In this paper, we investigate learning the density map representation through an unbalanced optimal transport problem, and propose a generalized loss function to learn density maps for crowd counting and localization.

Crowd Counting
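The snippet describes learning density maps through an unbalanced optimal transport problem. As a hedged sketch, the following shows the balanced, entropy-regularized building block (Sinkhorn iterations) that such losses are commonly built on; the paper's generalized, unbalanced formulation additionally relaxes the marginal constraints and is not reproduced here. `sinkhorn_cost` is an illustrative name.

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, iters=200):
    """Entropy-regularized OT cost between histograms a and b with cost matrix C.

    Balanced Sinkhorn iterations; a stand-in for the unbalanced OT loss used
    in the paper, which relaxes the constraint that a and b have equal mass.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan
    return float(np.sum(P * C))
```

Matching a predicted density to ground-truth mass at the same locations yields a near-zero cost, while misplaced mass is charged by the ground distance in C — which is what makes an OT-style loss suitable for both counting and localization.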

Weight Rescaling: Effective and Robust Regularization for Deep Neural Networks with Batch Normalization

no code implementations • 6 Feb 2021 • Ziquan Liu, Yufei Cui, Jia Wan, Yu Mao, Antoni B. Chan

On the one hand, when a non-adaptive learning rate (e.g., SGD with momentum) is used, the effective learning rate continues to increase even after the initial training stage, which leads to an overfitting effect in many neural architectures.

Crowd Counting Image Classification +3
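A hedged sketch of the scale-invariance property this observation rests on: with batch normalization, rescaling the incoming weights does not change the layer output, so a weight-rescaling step can reset the weight norm — and with it the effective learning rate, which scales like lr/||W||² — without changing the network function. `bn_linear` and `rescale` are illustrative names, not the paper's code.

```python
import numpy as np

def bn_linear(W, X, eps=1e-8):
    # linear layer followed by batch normalization over the batch axis;
    # the output is (up to eps) invariant to rescaling W by any c > 0
    Z = X @ W.T
    return (Z - Z.mean(axis=0)) / (Z.std(axis=0) + eps)

def rescale(W, target_norm=1.0):
    # weight rescaling: project W back to a fixed Frobenius norm
    return W * (target_norm / np.linalg.norm(W))
```

Because bn_linear(W, X) ≈ bn_linear(c * W, X), applying rescale(W) leaves the function unchanged while resetting ||W||, which is how a rescaling scheme can keep the effective learning rate from drifting during training.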

Variational Nested Dropout

1 code implementation • CVPR 2021 • Yufei Cui, Yu Mao, Ziquan Liu, Qiao Li, Antoni B. Chan, Xue Liu, Tei-Wei Kuo, Chun Jason Xue

Nested dropout is a variant of the dropout operation that orders network parameters or features by a pre-defined importance during training.

Representation Learning
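For reference, classic (non-variational) nested dropout samples a cutoff index and zeros every unit beyond it, so surviving units always form a prefix — that is what imposes the importance ordering; the paper's variational version instead learns a distribution over that index. The sketch below uses a geometric distribution as an assumption, not the paper's choice.

```python
import numpy as np

def nested_dropout(h, p=0.2, rng=None):
    # sample a cutoff index k and zero units k, k+1, ..., so that the
    # surviving units always form a prefix (an importance ordering)
    if rng is None:
        rng = np.random.default_rng()
    d = h.shape[-1]
    k = min(rng.geometric(p), d)  # keep units 0 .. k-1
    out = h.copy()
    out[..., k:] = 0.0
    return out
```

Because earlier units survive every draw that later units survive, training pressure pushes the most important information into the leading units, which is what lets the learned representation be truncated gracefully.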

Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations

no code implementations • ICML Workshop AML 2021 • Ziquan Liu, Yufei Cui, Antoni B. Chan

The derived regularizer is an upper bound on the input gradient of the network, so minimizing the improved regularizer also benefits adversarial robustness.

Adversarial Robustness
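The invariance in question: a bias-free ReLU network computes the same function if one layer's weights are scaled by c and the next layer's by 1/c, yet plain L2 weight decay changes under that shift. A minimal numpy demonstration of the property (illustrative only — the paper's actual regularizer, which upper-bounds the input gradient, is not reproduced here):

```python
import numpy as np

def net(W1, W2, x):
    # two-layer bias-free ReLU network
    return W2 @ np.maximum(W1 @ x, 0.0)

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 3)), rng.normal(size=(2, 5))
x, c = rng.normal(size=3), 7.0

# same function under the weight scale shift (ReLU is positively homogeneous)
assert np.allclose(net(c * W1, W2 / c, x), net(W1, W2, x))

# plain L2 weight decay changes under the shift...
l2 = np.sum(W1**2) + np.sum(W2**2)
l2_shifted = np.sum((c * W1)**2) + np.sum((W2 / c)**2)
assert not np.isclose(l2, l2_shifted)

# ...while a norm-product penalty is shift-invariant
prod = np.linalg.norm(W1) * np.linalg.norm(W2)
prod_shifted = np.linalg.norm(c * W1) * np.linalg.norm(W2 / c)
assert np.isclose(prod, prod_shifted)
```

This is why a scale-shift-invariant penalty regularizes the function itself rather than one arbitrary parameterization of it: L2 can be gamed by shifting norm between layers without changing the network's predictions.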
