Search Results for author: Ziquan Liu

Found 12 papers, 4 papers with code

Borrowing Treasures from Neighbors: In-Context Learning for Multimodal Learning with Missing Modalities and Data Scarcity

1 code implementation • 14 Mar 2024 Zhuo Zhi, Ziquan Liu, Moe Elbadawi, Adam Daneshmend, Mine Orlu, Abdul Basit, Andreas Demosthenous, Miguel Rodrigues

The proposed data-dependent framework exhibits a higher degree of sample efficiency and is empirically demonstrated to enhance the classification model's performance on both full- and missing-modality data in the low-data regime across various multimodal learning tasks.

In-Context Learning

PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks

no code implementations • 4 Feb 2024 Ziquan Liu, Zhuo Zhi, Ilija Bogunovic, Carsten Gerner-Beuerle, Miguel Rodrigues

Our paper offers a new approach to certifying the performance of machine learning models in the presence of adversarial attacks, with population-level risk guarantees.

Adversarial Attack Bayesian Optimization

Cultural Alignment in Large Language Models: An Explanatory Analysis Based on Hofstede's Cultural Dimensions

no code implementations • 25 Aug 2023 Reem I. Masoud, Ziquan Liu, Martin Ferianc, Philip Treleaven, Miguel Rodrigues

The deployment of large language models (LLMs) raises concerns about their cultural misalignment and its potential ramifications for individuals from different cultural backgrounds.

DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks

1 code implementation • CVPR 2023 Qiangqiang Wu, Tianyu Yang, Ziquan Liu, Baoyuan Wu, Ying Shan, Antoni B. Chan

However, we find that this simple baseline heavily relies on spatial cues while ignoring temporal relations for frame reconstruction, thus leading to sub-optimal temporal matching representations for VOT and VOS.

Ranked #1 on Visual Object Tracking on TrackingNet (AUC metric)

Semantic Segmentation Video Object Segmentation +2

TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization

1 code implementation • CVPR 2023 Ziquan Liu, Yi Xu, Xiangyang Ji, Antoni B. Chan

To better exploit the potential of pre-trained models in adversarial robustness, this paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.

Adversarial Robustness Image Classification

Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization

no code implementations • 11 Oct 2022 Ziquan Liu, Antoni B. Chan

Our empirical study on feedforward DNNs demonstrates that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training.

Adversarial Defense Adversarial Robustness
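For context on the margin quantity above: for a linear classifier, the effective margin reduces to the familiar signed distance from an input to the nearest decision boundary. The sketch below computes that linear case only; `linear_effective_margin` is a hypothetical helper, and the paper's EMR generalizes this quantity to deep networks rather than using this code.

```python
import numpy as np

def linear_effective_margin(W, x, y):
    """Signed distance from x to the nearest decision boundary of the
    linear classifier f(x) = Wx, with true class y.

    For each competing class j, the margin against j is
    (w_y - w_j) . x / ||w_y - w_j||; the effective margin is the minimum.
    """
    diffs = W[y] - np.delete(W, y, axis=0)   # (C-1, d) weight differences
    scores = diffs @ x                        # logit gaps vs. each class j
    return np.min(scores / np.linalg.norm(diffs, axis=1))

W = np.array([[1.0, 0.0],
              [0.0, 1.0]])                    # two-class linear model
x = np.array([2.0, 0.0])
m = linear_effective_margin(W, x, y=0)        # = 2 / sqrt(2)
```

A regularizer in this spirit would encourage `m` to stay large; the paper's version replaces the weight-difference norm with an input-gradient norm so the idea carries over to nonlinear networks.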

An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation

no code implementations • 25 May 2022 Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Rong Jin, Xiangyang Ji, Antoni B. Chan

With our empirical results obtained from 1,330 models, we provide the following main observations: 1) ERM combined with data augmentation can achieve state-of-the-art performance if we choose a proper pre-trained model that respects the data property; 2) specialized algorithms further improve robustness on top of ERM when handling a specific type of distribution shift, e.g., GroupDRO for spurious correlation and CORAL for large-scale out-of-distribution data; 3) comparing different pre-training modes, architectures, and data sizes, we provide novel observations about the effect of pre-training on distribution-shift robustness, which sheds light on designing or selecting pre-training strategies for different kinds of distribution shifts.

Data Augmentation

Improved Fine-Tuning by Better Leveraging Pre-Training Data

no code implementations • 24 Nov 2021 Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Xiangyang Ji, Antoni Chan, Rong Jin

The generalization result of using pre-training data shows that the excess risk bound on a target task can be improved when appropriate pre-training data is included in fine-tuning.

Image Classification Learning Theory

A Generalized Loss Function for Crowd Counting and Localization

no code implementations • CVPR 2021 Jia Wan, Ziquan Liu, Antoni B. Chan

In this paper, we investigate learning the density map representation through an unbalanced optimal transport problem, and propose a generalized loss function to learn density maps for crowd counting and localization.

Crowd Counting

Weight Rescaling: Effective and Robust Regularization for Deep Neural Networks with Batch Normalization

no code implementations • 6 Feb 2021 Ziquan Liu, Yufei Cui, Jia Wan, Yu Mao, Antoni B. Chan

On the one hand, when a non-adaptive optimizer, e.g., SGD with momentum, is used, the effective learning rate continues to increase even after the initial training stage, which leads to an overfitting effect in many neural architectures.

Crowd Counting Image Classification +3
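The "effective learning rate" above comes from a standard observation about batch normalization: rescaling a BN-normalized layer's weights w → c·w leaves the output unchanged, so the gradient shrinks by a factor 1/c and the effective step size along the weight direction behaves like lr / ||w||². A minimal numerical sketch of that derivation (not the authors' code; `effective_lr` is a hypothetical helper):

```python
import numpy as np

def effective_lr(lr, w):
    """Effective step size for a scale-invariant (BN-normalized) layer.

    Since the loss is invariant to w -> c*w, grad(w) scales as 1/||w||,
    and the update relative to the weight scale is lr / ||w||^2.
    """
    return lr / np.dot(w, w)

w = np.array([3.0, 4.0])          # ||w||^2 = 25
step = effective_lr(0.1, w)       # 0.1 / 25 = 0.004
```

Weight decay (or the paper's weight rescaling) keeps ||w|| from shrinking uncontrollably, which is why it acts as an implicit learning-rate control for BN networks.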

Variational Nested Dropout

1 code implementation • CVPR 2021 Yufei Cui, Yu Mao, Ziquan Liu, Qiao Li, Antoni B. Chan, Xue Liu, Tei-Wei Kuo, Chun Jason Xue

Nested dropout is a variant of the dropout operation that orders network parameters or features according to a pre-defined importance during training.

Representation Learning
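As background, classic nested dropout induces the importance ordering by dropping a random suffix of units, so earlier units are forced to carry the most information. A minimal sketch of that mechanism (illustrative only; the paper's variational formulation is more involved, and `nested_dropout` is a hypothetical helper):

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_dropout(features, p=0.2):
    """Zero out a random suffix of feature units.

    A cut-off index b is sampled from a geometric distribution; units
    0..b-1 are kept and all later units are dropped, so unit i is kept
    more often than unit i+1 and learns a more important feature.
    """
    d = features.shape[-1]
    b = min(rng.geometric(p), d)   # geometric support starts at 1
    mask = np.zeros(d)
    mask[:b] = 1.0                 # keep the prefix, drop the suffix
    return features * mask

x = np.ones(8)
y = nested_dropout(x)              # e.g. [1, 1, 1, 0, 0, 0, 0, 0]
```

Because the kept units always form a prefix, truncating the representation after training degrades it gracefully, which is what makes the ordering useful.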

Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations

no code implementations • ICML Workshop AML 2021 Ziquan Liu, Yufei Cui, Antoni B. Chan

The derived regularizer is an upper bound on the input gradient of the network, so minimizing the improved regularizer also benefits adversarial robustness.

Adversarial Robustness
