Search Results for author: Wanqi Zhou

Found 6 papers, 6 papers with code

PromptTA: Prompt-driven Text Adapter for Source-free Domain Generalization

1 code implementation · 21 Sep 2024 · Haoran Zhang, Shuanghao Bai, Wanqi Zhou, Jingwen Fu, Badong Chen

In this work, we propose the Prompt-Driven Text Adapter (PromptTA) method, which is designed to better capture the distribution of style features and employs resampling to ensure thorough coverage of domain knowledge.

Source-free Domain Generalization

Jacobian Regularizer-based Neural Granger Causality

1 code implementation · 14 May 2024 · Wanqi Zhou, Shuanghao Bai, Shujian Yu, Qibin Zhao, Badong Chen

With the advancement of neural networks, diverse methods for neural Granger causality have emerged, which demonstrate proficiency in handling complex data and nonlinear relationships.

Revisiting the Adversarial Robustness of Vision Language Models: a Multimodal Perspective

1 code implementation · 30 Apr 2024 · Wanqi Zhou, Shuanghao Bai, Danilo P. Mandic, Qibin Zhao, Badong Chen

To this end, this work presents the first comprehensive study on improving the adversarial robustness of VLMs against attacks targeting image, text, and multimodal inputs.

Adversarial Defense · Adversarial Robustness +1

Soft Prompt Generation for Domain Generalization

1 code implementation · 30 Apr 2024 · Shuanghao Bai, Yuedi Zhang, Wanqi Zhou, Zhirong Luan, Badong Chen

During the inference phase, the generator of the generative model is employed to obtain instance-specific soft prompts for the unseen target domain.

Diversity · Domain Generalization

Improving Cross-domain Few-shot Classification with Multilayer Perceptron

1 code implementation · 15 Dec 2023 · Shuanghao Bai, Wanqi Zhou, Zhirong Luan, Donglin Wang, Badong Chen

The multilayer perceptron (MLP) has shown its capability to learn transferable representations in various downstream tasks, such as unsupervised image classification and supervised concept generalization.

Classification · Cross-Domain Few-Shot +1

Prompt-based Distribution Alignment for Unsupervised Domain Adaptation

1 code implementation · 15 Dec 2023 · Shuanghao Bai, Min Zhang, Wanqi Zhou, Siteng Huang, Zhirong Luan, Donglin Wang, Badong Chen

Therefore, in this paper, we first experimentally demonstrate that unsupervised-trained VLMs can significantly reduce the distribution discrepancy between source and target domains, thereby improving the performance of UDA.

Prompt Engineering · Unsupervised Domain Adaptation