Search Results for author: Xiaoyue Mi

Found 6 papers, 3 papers with code

U-VAP: User-specified Visual Appearance Personalization via Decoupled Self Augmentation

1 code implementation • 29 Mar 2024 • You Wu, Kean Liu, Xiaoyue Mi, Fan Tang, Juan Cao, Jintao Li

Extensive experiments on various kinds of visual attributes, compared against SOTA personalization methods, show that the proposed method can mimic the target visual appearance in novel contexts, improving the controllability and flexibility of personalization.

Attribute Disentanglement +1

Adversarial Robust Memory-Based Continual Learner

no code implementations • 29 Nov 2023 • Xiaoyue Mi, Fan Tang, Zonghan Yang, Danding Wang, Juan Cao, Peng Li, Yang Liu

Despite the remarkable advances in continual learning, the adversarial vulnerability of such methods has not been fully discussed.

Adversarial Robustness • Continual Learning

Topology-Preserving Adversarial Training

no code implementations • 29 Nov 2023 • Xiaoyue Mi, Fan Tang, Yepeng Weng, Danding Wang, Juan Cao, Sheng Tang, Peng Li, Yang Liu

Despite its effectiveness in improving the robustness of neural networks, adversarial training suffers from the natural accuracy degradation problem, i.e., accuracy on natural samples drops significantly.

Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models

1 code implementation • 25 Aug 2023 • Chi Chen, Ruoyu Qin, Fuwen Luo, Xiaoyue Mi, Peng Li, Maosong Sun, Yang Liu

However, existing visual instruction tuning methods only utilize image-language instruction data to align the language and image modalities, lacking fine-grained cross-modal alignment.

Position
