Search Results for author: Yuqi Lin

Found 9 papers, 5 papers with code

SAMRefiner: Taming Segment Anything Model for Universal Mask Refinement

1 code implementation • 10 Feb 2025 • Yuqi Lin, Hengjia Li, Wenqi Shao, Zheng Yang, Jun Zhao, Xiaofei He, Ping Luo, Kaipeng Zhang

In contrast to prior refinement techniques that are tailored to specific models or tasks in a closed-world manner, we propose SAMRefiner, a universal and efficient approach that adapts SAM to the mask refinement task.

Semantic Segmentation

Position: Towards Implicit Prompt For Text-To-Image Models

no code implementations • 4 Mar 2024 • Yue Yang, Yuqi Lin, Hong Liu, Wenqi Shao, Runjian Chen, Hailong Shang, Yu Wang, Yu Qiao, Kaipeng Zhang, Ping Luo

We call for increased attention within the T2I community to the potential and risks of implicit prompts, and for further investigation into their capabilities and impacts, advocating a balanced approach that harnesses their benefits while mitigating their risks.

Position

UniHDA: A Unified and Versatile Framework for Multi-Modal Hybrid Domain Adaptation

no code implementations • 23 Jan 2024 • Hengjia Li, Yang Liu, Yuqi Lin, Zhanwei Zhang, Yibo Zhao, Weihang Pan, Tu Zheng, Zheng Yang, Yuchun Jiang, Boxi Wu, Deng Cai

In this paper, we propose UniHDA, a unified and versatile framework for generative hybrid domain adaptation with multi-modal references from multiple domains.

Attribute · Diversity · +1

Few-shot Hybrid Domain Adaptation of Image Generators

1 code implementation • 30 Oct 2023 • Hengjia Li, Yang Liu, Linxuan Xia, Yuqi Lin, Tu Zheng, Zheng Yang, Wenxiao Wang, Xiaohui Zhong, Xiaobo Ren, Xiaofei He

Concretely, the distance loss blends the attributes of all target domains by reducing the distances from generated images to all target subspaces (a minimal sketch of this loss follows this entry).

Domain Adaptation · Semantic Similarity · +1
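
The distance loss described in this entry can be given a short illustrative form. The Python sketch below is a hedged reading, not the authors' implementation: the helper names (subspace_distance, distance_loss) are hypothetical, and each target subspace is assumed to be spanned by embeddings (e.g., CLIP features) of that domain's few-shot reference images.

import torch

def subspace_distance(z, basis):
    # Distance from embedding z (shape (d,)) to the subspace spanned by
    # the rows of `basis` (shape (k, d)) -- assumed here to be embeddings
    # of one target domain's few-shot reference images.
    q, _ = torch.linalg.qr(basis.T)      # (d, k), orthonormal columns
    proj = q @ (q.T @ z)                 # orthogonal projection onto the subspace
    return torch.norm(z - proj)

def distance_loss(gen_embeddings, target_bases):
    # Blend the attributes of all target domains by pulling each
    # generated image's embedding toward every target subspace.
    total = 0.0
    for z in gen_embeddings:             # iterable of (d,) embeddings
        for basis in target_bases:       # one (k, d) basis per target domain
            total = total + subspace_distance(z, basis)
    return total / (len(gen_embeddings) * len(target_bases))

Minimizing this pulls each generated image toward every target subspace at once, which is one way to read the "blending" of target-domain attributes.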

Self-supervised and Weakly Supervised Contrastive Learning for Frame-wise Action Representations

no code implementations • 6 Dec 2022 • Minghao Chen, Renbo Tu, Chenxi Huang, Yuqi Lin, Boxi Wu, Deng Cai

In this paper, we introduce a new framework of contrastive action representation learning (CARL) to learn frame-wise action representations in a self-supervised or weakly supervised manner, especially for long videos (a generic sketch of the objective follows this entry).

Action Classification · Contrastive Learning · +4
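
The frame-wise contrastive objective at the heart of CARL can be illustrated with a generic InfoNCE-style loss. The sketch below is an assumption-laden simplification: the function name and shapes are hypothetical, and the actual framework's view sampling, augmentations, and weakly supervised variant are more involved.

import torch
import torch.nn.functional as F

def frame_nce_loss(z1, z2, temperature=0.1):
    # z1, z2: (T, d) frame embeddings of two augmented views of the same
    # video; frame t in view 1 is the positive for frame t in view 2.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature     # (T, T) cosine-similarity logits
    targets = torch.arange(z1.size(0))   # matching frame index is the positive
    return F.cross_entropy(logits, targets)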
