no code implementations • 27 Feb 2024 • Yuanhang Yang, Shiyi Qi, Wenchao Gu, Chaozheng Wang, Cuiyun Gao, Zenglin Xu
To address this issue, we present \tool, a novel MoE designed to enhance both the efficacy and efficiency of sparse MoE models.
no code implementations • 7 Dec 2023 • Zongjie Li, Chaozheng Wang, Chaowei Liu, Pingchuan Ma, Daoyuan Wu, Shuai Wang, Cuiyun Gao
With recent advancements in Large Multimodal Models (LMMs) across various domains, a novel prompting method called visual referring prompting has emerged, showing significant potential in enhancing human-computer interaction within multimodal systems.
no code implementations • 29 Sep 2023 • Zongjie Li, Chaozheng Wang, Pingchuan Ma, Daoyuan Wu, Shuai Wang, Cuiyun Gao, Yang Liu
Specifically, PORTIA splits the answers into multiple segments, aligns similar content across candidate answers, and then merges them back into a single prompt for evaluation by LLMs.
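The split/align/merge procedure described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the segment splitter, the Jaccard-overlap similarity used for alignment, and all function names are hypothetical stand-ins for whatever PORTIA actually uses.

```python
def split_into_segments(answer: str, k: int) -> list[str]:
    """Split an answer into k roughly equal word-count segments."""
    words = answer.split()
    size = -(-len(words) // k)  # ceiling division
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity; a stand-in for PORTIA's alignment criterion."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def align(segs_a: list[str], segs_b: list[str]) -> list[tuple[str, str]]:
    """Greedily pair each segment of answer A with its most similar
    unused segment of answer B (assumes equal segment counts)."""
    pairs, used = [], set()
    for sa in segs_a:
        best = max((j for j in range(len(segs_b)) if j not in used),
                   key=lambda j: jaccard(sa, segs_b[j]))
        used.add(best)
        pairs.append((sa, segs_b[best]))
    return pairs

def merge_prompt(question: str, pairs: list[tuple[str, str]]) -> str:
    """Merge the aligned segment pairs back into one evaluation prompt."""
    lines = [f"Question: {question}",
             "Compare the answers segment by segment:"]
    for i, (sa, sb) in enumerate(pairs, 1):
        lines.append(f"Segment {i} - Answer A: {sa}")
        lines.append(f"Segment {i} - Answer B: {sb}")
    lines.append("Which answer is better overall, A or B?")
    return "\n".join(lines)
```

Interleaving aligned segments this way presents equivalent content from both candidates at matching positions, which is the intuition behind mitigating the position bias of LLM evaluators.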
1 code implementation • 24 Jul 2022 • Chaozheng Wang, Yuanhang Yang, Cuiyun Gao, Yun Peng, Hongyu Zhang, Michael R. Lyu
Besides, the performance of fine-tuning strongly relies on the amount of downstream data, while in practice, the scenarios with scarce data are common.
no code implementations • 9 Nov 2021 • Chaozheng Wang, Shuzheng Gao, Cuiyun Gao, Pengyun Wang, Wenjie Pei, Lujia Pan, Zenglin Xu
Real-world data usually exhibit long-tailed distributions.
2 code implementations • ICML 2020 • Xiao Li, Chenghua Lin, Ruizhe Li, Chaozheng Wang, Frank Guerin
We demonstrate the utility of our method for attribute manipulation in autoencoders trained across varied domains, using both human evaluation and automated methods.
Ranked #7 on Image Generation on CelebA 256x256 (FID metric)