1 code implementation • 4 Sep 2024 • Junwei Liu, Kaixin Wang, Yixuan Chen, Xin Peng, Zhenpeng Chen, Lingming Zhang, Yiling Lou
Recent advances in Large Language Models (LLMs) have shaped a new paradigm of AI agents, i.e., LLM-based agents.
1 code implementation • 26 Jun 2024 • Xiaoshuang Huang, Haifeng Huang, Lingdong Shen, Yehui Yang, Fangxin Shang, Junwei Liu, Jia Liu
Additionally, we introduce BiRD, a Refer-and-Ground Multimodal Large Language Model for Biomedicine, built on this dataset via multi-task instruction learning.
1 code implementation • 1 Dec 2023 • Fangxin Shang, Jie Fu, Yehui Yang, Haifeng Huang, Junwei Liu, Lei Ma
Large-scale public datasets with high-quality annotations are rarely available for intelligent medical imaging research, due to data privacy concerns and the cost of annotations.
1 code implementation • 3 Aug 2023 • Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, Yiling Lou
Third, we find that generating the entire class all at once (i.e., the holistic generation strategy) is the best strategy only for GPT-4 and GPT-3.5, while method-by-method generation (i.e., the incremental and compositional strategies) works better for the other models, which have limited ability to understand long instructions and utilize intermediate information.
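A minimal sketch of how the holistic and incremental strategies might be wired up, assuming a `generate(prompt) -> str` wrapper around some LLM API; the function names, prompt wording, and accumulation scheme are illustrative assumptions, not the benchmark's actual harness.

```python
from typing import Callable, List

def generate_holistic(generate: Callable[[str], str], skeleton: str) -> str:
    """Holistic strategy: a single prompt asks for the whole class at once."""
    return generate(f"Complete the following Python class in full:\n\n{skeleton}")

def generate_incremental(generate: Callable[[str], str], skeleton: str,
                         method_signatures: List[str]) -> str:
    """Incremental strategy: request one method at a time, feeding back
    everything generated so far as context for the next request."""
    class_so_far = skeleton
    for sig in method_signatures:
        body = generate(
            f"Given this partially completed class:\n\n{class_so_far}\n\n"
            f"Implement only the method `{sig}`."
        )
        class_so_far += "\n" + body  # accumulate context method by method
    return class_so_far
```

Under this framing, the holistic call stresses long-instruction understanding, while the incremental loop trades prompt length for repeated calls that carry the intermediate methods as context.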
no code implementations • 2 Aug 2023 • Zhiqiang Yuan, Junwei Liu, Qiancheng Zi, Mingwei Liu, Xin Peng, Yiling Lou
First, in the zero-shot setting, instruction-tuned LLMs are highly competitive on code comprehension and generation tasks, sometimes even outperforming small SOTA models fine-tuned specifically for each downstream task.
1 code implementation • 3 Dec 2022 • Wenzhe Jia, Yuan Cao, Junwei Liu, Jie Gui
When a new query arrives, only the binary codes of the corresponding potential neighbors are updated.
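An illustrative sketch of that selective-update idea: when a new query arrives, only database items whose codes fall near the query's code (its potential neighbors) have their binary codes recomputed. The sign-of-projection hash, the Hamming-radius neighbor rule, and the perturbed projection standing in for an online learning step are all simplifying assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, bits, n = 64, 32, 1000
W = rng.standard_normal((d, bits))       # current hash projection matrix
X = rng.standard_normal((n, d))          # database features
codes = (X @ W > 0).astype(np.uint8)     # current binary codes of the database

def handle_query(x, W_updated, radius=4):
    q = (x @ W_updated > 0).astype(np.uint8)
    # Potential neighbors: items within a small Hamming distance of the query.
    dists = np.count_nonzero(codes ^ q, axis=1)
    neighbors = np.flatnonzero(dists <= radius)
    # Re-hash only those items under the newly learned projection;
    # all other binary codes are left untouched.
    codes[neighbors] = (X[neighbors] @ W_updated > 0).astype(np.uint8)
    return neighbors

W_new = W + 0.05 * rng.standard_normal(W.shape)  # stand-in for an online update
updated = handle_query(rng.standard_normal(d), W_new)
```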
1 code implementation • 31 Jul 2020 • Xu Sun, Huihui Fang, Yehui Yang, Dongwei Zhu, Lei Wang, Junwei Liu, Yanwu Xu
In this paper, we propose two new data augmentation modules, namely, channel-wise random Gamma correction and channel-wise random vessel augmentation.
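A minimal sketch of the first module, channel-wise random Gamma correction, assuming input images are H x W x C float arrays scaled to [0, 1]; the gamma range is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def channelwise_random_gamma(img, low=0.5, high=1.5, rng=None):
    """Apply an independent random gamma correction to each channel.

    Assumes `img` is an H x W x C float array in [0, 1]. The (low, high)
    gamma range is an illustrative choice.
    """
    rng = rng or np.random.default_rng()
    gammas = rng.uniform(low, high, size=img.shape[-1])  # one gamma per channel
    return np.clip(img ** gammas, 0.0, 1.0)             # broadcast over channels

# Example: augment a fundus-style RGB image.
image = np.random.default_rng(1).random((256, 256, 3))
augmented = channelwise_random_gamma(image)
```

Sampling a separate gamma per channel perturbs color balance as well as brightness, which is what distinguishes this augmentation from applying one global gamma to the whole image.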