no code implementations • 4 Mar 2024 • Nuwa Xi, Yuhan Chen, Sendong Zhao, Haochun Wang, Bing Qin, Ting Liu
Chain-of-Thought (CoT) is a critical emergent ability of LLMs, especially when it comes to logical reasoning.
no code implementations • 29 Jan 2024 • Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu
Automatic diagnosis is a significant application of AI in healthcare, where diagnoses are generated from patients' symptom descriptions.
1 code implementation • 11 Sep 2023 • Yuhan Chen, Nuwa Xi, Yanrui Du, Haochun Wang, Jianyu Chen, Sendong Zhao, Bing Qin
Furthermore, our method shows a sustained improvement as the volume of pseudo data increases, revealing the great potential of pseudo data in advancing low-resource cross-modal molecule discovery.
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu
To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.
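The general idea can be illustrated with a short sketch: retrieve entries from a structured medical knowledge base and condition the LLM's response on them. The knowledge base, retrieval function, and prompt format below are hypothetical placeholders, not the paper's actual implementation.

```python
# Hedged sketch of knowledge-grounded prompting: look up structured medical
# knowledge and prepend it to the model's prompt. All data and helpers here
# are illustrative assumptions, not the knowledge-tuning codebase.
KNOWLEDGE_BASE = {
    "migraine": "Migraine: recurrent headache, often with nausea and photophobia.",
    "influenza": "Influenza: fever, cough, myalgia; caused by influenza viruses.",
}

def retrieve_knowledge(query: str) -> list[str]:
    """Naive keyword lookup standing in for a real knowledge-base retriever."""
    return [entry for term, entry in KNOWLEDGE_BASE.items() if term in query.lower()]

def build_prompt(question: str) -> str:
    # Condition the LLM's answer on the retrieved structured knowledge.
    knowledge = "\n".join(retrieve_knowledge(question)) or "No matching entry."
    return (f"Medical knowledge:\n{knowledge}\n\n"
            f"Question: {question}\nAnswer based on the knowledge above:")

print(build_prompt("What are typical symptoms of migraine?"))
```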
1 code implementation • 8 Sep 2023 • Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu
Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.
no code implementations • 6 Jul 2023 • Nuwa Xi, Sendong Zhao, Haochun Wang, Chi Liu, Bing Qin, Ting Liu
In this paper, we propose fMRI2text, the first open-vocabulary task aiming to bridge fMRI time series and human language.
1 code implementation • 14 Apr 2023 • Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu
Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.
no code implementations • 12 Apr 2023 • Chi Liu, Haochun Wang, Nuwa Xi, Sendong Zhao, Bing Qin
As a novel approach to adapting pre-trained models, prompt tuning freezes the model's parameters on downstream tasks while inserting trainable embeddings into the inputs at the first layer.
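For reference, a minimal sketch of this setup in a HuggingFace-style PyTorch model is shown below; the class name, prompt length, and initialization are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of prompt tuning: freeze the pre-trained model and prepend
# a small set of trainable prompt embeddings to the first-layer inputs.
import torch
import torch.nn as nn

class PromptTunedModel(nn.Module):
    def __init__(self, pretrained_model, prompt_length=20):
        super().__init__()
        self.model = pretrained_model
        for p in self.model.parameters():      # freeze all pre-trained weights
            p.requires_grad = False
        hidden = self.model.config.hidden_size
        # Only these prompt embeddings receive gradients during tuning.
        self.prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        embeds = self.model.get_input_embeddings()(input_ids)
        batch = embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, embeds], dim=1)
        prompt_mask = torch.ones(batch, self.prompt.size(0),
                                 device=attention_mask.device)
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.model(inputs_embeds=inputs_embeds, attention_mask=attention_mask)
```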
1 code implementation • COLING 2022 • Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu
Prompt-based fine-tuning of pre-trained models has proven effective for many natural language processing tasks under few-shot settings in the general domain.
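To make the setup concrete, here is a minimal cloze-style prompting sketch with a verbalizer over a masked language model; the template, label words, and model checkpoint are illustrative assumptions, not those used in the paper.

```python
# Sketch of prompt-based classification: wrap the input in a cloze template
# and score each class by the logit of its verbalizer word at [MASK].
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

verbalizer = {"positive": "good", "negative": "bad"}   # one label word per class
text = "The treatment relieved the patient's symptoms."
prompt = f"{text} Overall, it was {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in verbalizer.items()}
print(max(scores, key=scores.get))
```

In few-shot fine-tuning, the same template is used while the masked-LM head is trained on the handful of labeled examples, so the task stays close to the pre-training objective.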