no code implementations • 23 Mar 2024 • Xin Zhang, Tianjie Ju, Huijia Liang, Ying Fu, Qin Zhang
Interest in updating Large Language Models (LLMs) without retraining from scratch is substantial, yet doing so poses real challenges. This is especially true when complex reasoning must be learned from limited samples, a scenario we refer to as Paucity-Constrained Complex Reasoning Adaptation for LLMs (PCRA-LLM). Traditional methods such as Low-Rank Adaptation (LoRA) and Retrieval-Augmented Generation (RAG) are inadequate for this setting, as is particularly evident in our exploration of a specific medical context that epitomizes PCRA-LLM's distinct needs. To address the issue, we propose a Sequential Fusion method to incorporate knowledge from complex contexts into LLMs.
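As background, here is a minimal sketch of the LoRA baseline the abstract mentions, written against the Hugging Face `peft` API; the base model name and every hyperparameter are illustrative assumptions, not details from the paper:

```python
# Minimal LoRA fine-tuning setup: freeze the base weights and train only
# low-rank factors A, B so that W' = W + B @ A for the targeted layers.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # hypothetical base model choice
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling applied to B @ A
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA factors are trainable
```

Because only the small low-rank factors are updated, LoRA is cheap to run, but the abstract argues it falls short precisely when complex reasoning must be acquired from a handful of samples.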
no code implementations • 19 Mar 2024 • Yubin Zheng, Peng Tang, Tianjie Ju, Weidong Qiu, Bo Yan
Intra-client and inter-client consistency learning are introduced to smooth predictions at the data level and to avoid confirmation bias in local models.
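For illustration, the snippet below sketches prediction-level consistency regularization, one common way to realize the intra-client consistency described above; the two-view setup and the `model` are hypothetical placeholders rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    """Penalize disagreement between predictions on two augmented views."""
    with torch.no_grad():
        # The weakly augmented view serves as a fixed pseudo-target.
        p_weak = F.softmax(model(x_weak), dim=1)
    log_p_strong = F.log_softmax(model(x_strong), dim=1)
    # KL divergence pulls the strong-view prediction toward the weak-view
    # one, smoothing predictions and curbing confirmation bias.
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```

An inter-client analogue would compare a local model's predictions against the aggregated global model's, though the paper's exact construction may differ.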
1 code implementation • 25 Feb 2024 • Tianjie Ju, Weiwei Sun, Wei Du, Xinwei Yuan, Zhaochun Ren, Gongshen Liu
Previous work has showcased the intriguing capability of large language models (LLMs) to retrieve facts and process context knowledge.
no code implementations • 19 Feb 2024 • Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, Gongshen Liu
Recent work has showcased the powerful capability of large language models (LLMs) to recall knowledge and perform reasoning.
no code implementations • 8 Feb 2024 • Xinbei Ma, Tianjie Ju, Jiyang Qiu, Zhuosheng Zhang, Hai Zhao, Lifeng Liu, Yulong Wang
Q3: Which knowledge features are correlated with the performance and robustness of editing?