23 Mar 2024 • Xin Zhang, Tianjie Ju, Huijia Liang, Ying Fu, Qin Zhang
Interest in updating Large Language Models (LLMs) without retraining them from scratch is substantial, yet doing so poses real challenges. This is especially true in settings that demand complex reasoning over only a limited number of samples, a scenario we refer to as Paucity-Constrained Complex Reasoning Adaptation for LLMs (PCRA-LLM). Conventional methods such as Low-Rank Adaptation (LoRA) and Retrieval-Augmented Generation (RAG) prove inadequate for this problem, as is particularly evident in our study of a specific medical context that epitomizes the distinct needs of PCRA-LLM. To address the issue, we propose a Sequential Fusion method that incorporates knowledge from complex contexts into LLMs.
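For readers unfamiliar with the LoRA baseline the abstract compares against, the sketch below shows a minimal LoRA setup using the Hugging Face peft library. The base model ("gpt2"), target modules, and all hyperparameters here are illustrative assumptions for demonstration, not the paper's configuration, and this is the generic baseline technique rather than the proposed Sequential Fusion method.

```python
# Minimal LoRA fine-tuning setup with Hugging Face peft.
# "gpt2" and every hyperparameter below are illustrative
# assumptions, not the settings used in the paper.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # attention projection to adapt (GPT-2 naming)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```

Because only the low-rank adapter matrices receive gradients, LoRA's ability to absorb new knowledge depends on the gradient signal available from the training samples, which is precisely what is scarce in the PCRA-LLM setting the abstract describes.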