19 Mar 2024 • Rushi Qiang, Ruiyi Zhang, Pengtao Xie
Low-rank adaptation (LoRA) is a popular method for fine-tuning large-scale pre-trained models on downstream tasks by learning low-rank incremental matrices.
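As a minimal sketch of the low-rank incremental update LoRA is built on (variable names here are illustrative, not from the paper): the pretrained weight `W` is frozen, and only two small factors `A` and `B` are trained, so the effective weight is `W + B @ A` with rank far below the full matrix dimensions.

```python
import numpy as np

d_out, d_in, rank = 8, 8, 2
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # zero-init so W is unchanged at start

def forward(x):
    # Base path plus the low-rank incremental path.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B = 0 at initialization, the adapted model matches the pretrained one.
assert np.allclose(forward(x), W @ x)

# Trainable parameters: rank * (d_in + d_out) instead of d_in * d_out.
print(rank * (d_in + d_out), "vs", d_in * d_out)
```

Only `A` and `B` receive gradient updates during fine-tuning, which is where the parameter savings come from when `rank` is much smaller than the weight dimensions.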
14 Mar 2024 • Ruiyi Zhang, Rushi Qiang, Sai Ashish Somayajula, Pengtao Xie
Large-scale pretraining followed by task-specific finetuning has achieved great success across a wide range of NLP tasks.