Search Results for author: Chengshun Shi

Found 1 paper, 0 papers with code

Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning

no code implementations • 27 Feb 2024 • Pengjie Ren, Chengshun Shi, Shiguang Wu, Mengqi Zhang, Zhaochun Ren, Maarten de Rijke, Zhumin Chen, Jiahuan Pei

Parameter-efficient fine-tuning (PEFT) is a popular method for tailoring pre-trained large language models (LLMs), especially as the models' scale and the diversity of tasks increase.
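To make the PEFT idea concrete, below is a minimal sketch of a LoRA-style low-rank adapter layer, the family of methods the paper's mini-ensemble adapters build on. This is an illustrative example, not the authors' implementation: the class name, rank, and scaling choices are assumptions, and MELoRA itself composes several such small adapters rather than the single one shown here.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (LoRA-style sketch).

    The adapted weight is W + (alpha / r) * B @ A, where only A and B are
    trained. Mini-ensemble approaches such as MELoRA stack several small
    adapters of this kind; this sketch shows only the single-adapter case.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the adapter is a no-op before training.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Usage: wrap a pre-trained projection; only the ~2 * r * d adapter
# parameters receive gradients, which is what makes the method
# parameter-efficient.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(2, 768))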

Instruction Following • Natural Language Understanding
