no code implementations • 4 Mar 2024 • Feihu Jin, Yin Liu, Ying Tan
Parameter-efficient tuning methods such as LoRA can achieve performance comparable to full model tuning while updating only a small fraction of the parameters.
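As a rough illustration of the LoRA idea referenced above (not the paper's own code), the frozen weight W is augmented with a trainable low-rank update (alpha / r) * B @ A; all dimensions and the scaling value below are hypothetical:

```python
import numpy as np

# Hypothetical dimensions for illustration; r << min(d_out, d_in).
d_out, d_in, r = 8, 8, 2
alpha = 4.0  # assumed scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # trainable; zero init => no change at start

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(1, d_in))
# With B initialized to zero, the adapted model matches the frozen model.
assert np.allclose(lora_forward(x), x @ W.T)
```

The trainable parameter count is r * (d_in + d_out) per layer, versus d_in * d_out for tuning the full weight, which is where the parameter efficiency comes from.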
1 code implementation • 8 Feb 2024 • Feihu Jin, Yifan Liu, Ying Tan
Large Language Models (LLMs) have demonstrated remarkable performance across diverse tasks and exhibit impressive reasoning abilities when zero-shot Chain-of-Thought (CoT) prompting is applied.
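A minimal sketch of the standard two-stage zero-shot CoT prompting scheme mentioned above (the trigger-phrase formulation of Kojima et al., 2022, not this paper's specific method); `query_llm` is a hypothetical placeholder for a real LLM API call:

```python
def query_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "<model output>"

def build_reasoning_prompt(question: str) -> str:
    # Stage 1: append the CoT trigger phrase to elicit a reasoning chain.
    return f"Q: {question}\nA: Let's think step by step."

def zero_shot_cot(question: str) -> str:
    reasoning_prompt = build_reasoning_prompt(question)
    reasoning = query_llm(reasoning_prompt)
    # Stage 2: extract the final answer conditioned on the reasoning chain.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return query_llm(answer_prompt)
```

No task-specific exemplars are needed: the trigger phrase alone shifts the model into step-by-step reasoning, which is what makes the method zero-shot.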
1 code implementation • 18 Jan 2022 • Feihu Jin, Jinliang Lu, Jiajun Zhang, Chengqing Zong
Specifically, we assume that each learnable prompt token contributes differently to different instances, and we learn this contribution by computing a relevance score between an instance and each prompt token.
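One way the per-instance contribution described above could be realized is a softmax over scaled dot products between a pooled instance representation and the prompt-token embeddings; the sketch below is a hypothetical illustration under that assumption, not the paper's exact scoring function:

```python
import numpy as np

# Hypothetical sizes: n_prompt learnable prompt tokens of dimension d.
rng = np.random.default_rng(0)
d, n_prompt = 16, 4
prompt_tokens = rng.normal(size=(n_prompt, d))  # learnable prompt embeddings
instance = rng.normal(size=(d,))                # pooled instance representation

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Relevance score of each prompt token for this instance (sums to 1).
scores = softmax(prompt_tokens @ instance / np.sqrt(d))

# Instance-specific prompt: each token reweighted by its relevance.
weighted_prompt = scores[:, None] * prompt_tokens
```

Because the scores depend on the instance representation, each input effectively receives its own weighting of the shared prompt tokens.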