2 May 2024 • Andrew Parry, Debasis Ganguly, Manish Chandra
With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP): instead of fine-tuning the parameters of an LLM for a downstream task with labeled examples, a small number of such examples is appended to the prompt instruction to control the decoder's generation process.
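The prompt-construction step described above can be sketched in a few lines. This is a generic illustration of ICL, not the paper's setup; the template wording and the toy sentiment examples are assumptions.

```python
# Illustrative sketch of in-context learning (ICL): rather than fine-tune,
# a few labeled examples are appended to a task instruction so a frozen
# LLM conditions its generation on them. The template and example pool
# here are hypothetical, not taken from the paper.

def build_icl_prompt(instruction, examples, query):
    """Append k labeled examples to an instruction, then the test query."""
    parts = [instruction]
    for text, label in examples:
        parts.append(f"Input: {text}\nLabel: {label}")
    # The final "Label:" is left empty for the decoder to complete.
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

examples = [
    ("the film was a delight", "positive"),
    ("a tedious, overlong mess", "negative"),
]
prompt = build_icl_prompt(
    "Classify the sentiment of each review.", examples, "quietly moving"
)
print(prompt)
```

The resulting string would be passed verbatim to any text-generation endpoint; the model's continuation after the trailing `Label:` serves as the prediction.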
11 Mar 2024 • Manish Chandra, Debasis Ganguly, Yiwen Li, Iadh Ounis
While existing work uses a static number of examples at inference time for every data instance, in this paper we propose a novel methodology that dynamically adapts the number of examples to each instance.
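The abstract does not spell out how the per-instance example count is chosen, so the following is only one plausible instantiation of "dynamically adapting the number of examples", not the paper's method: rank a candidate pool by a simple token-overlap similarity to the query and keep adding examples until a similarity budget is met, up to a cap. The `budget` and `k_max` parameters are hypothetical.

```python
# Hypothetical sketch of per-instance adaptation of the number of ICL
# examples. The stopping heuristic (a cumulative-similarity budget) is
# illustrative only and is not claimed to match the paper.

def jaccard(a, b):
    """Token-level Jaccard similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_examples(query, pool, budget=0.5, k_max=4):
    """Pick a per-query number of (text, label) examples from the pool."""
    ranked = sorted(pool, key=lambda ex: jaccard(query, ex[0]), reverse=True)
    chosen, total = [], 0.0
    for ex in ranked:
        # Stop early once the cap or the similarity budget is reached,
        # so easy queries get fewer examples than hard ones.
        if len(chosen) >= k_max or total >= budget:
            break
        chosen.append(ex)
        total += jaccard(query, ex[0])
    return chosen
```

Under this sketch, a query closely matching one pool example stops after a single shot, while an out-of-distribution query falls back to the maximum allowed number of examples.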