1 code implementation • 5 Dec 2023 • Max Klabunde, Mehdi Ben Amor, Michael Granitzer, Florian Lemmerich
Understanding the similarity of the numerous released large language models (LLMs) has many uses, e.g., simplifying model selection, detecting illegal model reuse, and advancing our understanding of what makes LLMs perform well.
1 code implementation • 26 Apr 2023 • Mehdi Ben Amor, Michael Granitzer, Jelena Mitrović
We conduct an in-depth evaluation of the impact of position bias on the performance of LMs when fine-tuned on token classification benchmarks.