no code implementations • 2 Jul 2024 • Musashi Hinck, Carolin Holtermann, Matthew Lyle Olson, Florian Schneider, Sungduk Yu, Anahita Bhiwandiwalla, Anne Lauscher, ShaoYen Tseng, Vasudev Lal
We uncover a surprising multilingual bias in a popular class of multimodal vision-language models (VLMs).
1 code implementation • 6 Mar 2024 • Carolin Holtermann, Paul Röttger, Timm Dill, Anne Lauscher
In this paper, we investigate the basic multilingual capabilities of state-of-the-art open LLMs beyond their intended use.
1 code implementation • 23 Jan 2024 • Carolin Holtermann, Markus Frohmann, Navid Rekabsaz, Anne Lauscher
The knowledge encapsulated in a model is the core factor determining its final performance on downstream tasks.
1 code implementation • 2 Oct 2023 • Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher, Navid Rekabsaz
We introduce ScaLearn, a simple and highly parameter-efficient two-stage MTL method that capitalizes on the knowledge of the source tasks by learning a minimal set of scaling parameters that enable effective transfer to a target task.
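A minimal sketch of the scaling-parameter idea behind ScaLearn, assuming frozen source-task adapter modules whose outputs are combined with one learned scalar per source task; the class and variable names here are hypothetical illustrations, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

class ScaledTransfer(nn.Module):
    """Hypothetical sketch of ScaLearn-style transfer: stage one trains
    adapters on source tasks; stage two freezes them and learns only a
    minimal set of scaling parameters for the target task."""

    def __init__(self, source_adapters):
        super().__init__()
        # Stage 1 artifacts: source-task adapters, kept frozen in stage 2.
        self.source_adapters = nn.ModuleList(source_adapters)
        for adapter in self.source_adapters:
            adapter.requires_grad_(False)
        # Stage 2: the only trainable parameters are these scalars.
        self.scales = nn.Parameter(torch.ones(len(source_adapters)))

    def forward(self, hidden_states):
        # Weighted sum of frozen source-task adapter outputs,
        # with weights learned on the target task.
        outputs = torch.stack(
            [adapter(hidden_states) for adapter in self.source_adapters]
        )  # shape: (num_sources, batch, seq_len, hidden)
        return torch.einsum("s,sbtd->btd", self.scales, outputs)
```

Under these assumptions, only `len(source_adapters)` scalars are updated per target task, which is what would make the transfer highly parameter-efficient.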
1 code implementation • ACL 2022 • Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto
We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models, using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning.
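A minimal sketch of the lightweight adapter idea this relies on, assuming a standard bottleneck adapter with a residual connection; the module and its dimensions are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter: a small down/up projection plus a
    residual connection. During debiasing or argumentative fine-tuning,
    only these few parameters are trained while the transformer backbone
    stays frozen, which is what makes the approach more parameter-efficient
    than full fine-tuning."""

    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.ReLU()

    def forward(self, hidden_states):
        # Residual connection keeps the frozen backbone's representation
        # intact and lets the adapter learn only a small correction.
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```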