Search Results for author: Carolin Holtermann

Found 4 papers, 4 papers with code

Evaluating the Elementary Multilingual Capabilities of Large Language Models with MultiQ

1 code implementation · 6 Mar 2024 · Carolin Holtermann, Paul Röttger, Timm Dill, Anne Lauscher

In this paper, we investigate the basic multilingual capabilities of state-of-the-art open LLMs beyond their intended use.

Open-Ended Question Answering

What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition

1 code implementation · 23 Jan 2024 · Carolin Holtermann, Markus Frohmann, Navid Rekabsaz, Anne Lauscher

The knowledge encapsulated in a model is the core factor determining its final performance on downstream tasks.

Benchmarking

ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale

1 code implementation · 2 Oct 2023 · Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher, Navid Rekabsaz

We introduce ScaLearn, a simple and highly parameter-efficient two-stage MTL method that capitalizes on the knowledge of the source tasks by learning a minimal set of scaling parameters that enable effective knowledge transfer to a target task.

Multi-Task Learning
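
The abstract above describes a two-stage setup in which only a small set of scaling parameters is trained to combine knowledge from source tasks. A minimal sketch of that idea, with assumed shapes and names (not the authors' implementation), might look like:

```python
# Sketch of a ScaLearn-style combination step (illustrative only):
# stage 1 trains and freezes source-task modules; stage 2 learns just one
# scalar weight per source task to mix their outputs for the target task.
import numpy as np

rng = np.random.default_rng(0)
hidden = 8
num_sources = 3

# Stand-ins for frozen source-task adapter outputs for one token.
source_outputs = [rng.standard_normal(hidden) for _ in range(num_sources)]

# The only trainable parameters in stage 2: one scalar per source task.
omega = np.full(num_sources, 1.0 / num_sources)

def combine(outputs, weights):
    """Weighted sum of frozen source-task representations."""
    return sum(w * o for w, o in zip(weights, outputs))

target_repr = combine(source_outputs, omega)
assert target_repr.shape == (hidden,)
```

Because only `omega` is updated for the target task, the number of trainable parameters is tiny compared with full fine-tuning, which is the parameter-efficiency claim in the abstract.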

Fair and Argumentative Language Modeling for Computational Argumentation

1 code implementation · ACL 2022 · Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto

We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the intrinsic bias found in transformer-based language models using a lightweight adapter-based approach that is more sustainable and parameter-efficient than full fine-tuning.

Language Modelling
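
The abstract mentions a lightweight adapter-based approach as a parameter-efficient alternative to full fine-tuning. A generic bottleneck-adapter sketch (assumed dimensions; an illustration of the adapter idea, not the paper's exact setup) is:

```python
# Bottleneck adapter with a residual connection (illustrative sketch).
# Only the small down/up projection matrices are trained; the base
# transformer weights stay frozen, which is what makes the approach
# parameter-efficient relative to full fine-tuning.
import numpy as np

rng = np.random.default_rng(1)
hidden, bottleneck = 16, 4

# Trainable adapter parameters.
W_down = rng.standard_normal((hidden, bottleneck)) * 0.1
W_up = rng.standard_normal((bottleneck, hidden)) * 0.1

def adapter(x):
    """Compute x + up(relu(down(x)))."""
    h = np.maximum(x @ W_down, 0.0)  # down-project, then ReLU
    return x + h @ W_up              # up-project, add residual

x = rng.standard_normal(hidden)
y = adapter(x)
assert y.shape == (hidden,)
```

The residual form means an adapter initialized near zero leaves the frozen model's behavior almost unchanged, so training can adjust it gradually.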
