1 code implementation • EMNLP (BlackboxNLP) 2021 • Jenny Kunz, Marco Kuhlmann
Previous work on probing word representations for linguistic knowledge has focused on interpolation tasks.
1 code implementation • COLING 2022 • Jenny Kunz, Marco Kuhlmann
Probing studies have extensively explored where in neural language models linguistic information is located.
1 code implementation • 10 Jan 2025 • Romina Oji, Jenny Kunz
This paper investigates the optimal use of the multilingual encoder model mDeBERTa for tasks in three Germanic languages -- German, Swedish, and Icelandic -- which represent varying levels of presence, and likely varying data quality, in mDeBERTa's pre-training data.
1 code implementation • 17 Dec 2024 • Jenny Kunz
Smaller LLMs still face significant challenges even in medium-resourced languages, particularly when it comes to language-specific knowledge -- a problem not easily resolved with machine-translated data.
no code implementations • 16 Feb 2024 • Jenny Kunz, Marco Kuhlmann
The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning.
1 code implementation • 7 Feb 2024 • Marc Braun, Jenny Kunz
The self-rationalising capabilities of LLMs are appealing because the generated explanations can give insights into the plausibility of the predictions.
1 code implementation • 31 Jan 2024 • Jenny Kunz, Oskar Holmström
Modular deep learning has been proposed for the efficient adaption of pre-trained models to new tasks, domains and languages.
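A common building block in modular deep learning is the bottleneck adapter: a small trainable module inserted into a frozen pre-trained model. The sketch below is illustrative only (the layer sizes, zero-initialisation, and function names are assumptions, not details from the paper); it shows why the residual connection makes the adapter a near-identity at initialisation, preserving the pre-trained model's behaviour before fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden, bottleneck = 32, 4  # illustrative sizes, not from the paper
W_down = rng.normal(scale=0.02, size=(hidden, bottleneck))  # down-projection
W_up = np.zeros((bottleneck, hidden))  # up-projection, zero-initialised

def adapter(h):
    # Residual connection: because W_up starts at zero, the adapter is
    # exactly the identity at initialisation, so the frozen model's
    # outputs are unchanged until the adapter is trained.
    z = np.maximum(h @ W_down, 0.0)  # ReLU bottleneck
    return h + z @ W_up

x = rng.normal(size=(3, hidden))
out = adapter(x)  # equals x at initialisation
```

Only `W_down` and `W_up` would be updated during task or language adaptation, which is what makes the approach parameter-efficient.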
1 code implementation • COLING 2020 • Jenny Kunz, Marco Kuhlmann
Classifiers trained on auxiliary probing tasks are a popular tool to analyze the representations learned by neural sentence encoders such as BERT and ELMo.
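A probing classifier in its simplest form is a linear model trained on frozen encoder representations to predict a linguistic property. The sketch below is a minimal illustration of that idea, not the paper's setup: the "embeddings" and labels are synthetic stand-ins, and the probe is a hand-rolled logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for frozen contextual embeddings and a binary
# linguistic label (e.g. "is this token a noun?").
n, dim = 200, 16
embeddings = rng.normal(size=(n, dim))            # frozen encoder outputs (fake)
true_w = rng.normal(size=dim)
labels = (embeddings @ true_w > 0).astype(float)  # synthetic property

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a linear (logistic-regression) probe by gradient descent;
# the embeddings themselves are never updated.
w = np.zeros(dim)
for _ in range(500):
    p = sigmoid(embeddings @ w)
    w -= 0.1 * embeddings.T @ (p - labels) / n    # gradient of mean cross-entropy

accuracy = ((sigmoid(embeddings @ w) > 0.5) == labels).mean()
```

High probe accuracy is usually read as evidence that the property is linearly decodable from the representations, which is exactly the inference that probing studies (including this one) scrutinise.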
no code implementations • WS 2019 • Jenny Kunz, Christian Hardmeier
We explore different approaches to explicit entity modelling in language models (LMs).