1 code implementation • CLASP 2022 • Felix Morger, Stephanie Brandl, Lisa Beinborn, Nora Hollenstein
Relative word importance is a key metric for natural language processing.
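As an illustration of what such importance scores can look like (a hedged sketch, not the paper's method), per-word importance is often derived from a pretrained Transformer's attention. The model name, the averaging over heads and query positions, and the example sentence are all assumptions:

```python
# A minimal sketch (not the paper's method) of one common way to estimate
# relative word importance: averaging the last-layer attention each token
# receives in a pretrained Transformer. Model and pooling are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The movie was surprisingly good", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last layer has shape (batch, heads, seq, seq); average over heads and over
# query positions to get one importance score per token.
attn = outputs.attentions[-1].mean(dim=1).mean(dim=1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, attn):
    print(f"{token:>12s}  {score:.3f}")
```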
1 code implementation • 20 Mar 2024 • Ilias Chalkidis, Stephanie Brandl
Instruction-finetuned Large Language Models inherit clear political leanings that have been shown to influence downstream task performance.
1 code implementation • 29 Feb 2024 • Stephanie Brandl, Oliver Eberle, Tiago Ribeiro, Anders Søgaard, Nora Hollenstein
Rationales in the form of manually annotated input spans usually serve as ground truth when evaluating explainability methods in NLP.
1 code implementation • 26 Oct 2023 • Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott
We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models.
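For intuition, here is a hedged sketch of a bias-amplification measure in the spirit of Zhao et al. (2017), comparing how skewed model predictions are toward a demographic group relative to the skew already present in the data. The counts below are toy values, not results from the paper:

```python
# Toy bias-amplification computation (illustrative numbers, not the paper's).
def bias_score(group_count: int, total_count: int) -> float:
    """Fraction of instances of a concept associated with one group."""
    return group_count / total_count

# e.g., suppose "cooking" co-occurs with women in 66% of training captions...
train_bias = bias_score(group_count=660, total_count=1000)
# ...but the model predicts "woman" for 84% of cooking images.
pred_bias = bias_score(group_count=840, total_count=1000)

# Positive values mean the model amplifies the skew found in the data.
amplification = pred_bias - train_bias
print(f"bias amplification: {amplification:+.2f}")  # +0.18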
no code implementations • 25 Oct 2023 • Stephanie Brandl, Emanuele Bugliarello, Ilias Chalkidis
In order to build reliable and trustworthy NLP applications, models need to be both fair across different demographics and explainable.
no code implementations • 18 Oct 2023 • Oliver Eberle, Ilias Chalkidis, Laura Cabello, Stephanie Brandl
A cross-comparison between model-based rationales and human annotations, both in contrastive and non-contrastive settings, yields high agreement between the two settings for models as well as for humans.
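One simple way to score such agreement (an assumption for illustration, not necessarily the paper's exact protocol) is token-level F1 between the spans selected as rationales under the two settings:

```python
# Token-level F1 between two rationales, each given as a set of token indices.
def token_f1(rationale_a: set[int], rationale_b: set[int]) -> float:
    """F1 over token indices marked as rationale by each setting."""
    if not rationale_a or not rationale_b:
        return 0.0
    overlap = len(rationale_a & rationale_b)
    if overlap == 0:
        return 0.0
    precision = overlap / len(rationale_a)
    recall = overlap / len(rationale_b)
    return 2 * precision * recall / (precision + recall)

# Toy example: tokens 2, 3, 7 vs. tokens 2, 3, 5 selected as evidence.
print(f"{token_f1({2, 3, 7}, {2, 3, 5}):.2f}")  # 0.67
```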
1 code implementation • 31 Mar 2023 • Tiago Ribeiro, Stephanie Brandl, Anders Søgaard, Nora Hollenstein
We present WebQAmGaze, a multilingual, low-cost eye-tracking-while-reading dataset, designed as the first webcam-based eye-tracking corpus of reading to support the development of explainable computational language processing models.
1 code implementation • 6 Oct 2022 • Stephanie Brandl, David Lassner, Anne Baillot, Shinichi Nakajima
Complementary to finding good general word embeddings, an important question for representation learning is how to learn dynamic word embeddings, e.g., across time or domains.
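For illustration, one standard approach to comparing embeddings across time slices (a hedged sketch of orthogonal Procrustes alignment, not necessarily this paper's method) aligns two separately trained embedding spaces and then measures per-word drift; the matrices below are random toy data:

```python
# Orthogonal Procrustes alignment of two embedding spaces (toy data).
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
emb_1990 = rng.normal(size=(5000, 100))  # vocab x dim, period 1
emb_2020 = rng.normal(size=(5000, 100))  # same vocab order, period 2

# Find the rotation R minimizing ||emb_1990 @ R - emb_2020||_F, then map the
# old space into the new one so word vectors become directly comparable.
R, _ = orthogonal_procrustes(emb_1990, emb_2020)
emb_1990_aligned = emb_1990 @ R

# Semantic change of word i ~ cosine distance between its aligned vectors.
i = 42
a, b = emb_1990_aligned[i], emb_2020[i]
drift = 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"drift for word {i}: {drift:.3f}")
```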
1 code implementation • 5 Oct 2022 • Stephanie Brandl, Nora Hollenstein
Human fixation patterns have been shown to correlate strongly with Transformer-based attention.
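The kind of comparison this line of work relies on can be sketched in a few lines: rank-correlate per-word human fixation durations with per-word attention scores. The numbers below are toy values, not real eye-tracking data:

```python
# Spearman rank correlation between fixation durations and attention scores.
from scipy.stats import spearmanr

fixation_ms = [210, 95, 340, 120, 280]      # total reading time per word
attention = [0.30, 0.05, 0.35, 0.10, 0.20]  # model attention per word

rho, pval = spearmanr(fixation_ms, attention)
print(f"Spearman rho = {rho:.2f} (p = {pval:.2f})")  # rho = 0.90
```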
1 code implementation • 3 May 2022 • Stephanie Brandl, Daniel Hershcovich, Anders Søgaard
We argue that we need to evaluate model interpretability methods 'in the wild', i.e., in situations where professionals make critical decisions, and models can potentially assist them.
1 code implementation • ACL 2022 • Stephanie Brandl, Oliver Eberle, Jonas Pilot, Anders Søgaard
We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during task-reading as classical cognitive models of human attention.
1 code implementation • NAACL 2022 • Stephanie Brandl, Ruixiang Cui, Anders Søgaard
Gender-neutral pronouns have recently been introduced in many languages to a) include non-binary people and b) serve as a generic singular.
no code implementations • ACL 2022 • Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, Anders Søgaard
Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages.
no code implementations • 14 Jan 2020 • Stephanie Brandl, David Lassner, Maximilian Alber
Word embeddings capture semantic relationships based on contextual information and are the basis for a wide variety of natural language processing applications.