Search Results for author: Stephanie Brandl

Found 14 papers, 10 papers with code

Llama meets EU: Investigating the European Political Spectrum through the Lens of LLMs

1 code implementation • 20 Mar 2024 • Ilias Chalkidis, Stephanie Brandl

Instruction-finetuned Large Language Models inherit clear political leanings that have been shown to influence downstream task performance.

Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations

1 code implementation • 29 Feb 2024 • Stephanie Brandl, Oliver Eberle, Tiago Ribeiro, Anders Søgaard, Nora Hollenstein

Rationales in the form of manually annotated input spans usually serve as ground truth when evaluating explainability methods in NLP.


Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

1 code implementation • 26 Oct 2023 • Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott

We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models.

Fairness • Retrieval

On the Interplay between Fairness and Explainability

no code implementations • 25 Oct 2023 • Stephanie Brandl, Emanuele Bugliarello, Ilias Chalkidis

In order to build reliable and trustworthy NLP applications, models need to be both fair across different demographics and explainable.

Fairness • Multi Class Text Classification • +2

Rather a Nurse than a Physician -- Contrastive Explanations under Investigation

no code implementations • 18 Oct 2023 • Oliver Eberle, Ilias Chalkidis, Laura Cabello, Stephanie Brandl

A cross-comparison between model-based rationales and human annotations, both in contrastive and non-contrastive settings, yields a high agreement between the two settings for models as well as for humans.

Text Classification

WebQAmGaze: A Multilingual Webcam Eye-Tracking-While-Reading Dataset

1 code implementation • 31 Mar 2023 • Tiago Ribeiro, Stephanie Brandl, Anders Søgaard, Nora Hollenstein

We present WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset, designed as the first webcam-based eye-tracking corpus of reading to support the development of explainable computational language processing models.

Question Answering

Domain-Specific Word Embeddings with Structure Prediction

1 code implementation • 6 Oct 2022 • Stephanie Brandl, David Lassner, Anne Baillot, Shinichi Nakajima

Complementary to finding good general word embeddings, an important question for representation learning is to find dynamic word embeddings, e.g., across time or domain.

Philosophy • Representation Learning • +1

Every word counts: A multilingual analysis of individual human alignment with model attention

1 code implementation • 5 Oct 2022 • Stephanie Brandl, Nora Hollenstein

Human fixation patterns have been shown to correlate strongly with Transformer-based attention.

Evaluating Deep Taylor Decomposition for Reliability Assessment in the Wild

1 code implementation • 3 May 2022 • Stephanie Brandl, Daniel Hershcovich, Anders Søgaard

We argue that we need to evaluate model interpretability methods 'in the wild', i.e., in situations where professionals make critical decisions, and models can potentially assist them.

Decision Making

Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze?

1 code implementation • ACL 2022 • Stephanie Brandl, Oliver Eberle, Jonas Pilot, Anders Søgaard

We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during task-reading as classical cognitive models of human attention.

Relation Extraction • Sentiment Analysis

How Conservative are Language Models? Adapting to the Introduction of Gender-Neutral Pronouns

1 code implementation • NAACL 2022 • Stephanie Brandl, Ruixiang Cui, Anders Søgaard

Gender-neutral pronouns have recently been introduced in many languages to a) include non-binary people and b) as a generic singular.

Balancing the composition of word embeddings across heterogeneous data sets

no code implementations • 14 Jan 2020 • Stephanie Brandl, David Lassner, Maximilian Alber

Word embeddings capture semantic relationships based on contextual information and are the basis for a wide variety of natural language processing applications.

Word Embeddings • Word Similarity
