Search Results for author: Laura Cabello

Found 7 papers, 3 papers with code

Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

1 code implementation · 26 Oct 2023 · Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott

We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models.

Fairness Retrieval

Rather a Nurse than a Physician -- Contrastive Explanations under Investigation

no code implementations · 18 Oct 2023 · Oliver Eberle, Ilias Chalkidis, Laura Cabello, Stephanie Brandl

A cross-comparison between model-based rationales and human annotations, in both contrastive and non-contrastive settings, yields high agreement between the two settings for models as well as for humans.

Text Classification

Being Right for Whose Right Reasons?

1 code implementation · 1 Jun 2023 · Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard

Explainability methods are used to benchmark the extent to which model predictions align with human rationales, i.e., are 'right for the right reasons'.

Common Sense Reasoning Fairness +1

On the Independence of Association Bias and Empirical Fairness in Language Models

no code implementations · 20 Apr 2023 · Laura Cabello, Anna Katrine Jørgensen, Anders Søgaard

To this end, we first provide a thought experiment, showing how association bias and empirical fairness can be completely orthogonal.

Fairness

Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study

1 code implementation · 30 Mar 2023 · Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, Daniel Hershcovich

The recent release of ChatGPT has garnered widespread recognition for its exceptional ability to generate human-like responses in dialogue.

Cultural Vocal Bursts Intensity Prediction
