1 code implementation • 26 Oct 2023 • Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott
We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models.
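Bias amplification is typically measured by comparing how strongly a protected group co-occurs with a concept in the training data versus in model predictions. Below is a minimal sketch of one common formulation (in the spirit of co-occurrence-based bias amplification metrics), not necessarily the exact metric used in this paper; all names and the toy data are illustrative.

```python
from collections import Counter

def bias_score(pairs, concept, group, groups):
    """Estimate P(group | concept) from (concept, group) co-occurrence counts."""
    counts = Counter(pairs)
    total = sum(counts[(concept, g)] for g in groups)
    return counts[(concept, group)] / total if total else 0.0

def bias_amplification(train_pairs, pred_pairs, concepts, groups):
    """Mean shift in bias score from training data to model predictions,
    averaged over (concept, group) pairs skewed toward that group in training."""
    deltas = []
    for c in concepts:
        for g in groups:
            b_train = bias_score(train_pairs, c, g, groups)
            if b_train > 1.0 / len(groups):  # concept is skewed toward g in training
                b_pred = bias_score(pred_pairs, c, g, groups)
                deltas.append(b_pred - b_train)
    return sum(deltas) / len(deltas) if deltas else 0.0

# Toy usage: (concept, group) pairs from annotations vs. model outputs.
train = [("cooking", "woman")] * 6 + [("cooking", "man")] * 4
preds = [("cooking", "woman")] * 8 + [("cooking", "man")] * 2
print(bias_amplification(train, preds, ["cooking"], ["woman", "man"]))  # positive => amplification
```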
no code implementations • 18 Oct 2023 • Oliver Eberle, Ilias Chalkidis, Laura Cabello, Stephanie Brandl
A cross-comparison between model-based rationales and human annotations, in both contrastive and non-contrastive settings, yields high agreement between the two settings for models as well as for humans.
no code implementations • 5 Jun 2023 • Laura Cabello, Jiaang Li, Ilias Chalkidis
We then evaluate its ability to acquire new knowledge and include it in its reasoning process.
1 code implementation • 1 Jun 2023 • Terne Sasha Thorn Jakobsen, Laura Cabello, Anders Søgaard
Explainability methods are used to benchmark the extent to which model predictions align with human rationales, i.e., are 'right for the right reasons'.
no code implementations • 20 Apr 2023 • Laura Cabello, Anna Katrine Jørgensen, Anders Søgaard
To this end, we first provide a thought experiment, showing how association bias and empirical fairness can be completely orthogonal.
no code implementations • 31 Mar 2023 • Li Zhou, Laura Cabello, Yong Cao, Daniel Hershcovich
Detecting offensive language is a challenging task.
1 code implementation • 30 Mar 2023 • Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, Daniel Hershcovich
The recent release of ChatGPT has garnered widespread recognition for its exceptional ability to generate human-like responses in dialogue.