1 code implementation • 24 Feb 2024 • Seraphina Goldfarb-Tarrant, Pedro Rodriguez, Jane Dwivedi-Yu, Patrick Lewis
Dense retrievers compress source documents into (possibly lossy) vector representations, yet there is little analysis of what information is lost versus preserved, and how it affects downstream tasks.
no code implementations • 22 May 2023 • Seraphina Goldfarb-Tarrant, Eddie Ungless, Esma Balkir, Su Lin Blodgett
Bias research in NLP seeks to analyse models for social biases, thus helping NLP practitioners uncover, measure, and mitigate social harms.
no code implementations • 22 May 2023 • Seraphina Goldfarb-Tarrant, Björn Ross, Adam Lopez
We also find racial biases to be much more prevalent than gender biases.
no code implementations • 19 May 2023 • Seraphina Goldfarb-Tarrant, Adam Lopez, Roi Blanco, Diego Marcheggiani
To remedy this, we build a counterfactual evaluation corpus for gender and racial/migrant bias in four languages.
2 code implementations • NAACL 2022 • Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov
Common studies of gender bias in NLP focus either on extrinsic bias measured by model performance on a downstream task or on intrinsic bias found in models' internal representations.
no code implementations • ACL 2021 • Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sanchez, Mugdha Pandya, Adam Lopez
We urge researchers working on debiasing to focus on extrinsic measures of bias, and to make using these measures more feasible via creation of new challenge sets and annotated test data.
1 code implementation • EMNLP (sdp) 2020 • Seraphina Goldfarb-Tarrant, Alexander Robertson, Jasmina Lazic, Theodora Tsouloufi, Louise Donnison, Karen Smyth
We construct a pipeline that automates each of these aspects, and experiment with many human-time vs. system-quality trade-offs.
1 code implementation • EMNLP 2020 • Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, Nanyun Peng
Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion.
1 code implementation • NAACL 2019 • Seraphina Goldfarb-Tarrant, Haining Feng, Nanyun Peng
We compare different varieties of interaction in story-writing, story-planning, and diversity controls under time constraints, and show that increased types of human collaboration at both the planning and writing stages result in a 10-50% improvement in story quality compared to less interactive baselines.