Search Results for author: Seraphina Goldfarb-Tarrant

Found 11 papers, 5 papers with code

The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm

no code implementations 26 Jun 2024 Aakanksha, Arash Ahmadian, Beyza Ermis, Seraphina Goldfarb-Tarrant, Julia Kreutzer, Marzieh Fadaee, Sara Hooker

We collect the first set of human annotated red-teaming prompts in different languages distinguishing between global and local harm, which serve as a laboratory for understanding the reliability of alignment techniques when faced with preference distributions that are non-stationary across geographies and languages.

Cross-Lingual Transfer

MultiContrievers: Analysis of Dense Retrieval Representations

1 code implementation 24 Feb 2024 Seraphina Goldfarb-Tarrant, Pedro Rodriguez, Jane Dwivedi-Yu, Patrick Lewis

Dense retrievers compress source documents into (possibly lossy) vector representations, yet there is little analysis of what information is lost versus preserved, and how it affects downstream tasks.

Retrieval
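
To make the object of study concrete, here is a minimal dense-retrieval sketch that follows the usage shown on the public facebook/contriever model card: queries and documents are mean-pooled into single vectors and ranked by dot product. The example sentences are invented for illustration, and the MultiContrievers analysis probes what such vectors preserve rather than simply using them for ranking.

```python
# Minimal dense-retrieval sketch (assumes PyTorch + transformers installed).
# Mean-pool token embeddings into one vector per text, then score by dot product.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/contriever")
model = AutoModel.from_pretrained("facebook/contriever")

def embed(texts):
    """Encode texts into single dense vectors via attention-masked mean pooling."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = model(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

query = ["How do dense retrievers represent documents?"]        # illustrative only
docs = ["Dense retrievers encode documents as vectors.",        # illustrative only
        "The lighthouse keeper lived alone on the island."]

scores = embed(query) @ embed(docs).T   # dot-product relevance scores
print(scores)  # higher score = document ranked as more relevant
```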

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models

no code implementations 22 May 2023 Seraphina Goldfarb-Tarrant, Eddie Ungless, Esma Balkir, Su Lin Blodgett

Bias research in NLP seeks to analyse models for social biases, thus helping NLP practitioners uncover, measure, and mitigate social harms.

Experimental Design

How Gender Debiasing Affects Internal Model Representations, and Why It Matters

2 code implementations NAACL 2022 Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov

Common studies of gender bias in NLP focus either on extrinsic bias measured by model performance on a downstream task or on intrinsic bias found in models' internal representations.
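
As a rough illustration of the intrinsic/extrinsic distinction (a toy sketch with invented data, not the metrics used in the paper): an intrinsic score probes geometry inside the representation space, while an extrinsic score compares a model's downstream behaviour across demographic groups.

```python
# Toy sketch of intrinsic vs. extrinsic gender-bias measurement (numpy only).
# The embeddings and predictions below are random/invented; real studies use
# trained models, curated word lists, and annotated downstream data.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Intrinsic: association of a target word's embedding with gendered anchor words.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["nurse", "he", "she"]}
intrinsic_bias = cosine(emb["nurse"], emb["she"]) - cosine(emb["nurse"], emb["he"])

# Extrinsic: performance gap of a downstream classifier between groups,
# e.g. the true-positive-rate gap on examples mentioning women vs. men.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array(["f", "f", "f", "f", "f", "m", "m", "m"])

def tpr(mask):
    pos = (y_true == 1) & mask
    return (y_pred[pos] == 1).mean()

extrinsic_gap = tpr(group == "f") - tpr(group == "m")
print(intrinsic_bias, extrinsic_gap)
```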

Intrinsic Bias Metrics Do Not Correlate with Application Bias

no code implementations ACL 2021 Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sanchez, Mugdha Pandya, Adam Lopez

We urge researchers working on debiasing to focus on extrinsic measures of bias, and to make using these measures more feasible via creation of new challenge sets and annotated test data.

Word Embeddings

Content Planning for Neural Story Generation with Aristotelian Rescoring

1 code implementation EMNLP 2020 Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, Nanyun Peng

Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion.

Language Modelling, Sentence +1
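
Below is a minimal plan-then-write sketch of the content-planning idea, using hypothetical prompts and an off-the-shelf GPT-2 via Hugging Face (this is not the paper's Aristotelian rescoring pipeline): a short plot outline is generated first, and the surface story is then conditioned on that outline to encourage global structure.

```python
# Minimal two-stage plan-then-write sketch (assumes transformers installed).
# Prompts and premise are invented for illustration.
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")

premise = "A lighthouse keeper discovers a message in a bottle."

# Stage 1: generate a skeletal plot outline from the premise.
plan_prompt = f"Premise: {premise}\nPlot outline:\n1."
plan = generator(plan_prompt, max_new_tokens=60)[0]["generated_text"]

# Stage 2: write the story conditioned on the premise plus the generated plan.
story_prompt = plan + "\nStory:\n"
story = generator(story_prompt, max_new_tokens=120)[0]["generated_text"]

print(story)
```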

Plan, Write, and Revise: an Interactive System for Open-Domain Story Generation

1 code implementation NAACL 2019 Seraphina Goldfarb-Tarrant, Haining Feng, Nanyun Peng

We compare different varieties of interaction in story-writing, story-planning, and diversity controls under time constraints, and show that increased types of human collaboration at both planning and writing stages result in a 10-50% improvement in story quality as compared to less interactive baselines.

Diversity, Story Generation
