Search Results for author: Luiza Pozzobon

Found 4 papers, 3 with code

From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models

1 code implementation · 6 Mar 2024 · Luiza Pozzobon, Patrick Lewis, Sara Hooker, Beyza Ermis

To date, toxicity mitigation in language models has almost entirely been focused on single-language settings.

Cross-Lingual Transfer

Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models

1 code implementation · 11 Oct 2023 · Luiza Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker

Considerable effort has been dedicated to mitigating toxicity, but existing methods often require drastic modifications to model parameters or the use of computationally intensive auxiliary models.

Retrieval · Text Generation

When Less is More: Investigating Data Pruning for Pretraining LLMs at Scale

no code implementations · 8 Sep 2023 · Max Marion, Ahmet Üstün, Luiza Pozzobon, Alex Wang, Marzieh Fadaee, Sara Hooker

In this work, we take a wider view and explore scalable estimates of data quality that can be used to systematically measure the quality of pretraining data.

Memorization

On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research

1 code implementation · 24 Apr 2023 · Luiza Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker

We evaluate the implications of these changes on the reproducibility of findings that compare the relative merits of models and methods that aim to curb toxicity.
