Search Results for author: Cooper Raterink

Found 2 papers, 0 papers with code

Mitigating harm in language models with conditional-likelihood filtration

no code implementations · 4 Aug 2021 · Helen Ngo, Cooper Raterink, João G. M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, Nicholas Frosst

Language models trained on large-scale unfiltered datasets curated from the open web acquire systemic biases, prejudices, and harmful views from their training data.

Task: Language Modelling
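
The title points at a concrete data-filtering technique. Below is a minimal sketch of one way conditional-likelihood filtration could work, assuming the idea is to score each candidate training document by the likelihood a language model assigns to harmful "trigger phrases" when conditioned on that document, and to drop documents that make such phrases too likely. The trigger phrases, the threshold, and the use of GPT-2 as the scoring model here are illustrative assumptions, not the paper's implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical placeholders; the paper would use its own phrase set and cutoff.
TRIGGER_PHRASES = ["<hypothetical harmful statement>"]
THRESHOLD = -4.0  # cutoff on mean per-token log-likelihood


def conditional_log_likelihood(context: str, continuation: str) -> float:
    """Mean per-token log-likelihood of `continuation` given `context`."""
    ctx_ids = tokenizer.encode(context)
    cont_ids = tokenizer.encode(continuation)
    input_ids = torch.tensor([ctx_ids + cont_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    # Each continuation token is predicted from the position just before it.
    log_probs = torch.log_softmax(logits[0, len(ctx_ids) - 1 : -1], dim=-1)
    token_lls = log_probs[torch.arange(len(cont_ids)), torch.tensor(cont_ids)]
    return token_lls.mean().item()


def keep_document(doc: str) -> bool:
    """Keep a document only if no trigger phrase becomes too likely given it."""
    return all(
        conditional_log_likelihood(doc, phrase) < THRESHOLD
        for phrase in TRIGGER_PHRASES
    )


corpus = ["an example web document ..."]
filtered = [doc for doc in corpus if keep_document(doc)]
```

A filter of this shape is cheap relative to retraining: it only requires forward passes of a fixed scoring model over the corpus, and the phrase set and threshold can be tuned without touching the downstream model.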
