no code implementations • 15 Jul 2023 • Organizers Of QueerInAI, Nathan Dennler, Anaelia Ovalle, Ashwin Singh, Luca Soldaini, Arjun Subramonian, Huy Tu, William Agnew, Avijit Ghosh, Kyra Yee, Irene Font Peradejordi, Zeerak Talat, Mayra Russo, Jess de Jesus de Pinho Pinhal
However, these auditing processes have been criticized for failing to integrate the knowledge of marginalized communities and to consider the power dynamics between auditors and those communities.
Harmful content detection models tend to have higher false positive rates for content from marginalized groups.
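The disparity above is measured per demographic group. A minimal sketch of how such a per-group false positive rate could be computed (the data and group labels here are hypothetical, not from the paper):

```python
# Hypothetical sketch: per-group false positive rates of a
# harmful-content classifier, to surface disparities across groups.

def false_positive_rate(labels, preds):
    """FPR = FP / (FP + TN), computed over the benign (label 0) examples."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(labels, preds, groups):
    """Compute the false positive rate separately for each group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = false_positive_rate([labels[i] for i in idx],
                                       [preds[i] for i in idx])
    return rates
```

A gap between the groups' rates (e.g. 0.5 vs. 1.0) is the kind of disparity the claim refers to.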
Traditionally, recommender systems operate by returning to a user a set of items, ranked in order of their estimated relevance to that user.
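The traditional setup described above can be sketched in a few lines: score every candidate item for a user, then return the top-k in descending order of estimated relevance. The `relevance` callable here is a hypothetical stand-in for a learned scoring model, not part of the original work:

```python
# Minimal sketch of classic top-k recommendation: rank candidate items
# by an (assumed) relevance model and keep the k highest-scoring ones.

def rank_items(user, items, relevance, k=10):
    """Return the k items with the highest estimated relevance for `user`."""
    scored = sorted(items, key=lambda item: relevance(user, item), reverse=True)
    return scored[:k]
```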
However, we demonstrate that formalized fairness metrics and quantitative analysis on their own are insufficient for capturing the risk of representational harm in automatic cropping.
Pre-training models on vast quantities of unlabeled data has emerged as an effective approach to improving accuracy on many NLP tasks.
Ranked #1 on Machine Translation on WMT2016 Romanian-English (using extra training data)
Previous work on neural noisy channel modeling relied on latent variable models that incrementally process the source and target sentence.
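As a rough illustration of the noisy channel idea (not the incremental latent-variable models the sentence refers to), candidate translations y for a source x can be reranked by Bayes' rule, log p(x | y) + log p(y), combining a channel model with a language model. Both scoring functions below are hypothetical stand-ins:

```python
# Hedged sketch of noisy-channel reranking: instead of the direct model
# p(y | x), pick the candidate y maximizing log p(x | y) + log p(y),
# where channel_logprob and lm_logprob are assumed pretrained scorers.

def noisy_channel_rerank(source, candidates, channel_logprob, lm_logprob):
    """Return the candidate maximizing log p(source | y) + log p(y)."""
    return max(candidates,
               key=lambda y: channel_logprob(source, y) + lm_logprob(y))
```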
This paper describes Facebook FAIR's submission to the WMT19 shared news translation task.
Ranked #1 on Machine Translation on WMT2019 English-German