no code implementations • NAACL (WOAH) 2022 • Alyssa Chvasta, Alyssa Lees, Jeffrey Sorensen, Lucy Vasserman, Nitesh Goyal
In an era of increasingly large pre-trained language models, knowledge distillation is a powerful tool for transferring information from a large model to a smaller one.
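To make the setup concrete, here is a minimal sketch of standard soft-target distillation in PyTorch (the Hinton-style recipe the paper builds on, not the paper's exact configuration; `temperature` and `alpha` are illustrative hyperparameters):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with the usual
    hard-label cross-entropy. Hyperparameter values are illustrative."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients match the hard loss
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```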
no code implementations • 1 May 2022 • Nitesh Goyal, Ian Kivlichan, Rachel Rosen, Lucy Vasserman
We then trained models on the annotations from each rater pool and compared the scores of these models on comments from several test sets.
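Schematically, that experiment looks like the following sketch, assuming a generic text classifier (the TF-IDF/logistic-regression pipeline and the `pools` structure are illustrative, not the models used in the paper):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_per_pool_models(pools):
    """`pools` maps a rater-pool name to (comments, toxicity_labels)."""
    models = {}
    for name, (comments, labels) in pools.items():
        model = make_pipeline(TfidfVectorizer(),
                              LogisticRegression(max_iter=1000))
        model.fit(comments, labels)
        models[name] = model
    return models

def compare_pool_scores(models, test_comments):
    """Score the same comments with each pool's model for comparison."""
    return {name: model.predict_proba(test_comments)[:, 1]
            for name, model in models.items()}
```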
no code implementations • 22 Feb 2022 • Alyssa Lees, Vinh Q. Tran, Yi Tay, Jeffrey Sorensen, Jai Gupta, Donald Metzler, Lucy Vasserman
As such, it is crucial to develop models that are effective across a diverse range of languages, usages, and styles.
1 code implementation • ACL (WOAH) 2021 • Ian D. Kivlichan, Zi Lin, Jeremiah Liu, Lucy Vasserman
Content moderation is often performed by a collaboration between humans and machine learning models.
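One simple form of that collaboration is to let the model auto-resolve comments it scores confidently and route the ambiguous middle band to human moderators. The sketch below illustrates the idea only; the thresholds are illustrative and this is not the paper's exact uncertainty-estimation method:

```python
import numpy as np

def route_for_review(toxicity_scores, low=0.2, high=0.8):
    """Auto-resolve confidently scored comments and flag the ambiguous
    middle band for human moderators. Thresholds are illustrative."""
    scores = np.asarray(toxicity_scores)
    auto_allow = scores < low
    auto_remove = scores > high
    needs_human_review = ~(auto_allow | auto_remove)
    return auto_allow, auto_remove, needs_human_review
```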
4 code implementations • 11 Mar 2019 • Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman
Unintended bias in Machine Learning can manifest as systemic differences in performance for different demographic groups, potentially compounding existing challenges to fairness in society at large.
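The paper's threshold-agnostic metrics (Subgroup AUC, BPSN AUC, and BNSP AUC) measure how well a model separates toxic from non-toxic comments within an identity subgroup and between the subgroup and the background data. A minimal sketch of those three metrics:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bias_aucs(labels, scores, in_subgroup):
    """Subgroup AUC, BPSN AUC, and BNSP AUC for one identity subgroup.
    Each mask must contain both positive and negative examples."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    g = np.asarray(in_subgroup, dtype=bool)
    pos, bg = labels == 1, ~g

    def auc(mask):
        return roc_auc_score(labels[mask], scores[mask])

    return {
        "subgroup_auc": auc(g),                    # separability within the subgroup
        "bpsn_auc": auc((bg & pos) | (g & ~pos)),  # background positives vs subgroup negatives
        "bnsp_auc": auc((bg & ~pos) | (g & pos)),  # background negatives vs subgroup positives
    }
```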
no code implementations • 5 Mar 2019 • Daniel Borkan, Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman
This report examines the previously introduced Pinned AUC metric and highlights some of its limitations.
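For reference, Pinned AUC (Dixon et al., 2018) scores a subgroup by computing AUC over an equal-size mix of subgroup examples and examples sampled from the full dataset. A rough sketch, with sampling details simplified:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def pinned_auc(labels, scores, in_subgroup, seed=0):
    """AUC over an equal-size mix of subgroup examples and examples
    sampled from the full dataset ("pinning" the subgroup at 50%)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    g = np.asarray(in_subgroup, dtype=bool)
    rng = np.random.default_rng(seed)
    sub_idx = np.flatnonzero(g)
    full_idx = rng.choice(len(labels), size=len(sub_idx), replace=False)
    idx = np.concatenate([sub_idx, full_idx])
    return roc_auc_score(labels[idx], scores[idx])
```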
12 code implementations • 5 Oct 2018 • Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru
Model cards disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.
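The paper proposes a fixed set of card sections; here is a minimal Python rendering of that skeleton (the section names come from the paper, while the dataclass itself is illustrative):

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Section headings proposed in the paper; contents are free-form
    text supplied by a model's developers."""
    model_details: str = ""
    intended_use: str = ""            # in-scope and out-of-scope uses
    factors: str = ""                 # e.g. demographic or environmental factors
    metrics: str = ""                 # how performance is measured
    evaluation_data: str = ""
    training_data: str = ""
    quantitative_analyses: str = ""   # disaggregated performance results
    ethical_considerations: str = ""
    caveats_and_recommendations: str = ""
```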