Search Results for author: Lucy Vasserman

Found 7 papers, 3 papers with code

Lost in Distillation: A Case Study in Toxicity Modeling

no code implementations • NAACL (WOAH) 2022 • Alyssa Chvasta, Alyssa Lees, Jeffrey Sorensen, Lucy Vasserman, Nitesh Goyal

In an era of increasingly large pre-trained language models, knowledge distillation is a powerful tool for transferring information from a large model to a smaller one.

Knowledge Distillation
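
The paper's excerpt above refers to knowledge distillation, i.e. training a small student model to match a large teacher. A minimal sketch of that idea (not the authors' code) is below; `teacher`, `student`, and the batch format are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, batch, optimizer, temperature=2.0):
    """One distillation update: the student matches the teacher's softened outputs."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(batch)      # soft targets from the large model
    student_logits = student(batch)

    # KL divergence between softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```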

Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation

no code implementations • 1 May 2022 • Nitesh Goyal, Ian Kivlichan, Rachel Rosen, Lucy Vasserman

Next, we trained models on the annotations from each of the different rater pools, and compared the scores of these models on comments from several test sets.
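
A rough sketch of the comparison the excerpt describes: score the same test comments with models trained on different rater pools and measure how much they disagree. This is not the authors' code; `models` is a hypothetical mapping from pool name to any object exposing a `score(comments)` function.

```python
import numpy as np

def compare_rater_pool_models(models, test_comments):
    """Return per-pool mean toxicity scores and the largest pairwise mean gap."""
    scores = {pool: np.asarray(m.score(test_comments)) for pool, m in models.items()}
    means = {pool: s.mean() for pool, s in scores.items()}
    pools = list(scores)
    max_gap = max(
        (np.abs(scores[a] - scores[b]).mean()
         for i, a in enumerate(pools) for b in pools[i + 1:]),
        default=0.0,
    )
    return means, max_gap
```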

Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification

4 code implementations • 11 Mar 2019 • Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman

Unintended bias in Machine Learning can manifest as systemic differences in performance for different demographic groups, potentially compounding existing challenges to fairness in society at large.

BIG-bench Machine Learning • Fairness • +2
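
A hedged sketch of the kind of per-identity bias metric this paper proposes, such as subgroup AUC and BPSN AUC; the function names and input layout here are assumptions for illustration, not the released implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc(labels, scores, subgroup_mask):
    """AUC restricted to comments that mention the identity subgroup."""
    mask = np.asarray(subgroup_mask, dtype=bool)
    return roc_auc_score(np.asarray(labels)[mask], np.asarray(scores)[mask])

def bpsn_auc(labels, scores, subgroup_mask):
    """Background-positive, subgroup-negative AUC: non-toxic subgroup comments
    vs. toxic comments outside the subgroup."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    mask = np.asarray(subgroup_mask, dtype=bool)
    keep = (mask & (labels == 0)) | (~mask & (labels == 1))
    return roc_auc_score(labels[keep], scores[keep])
```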

Model Cards for Model Reporting

12 code implementations • 5 Oct 2018 • Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru

Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.

BIG-bench Machine Learning
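
A minimal, hypothetical structure capturing the sections a model card covers (intended use, evaluation procedure, and so on); the field names are assumptions sketched from the paper's description, not a fixed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_details: str
    intended_use: str                   # contexts the model is meant for
    factors: str                        # e.g. demographic groups, environments
    metrics: dict = field(default_factory=dict)   # evaluation procedure and results
    evaluation_data: str = ""
    training_data: str = ""
    ethical_considerations: str = ""
    caveats_and_recommendations: str = ""
```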
