no code implementations • 20 Jul 2022 • Mark Mazumder, Colby Banbury, Xiaozhe Yao, Bojan Karlaš, William Gaviria Rojas, Sudnya Diamos, Greg Diamos, Lynn He, Douwe Kiela, David Jurado, David Kanter, Rafael Mosquera, Juan Ciro, Lora Aroyo, Bilge Acun, Sabri Eyuboglu, Amirata Ghorbani, Emmett Goodman, Tariq Kane, Christine R. Kirkpatrick, Tzu-Sheng Kuo, Jonas Mueller, Tristan Thrush, Joaquin Vanschoren, Margaret Warren, Adina Williams, Serena Yeung, Newsha Ardalani, Praveen Paritosh, Ce Zhang, James Zou, Carole-Jean Wu, Cody Coleman, Andrew Ng, Peter Mattson, Vijay Janapa Reddi
Machine learning (ML) research has generally focused on models rather than datasets, and the most prominent datasets are used for common ML tasks without regard to the breadth, difficulty, or faithfulness of these datasets to the underlying problem.
no code implementations • ACL 2022 • Ka Wong, Praveen Paritosh
In such cases, data reliability is under-reported, and the proposed k-rater reliability (kRR) should be used as the correct measure of data reliability for aggregated datasets.
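The paper's exact kRR estimator is not reproduced in this snippet. As a hedged illustration of the underlying intuition, that a label aggregated over k raters is more reliable than single-rater IRR alone suggests, the sketch below uses the classical Spearman-Brown prophecy formula from psychometrics. The function name spearman_brown_krr is ours, not the paper's.

```python
# Illustrative sketch only: the Spearman-Brown prophecy formula is a
# classical psychometric approximation for the reliability of a label
# aggregated over k raters; it is not necessarily the estimator the
# kRR paper proposes.

def spearman_brown_krr(irr: float, k: int) -> float:
    """Predict the reliability of a k-rater aggregate from single-rater IRR."""
    return (k * irr) / (1 + (k - 1) * irr)

# Example: a modest single-rater IRR of 0.5 rises sharply under aggregation.
for k in (1, 3, 5, 9):
    print(f"k={k}: predicted kRR = {spearman_brown_krr(0.5, k):.3f}")
```

Under this approximation, an IRR of 0.5 already implies a 5-rater aggregate reliability above 0.8, which is why reporting raw IRR for an aggregated dataset under-reports its reliability.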
no code implementations • 19 Nov 2021 • Lora Aroyo, Matthew Lease, Praveen Paritosh, Mike Schaekermann
The efficacy of machine learning (ML) models depends on both algorithms and data.
no code implementations • ACL 2021 • Ka Wong, Praveen Paritosh, Lora Aroyo
When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012).
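For readers unfamiliar with IRR, here is a minimal self-contained sketch of one standard IRR statistic, Cohen's kappa for two raters. The function and the example labels are illustrative, not taken from the paper.

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa for two raters over the same items: chance-corrected agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the raters' label distributions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if p_e == 1:
        return 1.0  # degenerate case: both raters used a single identical label
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations from two raters on six items.
labels_a = ["pos", "pos", "neg", "neg", "pos", "neg"]
labels_b = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(f"kappa = {cohens_kappa(labels_a, labels_b):.3f}")
```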
no code implementations • 11 Jun 2021 • Ka Wong, Praveen Paritosh, Lora Aroyo
We present a new approach to interpreting IRR that is empirical and contextualized.
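The snippet above does not spell out the paper's method, so the following is only a loose, hedged illustration of one way to make IRR interpretation empirical: a percentile-bootstrap confidence interval that places a kappa point estimate in the context of its sampling variability. It reuses cohens_kappa from the sketch above; the helper name bootstrap_kappa_ci is ours, not the paper's.

```python
import random

def bootstrap_kappa_ci(rater_a, rater_b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for Cohen's kappa (uses cohens_kappa above)."""
    rng = random.Random(seed)
    n = len(rater_a)
    stats = []
    for _ in range(n_boot):
        # Resample items with replacement, keeping rater pairs aligned.
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(cohens_kappa([rater_a[i] for i in idx],
                                  [rater_b[i] for i in idx]))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_kappa_ci(labels_a, labels_b)
print(f"95% CI for kappa: [{lo:.3f}, {hi:.3f}]")
```

A wide interval is itself informative: it signals that a single kappa value should not be read against a fixed rule-of-thumb cutoff without considering the data it was computed on.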
no code implementations • 5 Nov 2019 • Chris Welty, Praveen Paritosh, Lora Aroyo
In this paper we present the first steps towards hardening the science of measuring AI systems by adopting metrology, the science of measurement and its application, and applying it to human (crowd) powered evaluations.
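A core habit of metrology is to report every measurement together with its uncertainty. As a minimal sketch of that spirit applied to crowd-powered evaluation (our illustration, not the paper's protocol), the following reports a crowd-derived quality score with a standard error instead of a bare point estimate.

```python
import math

def measure_with_uncertainty(judgments: list[float]) -> tuple[float, float]:
    """Report a crowd-powered measurement as (estimate, standard uncertainty),
    in the spirit of metrology, instead of a bare point score."""
    n = len(judgments)
    mean = sum(judgments) / n
    var = sum((x - mean) ** 2 for x in judgments) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)  # standard error of the mean

# Hypothetical crowd judgments (1 = system output judged correct).
scores = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
est, u = measure_with_uncertainty(scores)
print(f"system quality = {est:.2f} ± {u:.2f}")
```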