Search Results for author: Ka Wong

Found 3 papers, 0 papers with code

k-Rater Reliability: The Correct Unit of Reliability for Aggregated Human Annotations

no code implementations · ACL 2022 · Ka Wong, Praveen Paritosh

In these instances, data reliability is under-reported, and the proposed k-rater reliability (kRR) should be used as the correct measure of data reliability for aggregated datasets.

Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability

no code implementations · ACL 2021 · Ka Wong, Praveen Paritosh, Lora Aroyo

When collecting annotations and labeled data from humans, the standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012).

Task: Benchmarking
