no code implementations • NLPerspectives (LREC) 2022 • Christopher Homan, Tharindu Cyril Weerasooriya, Lora Aroyo, Chris Welty
Annotator disagreement is often dismissed as noise or the result of poor annotation process quality.
1 code implementation • 7 Jul 2023 • Tharindu Cyril Weerasooriya, Sarah Luger, Saloni Poddar, Ashiqur R. KhudaBukhsh, Christopher M. Homan
Human-annotated data plays a critical role in the fairness of AI systems, including those that make life-altering decisions or moderate human-created web/social media content.
1 code implementation • Findings of the Association for Computational Linguistics: ACL 2023 • Tharindu Cyril Weerasooriya, Alexander Ororbia, Raj Bhensadadia, Ashiqur KhudaBukhsh, Christopher Homan
Annotator disagreement is common whenever human judgment is needed for supervised learning.
2 code implementations • 29 Jan 2023 • Tharindu Cyril Weerasooriya, Sujan Dutta, Tharindu Ranasinghe, Marcos Zampieri, Christopher M. Homan, Ashiqur R. KhudaBukhsh
For (2), we introduce a first-of-its-kind dataset of vicarious offense.
no code implementations • NLPerspectives (LREC) 2022 • Tharindu Cyril Weerasooriya, Alexander G. Ororbia, Christopher M. Homan
We propose a fully Bayesian framework for learning ground-truth labels from noisy annotators.
1 code implementation • 16 Mar 2020 • Tharindu Cyril Weerasooriya, Tong Liu, Christopher M. Homan
Supervised machine learning often requires human-annotated data.