Search Results for author: Praveen Paritosh

Found 9 papers, 2 papers with code

Modeling subjectivity (by Mimicking Annotator Annotation) in toxic comment identification across diverse communities

no code implementations 1 Nov 2023 Senjuti Dutta, Sid Mittal, Sherol Chen, Deepak Ramachandran, Ravi Rajakumar, Ian Kivlichan, Sunny Mak, Alena Butryna, Praveen Paritosh

The prevalence and impact of toxic discussions online have made content moderation crucial. Automated systems can play a vital role in identifying toxicity and reducing the reliance on human moderation. Nevertheless, identifying toxic comments for diverse communities continues to present challenges that are addressed in this paper. The two-part goal of this study is to (1) identify intuitive variances from annotator disagreement using quantitative analysis and (2) model the subjectivity of these viewpoints. To achieve our goal, we published a new dataset (https://github.com/XXX) with expert annotators' annotations and used two other public datasets to identify the subjectivity of toxicity. Then, leveraging a Large Language Model (LLM), we evaluate the model's ability to mimic diverse viewpoints on toxicity by varying the size of the training data and by testing both on the same set of annotators used during model training and on a separate, held-out set of annotators. We conclude that subjectivity is evident across all annotator groups, demonstrating the shortcomings of majority-rule voting.

Language Modelling · Large Language Model
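
Illustration only (not from the paper): a minimal Python sketch, with hypothetical comments and labels, contrasting per-annotator toxicity labels with the majority-rule label and showing how much disagreement the vote hides.

from collections import Counter

# Hypothetical per-annotator binary toxicity labels (1 = toxic, 0 = not toxic).
annotations = {
    "comment_1": [1, 1, 0, 1, 0],
    "comment_2": [0, 0, 0, 1, 0],
    "comment_3": [1, 0, 1, 0, 1],
}

for comment_id, labels in annotations.items():
    counts = Counter(labels)
    majority_label, majority_count = counts.most_common(1)[0]
    overruled = 1 - majority_count / len(labels)  # share of annotators the vote overrules
    print(f"{comment_id}: majority={majority_label}, overruled share={overruled:.2f}")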

k-Rater Reliability: The Correct Unit of Reliability for Aggregated Human Annotations

no code implementations ACL 2022 Ka Wong, Praveen Paritosh

When only the inter-rater reliability of individual annotations is reported for datasets that are released as aggregated labels, the data reliability is under-reported, and the proposed k-rater reliability (kRR) should be used as the correct data reliability for aggregated datasets.
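
As a rough illustration of the idea only (not the paper's exact definition), the Python sketch below estimates reliability at the level of k-rater aggregates: it compares majority votes from two disjoint groups of k raters on synthetic data, using simple percent agreement in place of whatever coefficient kRR is actually built on.

import random

random.seed(0)

def majority(labels):
    # Majority vote over binary labels; ties resolve to 1 for simplicity.
    return int(sum(labels) >= len(labels) / 2)

# Hypothetical item-by-rater matrix: 50 items, 10 binary labels each.
ratings = [[1 if random.random() < 0.6 else 0 for _ in range(10)] for _ in range(50)]

def k_rater_agreement(ratings, k, trials=200):
    # Average agreement between majority votes of two disjoint groups of k raters.
    agreements = []
    for _ in range(trials):
        matches = 0
        for item in ratings:
            raters = random.sample(range(len(item)), 2 * k)  # pick two disjoint groups
            group_a = majority([item[i] for i in raters[:k]])
            group_b = majority([item[i] for i in raters[k:]])
            matches += group_a == group_b
        agreements.append(matches / len(ratings))
    return sum(agreements) / len(agreements)

for k in (1, 3, 5):
    print(f"k={k}: agreement between aggregated groups = {k_rater_agreement(ratings, k):.3f}")

In this synthetic setup, aggregated votes from larger groups agree more often than individual raters do, which is the sense in which individual-level IRR under-reports the reliability of aggregated data.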

Data Excellence for AI: Why Should You Care

no code implementations 19 Nov 2021 Lora Aroyo, Matthew Lease, Praveen Paritosh, Mike Schaekermann

The efficacy of machine learning (ML) models depends on both algorithms and data.

Cross-replication Reliability - An Empirical Approach to Interpreting Inter-rater Reliability

no code implementations ACL 2021 Ka Wong, Praveen Paritosh, Lora Aroyo

When collecting annotations and labeled data from humans, a standard practice is to use inter-rater reliability (IRR) as a measure of data goodness (Hallgren, 2012).

Benchmarking
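
For background on the IRR baseline referenced above (and not the paper's cross-replication reliability, which compares independent replications of the whole data-collection process), here is a short Python sketch computing Cohen's kappa for two hypothetical raters with scikit-learn.

from sklearn.metrics import cohen_kappa_score

# Hypothetical binary labels from two raters over the same ten items.
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 0, 1, 1, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement
print(f"Cohen's kappa between the two raters: {kappa:.3f}")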

Metrology for AI: From Benchmarks to Instruments

1 code implementation 5 Nov 2019 Chris Welty, Praveen Paritosh, Lora Aroyo

In this paper we present the first steps towards hardening the science of measuring AI systems by adopting metrology, the science of measurement and its application, and applying it to human (crowd)-powered evaluations.

Word Similarity
