Search Results for author: Sherol Chen

Found 6 papers, 0 papers with code

Modeling subjectivity (by Mimicking Annotator Annotation) in toxic comment identification across diverse communities

no code implementations • 1 Nov 2023 • Senjuti Dutta, Sid Mittal, Sherol Chen, Deepak Ramachandran, Ravi Rajakumar, Ian Kivlichan, Sunny Mak, Alena Butryna, Praveen Paritosh

The prevalence and impact of toxic discussions online have made content moderation crucial. Automated systems can play a vital role in identifying toxicity and reducing reliance on human moderation. Nevertheless, identifying toxic comments across diverse communities continues to present challenges, which this paper addresses. The two-part goal of this study is to (1) identify intuitive variances from annotator disagreement using quantitative analysis and (2) model the subjectivity of these viewpoints. To achieve our goal, we published a new dataset\footnote{\url{https://github.com/XXX}} with expert annotators' annotations and used two other public datasets to identify the subjectivity of toxicity. Then, leveraging a Large Language Model (LLM), we evaluate the model's ability to mimic diverse viewpoints on toxicity by varying the size of the training data and by testing both on the same set of annotators used during model training and on a separate set of annotators. We conclude that subjectivity is evident across all annotator groups, demonstrating the shortcomings of majority-rule voting.

Language Modelling, Large Language Model
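
A minimal sketch of the kind of setup the abstract describes, contrasting majority-rule labels with per-annotator modeling. The comments, annotator IDs, and prompt format below are hypothetical stand-ins, not the paper's dataset or code.

```python
# Sketch: majority-vote labels vs. per-annotator training examples
# for subjective toxicity judgments (illustrative data only).
from collections import Counter

# Hypothetical rows: (comment, {annotator_id: toxic?})
rows = [
    ("you are brilliant", {"a1": 0, "a2": 0, "a3": 0}),
    ("go back where you came from", {"a1": 1, "a2": 1, "a3": 0}),
    ("that take is garbage", {"a1": 1, "a2": 0, "a3": 0}),
]

# Majority-rule voting collapses annotator disagreement into one label.
for text, votes in rows:
    majority = Counter(votes.values()).most_common(1)[0][0]
    print(f"majority label for {text!r}: {majority}")

# Per-annotator examples keep the subjectivity: the model is conditioned
# on which annotator's viewpoint it should mimic, e.g. via a prompt prefix.
examples = [
    (f"[annotator={aid}] Is this comment toxic? {text}", label)
    for text, votes in rows
    for aid, label in votes.items()
]
for prompt, label in examples[:3]:
    print(prompt, "->", label)
```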

Leveraging Contextual Counterfactuals Toward Belief Calibration

no code implementations • 13 Jul 2023 • Qiuyi Zhang, Michael S. Lee, Sherol Chen

Beliefs and values are increasingly being incorporated into our AI systems through alignment processes, such as carefully curating data collection principles or regularizing the loss function used for training.

Counterfactual, Counterfactual Reasoning
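
The snippet names loss regularization as one alignment mechanism; the generic pattern is a task objective plus a weighted penalty term. The sketch below illustrates that pattern only; the penalty and its weight are assumptions, not this paper's method.

```python
# Generic sketch: alignment as an extra regularization term in the loss.
def training_loss(task_loss: float, alignment_penalty: float, lam: float = 0.1) -> float:
    """Total objective = task loss + weighted alignment regularizer."""
    return task_loss + lam * alignment_penalty

# Toy usage: a model update would minimize this combined objective.
print(training_loss(task_loss=2.3, alignment_penalty=0.7))  # 2.37
```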

Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool

no code implementations • EACL 2021 • Ben Swanson, Kory Mathewson, Ben Pietrzak, Sherol Chen, Monica Dinalescu

Few-shot learning with large language models has the potential to give individuals without formal machine learning training access to a wide range of text-to-text models.

Few-Shot Learning, Language Modelling, +1
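
Few-shot prompting of a text-to-text model generally works by concatenating a handful of input-to-output demonstrations ahead of the new input. The sketch below shows that pattern; the premise/story format, example pairs, and the absence of a real model call are assumptions, not Story Centaur's actual interface.

```python
# Sketch: building a few-shot prompt for a text-to-text model
# (illustrative format, not the tool's real API).
FEW_SHOT_EXAMPLES = [
    ("A dragon guards a library.",
     "Each night the dragon reads one book aloud to the empty stacks."),
    ("A clock runs backwards.",
     "The town ages in reverse, un-forgetting everything it lost."),
]

def build_prompt(examples, new_input):
    """Concatenate input -> output demonstrations, then the new input."""
    parts = [f"Premise: {i}\nStory: {o}" for i, o in examples]
    parts.append(f"Premise: {new_input}\nStory:")
    return "\n\n".join(parts)

prompt = build_prompt(FEW_SHOT_EXAMPLES, "A lighthouse keeper finds a door in the sea.")
print(prompt)  # send this string to any LLM completion endpoint
```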
