Search Results for author: Sagar Kumar

Found 2 papers, 1 paper with code

How We Define Harm Impacts Data Annotations: Explaining How Annotators Distinguish Hateful, Offensive, and Toxic Comments

no code implementations · 12 Sep 2023 · Angela Schöpke-Gonzalez, Siqi Wu, Sagar Kumar, Paul J. Resnick, Libby Hemphill

In designing instructions for annotation tasks to generate training data for these algorithms, researchers often treat the harm concepts that we train algorithms to detect - 'hateful', 'offensive', 'toxic', 'racist', 'sexist', etc.

Do LLMs Understand Social Knowledge? Evaluating the Sociability of Large Language Models with SocKET Benchmark

1 code implementation · 24 May 2023 · MinJe Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, David Jurgens

Large language models (LLMs) have been shown to perform well at a variety of syntactic, discourse, and reasoning tasks.
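
For a concrete sense of what probing an LLM's social knowledge looks like, here is a minimal zero-shot sketch in the spirit of the SocKET tasks. The model checkpoint, candidate labels, and example comment are illustrative assumptions, not the benchmark's actual data or evaluation protocol; the official code implementation referenced above covers the full task suite.

    # Minimal sketch: zero-shot probe of a social-language judgment (sarcasm),
    # illustrative of the kind of task SocKET-style benchmarks evaluate.
    # Model, labels, and example text are assumptions, not benchmark artifacts.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    comment = "Sure, because showing up an hour late is exactly what 'reliable' means."
    labels = ["sarcastic", "sincere"]

    result = classifier(comment, labels)
    # The pipeline returns candidate labels sorted by score; take the top prediction.
    print(result["labels"][0], round(result["scores"][0], 3))
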
