Search Results for author: Libby Hemphill

Found 14 papers, 5 papers with code

How We Define Harm Impacts Data Annotations: Explaining How Annotators Distinguish Hateful, Offensive, and Toxic Comments

no code implementations • 12 Sep 2023 • Angela Schöpke-Gonzalez, Siqi Wu, Sagar Kumar, Paul J. Resnick, Libby Hemphill

In designing instructions for annotation tasks to generate training data for these algorithms, researchers often treat the harm concepts that we train algorithms to detect - 'hateful', 'offensive', 'toxic', 'racist', 'sexist', etc. ...

Investigating disaster response through social media data and the Susceptible-Infected-Recovered (SIR) model: A case study of 2020 Western U.S. wildfire season

no code implementations • 10 Aug 2023 • Zihui Ma, Lingyao Li, Libby Hemphill, Gregory B. Baecher, Yubai Yuan

Our study details how the SIR model and topic modeling using social media data can provide decision-makers with a quantitative approach to measure disaster response and support their decision-making processes.

Decision Making • Disaster Response
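
The abstract above leans on the standard SIR formulation, so a minimal sketch may help: the Python snippet below integrates the textbook SIR equations with SciPy. The population sizes, rates, and the reading of "infected" as users actively posting about the wildfires are illustrative assumptions, not the paper's fitted values.

```python
# Textbook SIR dynamics, read here as: S = users not yet posting about the
# disaster, I = users actively posting, R = users who have stopped posting.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    n = s + i + r
    ds = -beta * s * i / n              # users picking up the topic
    di = beta * s * i / n - gamma * i   # net change in active posters
    dr = gamma * i                      # users who stop posting
    return ds, di, dr

t = np.linspace(0, 60, 241)             # days since the event
y0 = (99_000, 1_000, 0)                 # assumed initial S, I, R counts
beta, gamma = 0.35, 0.10                # assumed spread and drop-off rates
s, i, r = odeint(sir, y0, t, args=(beta, gamma)).T
print(f"peak attention around day {t[i.argmax()]:.0f} with about {i.max():,.0f} active users")
```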

DataChat: Prototyping a Conversational Agent for Dataset Search and Visualization

1 code implementation • 26 May 2023 • Lizhou Fan, Sara Lafia, Lingyao Li, Fangyuan Yang, Libby Hemphill

Data users need relevant context and research expertise to effectively search for and identify relevant datasets.

Chatbot • Language Modelling +1

"HOT" ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media

1 code implementation • 20 Apr 2023 • Lingyao Li, Lizhou Fan, Shubham Atreja, Libby Hemphill

To investigate this potential, we used ChatGPT and compared its performance with MTurker annotations for three frequently discussed concepts related to harmful content: Hateful, Offensive, and Toxic (HOT).
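
As a rough illustration of the zero-shot HOT annotation described above, the sketch below asks a chat model to rate a single comment as hateful, offensive, and toxic through the OpenAI Python client. The model name, prompt wording, and JSON output format are assumptions for illustration, not the paper's exact setup.

```python
# Minimal HOT-rating sketch; prompt and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Rate the following comment on three dimensions - hateful, offensive, toxic - "
    "answering 'yes' or 'no' for each, as JSON with keys 'hateful', 'offensive', 'toxic'.\n\n"
    "Comment: {comment}"
)

def rate_comment(comment: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name, not necessarily the one used in the paper
        messages=[{"role": "user", "content": PROMPT.format(comment=comment)}],
        temperature=0,
    )
    return response.choices[0].message.content

print(rate_comment("replace this placeholder with a comment to be rated"))
```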

A Bibliometric Review of Large Language Models Research from 2017 to 2023

no code implementations • 3 Apr 2023 • Lizhou Fan, Lingyao Li, Zihui Ma, Sanggyu Lee, Huizi Yu, Libby Hemphill

Large language models (LLMs) are a class of language models that have demonstrated outstanding performance across a range of natural language processing (NLP) tasks. They have become a highly sought-after research area because of their ability to generate human-like language and their potential to revolutionize science and technology.


A Natural Language Processing Pipeline for Detecting Informal Data References in Academic Literature

no code implementations • 23 May 2022 • Sara Lafia, Lizhou Fan, Libby Hemphill

The pipeline increases recall for literature to review for inclusion in data-related collections of publications and makes it possible to detect informal data references at scale.

Named Entity Recognition +1
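
To make "informal data references" concrete, here is a minimal sketch of one plausible stage of such a pipeline: a cue-phrase filter that flags sentences likely to mention research data without a formal citation. The cue list, naive sentence splitter, and sample text are assumptions for illustration, not the pipeline's actual components.

```python
import re

# Cue phrases suggesting an informal reference to research data
# (assumed for illustration; not the pipeline's actual cue list).
CUES = re.compile(
    r"\b(data (?:from|come from|were drawn from)|survey of|wave of|respondents|panel study)\b",
    re.IGNORECASE,
)

def candidate_data_references(text: str) -> list[str]:
    """Return sentences that look like informal mentions of research data."""
    sentences = re.split(r"(?<=[.!?])\s+", text)  # naive sentence splitter
    return [s for s in sentences if CUES.search(s)]

sample = (
    "We analyze data from the 2018 wave of a national panel study. "
    "Prior work focused on theoretical models of participation."
)
print(candidate_data_references(sample))
```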

Leaders or Followers? A Temporal Analysis of Tweets from IRA Trolls

no code implementations • 4 Apr 2022 • Siva K. Balasubramanian, Mustafa Bilgic, Aron Culotta, Libby Hemphill, Anita Nikolich, Matthew A. Shapiro

The Internet Research Agency (IRA) influences online political conversations in the United States, exacerbating existing partisan divides and sowing discord.

Librarian-in-the-Loop: A Natural Language Processing Paradigm for Detecting Informal Mentions of Research Data in Academic Literature

no code implementations • 10 Mar 2022 • Lizhou Fan, Sara Lafia, David Bleckley, Elizabeth Moss, Andrea Thomer, Libby Hemphill

The librarian-in-the-loop paradigm is centered in the data work performed by ICPSR librarians, supporting broader efforts to build a more comprehensive bibliography of data-related literature that reflects the scholarly communities of research data users.

Leveraging Machine Learning to Detect Data Curation Activities

no code implementations • 30 Apr 2021 • Sara Lafia, Andrea Thomer, David Bleckley, Dharma Akmon, Libby Hemphill

This paper contributes: 1) a schema of data curation activities; 2) a computational model for identifying curation actions in work log descriptions; and 3) an analysis of frequent data curation activities at ICPSR over time.

BIG-bench Machine Learning • Decision Making +1
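
One way to picture the computational model mentioned above is as a plain text classifier over work-log descriptions. The toy logs, activity labels, and TF-IDF plus logistic regression setup below are assumptions for illustration, not ICPSR's schema, data, or the paper's actual model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy work-log descriptions with assumed curation-activity labels.
logs = [
    "recoded missing values and relabeled variables",
    "wrote codebook entries for the new variables",
    "converted files from SPSS to CSV",
    "anonymized respondent identifiers before release",
]
labels = ["data transformation", "documentation", "format conversion", "disclosure review"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(logs, labels)

print(model.predict(["relabeled variables in the public-use file"]))
```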

Two Computational Models for Analyzing Political Attention in Social Media

no code implementations • 17 Sep 2019 • Libby Hemphill, Angela M. Schöpke-Gonzalez

Understanding how political attention is divided and over what subjects is crucial for research on areas such as agenda setting, framing, and political rhetoric.


A Just and Comprehensive Strategy for Using NLP to Address Online Abuse

no code implementations • ACL 2019 • David Jurgens, Eshwar Chandrasekharan, Libby Hemphill

Online abusive behavior affects millions and the NLP community has attempted to mitigate this problem by developing technologies to detect abuse.

Position

Still out there: Modeling and Identifying Russian Troll Accounts on Twitter

2 code implementations • 31 Jan 2019 • Jane Im, Eshwar Chandrasekharan, Jackson Sargent, Paige Lighthammer, Taylor Denby, Ankit Bhargava, Libby Hemphill, David Jurgens, Eric Gilbert

In this work, we: 1) develop machine learning models that predict whether a Twitter account is a Russian troll within a set of 170K control accounts; and, 2) demonstrate that it is possible to use this model to find active accounts on Twitter still likely acting on behalf of the Russian state.

Social and Information Networks • Computers and Society
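
A minimal sketch of the troll-versus-control classification task described above, using a handful of per-account behavioral features. The feature names, toy values, and random-forest choice are assumptions for illustration, not the paper's feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy per-account features: tweets/day, retweet ratio, hashtags/tweet, account age (days).
X = np.array([
    [55.0, 0.92, 3.1,  210],
    [60.0, 0.88, 2.7,  150],
    [48.0, 0.85, 2.9,  300],
    [ 4.0, 0.20, 0.4, 2900],
    [ 2.5, 0.15, 0.2, 3500],
    [ 3.0, 0.25, 0.5, 2600],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = troll, 0 = control (labels are illustrative)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())
```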

Forecasting the presence and intensity of hostility on Instagram using linguistic and social features

1 code implementation • 18 Apr 2018 • Ping Liu, Joshua Guberman, Libby Hemphill, Aron Culotta

Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities.

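A minimal sketch of forecasting whether a thread turns hostile from early linguistic and social signals, in the spirit of the study above. The specific features, toy data, and TF-IDF plus logistic regression pipeline are assumptions for illustration, not the paper's feature set or model.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy threads combining early comment text with social features (all values illustrative).
threads = pd.DataFrame({
    "early_comments": [
        "nobody asked for this, just stop posting",
        "great photo, love the colors",
        "delete this garbage right now",
        "congrats on the launch, well deserved",
    ],
    "follower_count": [120, 5400, 80, 9800],
    "prior_hostile_threads": [3, 0, 5, 0],
})
turned_hostile = [1, 0, 1, 0]  # 1 = thread later became hostile

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "early_comments"),
    ("social", "passthrough", ["follower_count", "prior_hostile_threads"]),
])
model = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])
model.fit(threads, turned_hostile)
print(model.predict_proba(threads)[:, 1])
```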
