no code implementations • 21 Aug 2024 • Lizhou Fan, Lingyao Li, Libby Hemphill
The 2022 Mpox outbreak, initially termed "Monkeypox" but subsequently renamed to mitigate associated stigmas and societal concerns, serves as a poignant backdrop to this issue.
no code implementations • 17 Jun 2024 • Shubham Atreja, Joshua Ashkinaze, Lingyao Li, Julia Mendelsohn, Libby Hemphill
Manually annotating data for computational social science tasks can be costly, time-consuming, and emotionally draining.
1 code implementation • 28 Nov 2023 • Wenyue Hua, Lizhou Fan, Lingyao Li, Kai Mei, Jianchao Ji, Yingqiang Ge, Libby Hemphill, Yongfeng Zhang
Can we avoid wars at the crossroads of history?
no code implementations • 12 Sep 2023 • Angela Schöpke-Gonzalez, Siqi Wu, Sagar Kumar, Paul J. Resnick, Libby Hemphill
In designing instructions for annotation tasks to generate training data for these algorithms, researchers often treat the harm concepts that we train algorithms to detect - 'hateful', 'offensive', 'toxic', 'racist', 'sexist', etc.
no code implementations • 10 Aug 2023 • Zihui Ma, Lingyao Li, Libby Hemphill, Gregory B. Baecher, Yubai Yuan
Our study details how the SIR model and topic modeling using social media data can provide decision-makers with a quantitative approach to measure disaster response and support their decision-making processes.
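A minimal sketch of the kind of SIR-style fit to daily social media activity that the abstract describes is shown below; the synthetic daily_counts series, the assumed population size N, and the parameter values are illustrative assumptions, not the study's data or code.

```python
# Illustrative sketch: fitting an SIR model to daily social media post counts.
# The data and parameter values below are made up for demonstration only.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

daily_counts = np.array([5, 12, 30, 70, 120, 150, 140, 110, 80, 50, 30, 18, 10])  # hypothetical posts/day
t = np.arange(len(daily_counts))
N = 1000.0  # assumed size of the susceptible posting population

def sir(y, t, beta, gamma):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

def loss(params):
    beta, gamma = params
    y0 = (N - daily_counts[0], daily_counts[0], 0.0)
    sol = odeint(sir, y0, t, args=(beta, gamma))
    return np.sum((sol[:, 1] - daily_counts) ** 2)  # compare the "infected" curve to observed activity

fit = minimize(loss, x0=(0.5, 0.2), bounds=((1e-4, 5), (1e-4, 5)))
beta_hat, gamma_hat = fit.x
print(f"beta={beta_hat:.3f}, gamma={gamma_hat:.3f}, R0={beta_hat / gamma_hat:.2f}")
```

The fitted transmission and recovery rates summarize how quickly attention to a disaster spreads and decays, which is the quantitative signal the abstract points to.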
1 code implementation • 26 May 2023 • Lizhou Fan, Sara Lafia, Lingyao Li, Fangyuan Yang, Libby Hemphill
Data users need relevant context and research expertise to effectively search for and identify relevant datasets.
1 code implementation • 20 Apr 2023 • Lingyao Li, Lizhou Fan, Shubham Atreja, Libby Hemphill
To investigate this potential, we used ChatGPT and compared its performance with MTurker annotations for three frequently discussed concepts related to harmful content: Hateful, Offensive, and Toxic (HOT).
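Below is a hedged sketch of the comparison workflow: prompting an LLM to label a comment for the three HOT concepts and measuring agreement with crowd labels. The model name, prompt wording, and example data are assumptions, not the paper's actual protocol.

```python
# Illustrative sketch: asking an LLM for Hateful/Offensive/Toxic (HOT) labels and
# comparing them with MTurk-style crowd labels. Prompt, model, and data are assumed.
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Label the following comment for each of: hateful, offensive, toxic. "
          "Answer with three yes/no values separated by commas.\n\nComment: {text}")

def hot_labels(text: str) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study used ChatGPT
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0,
    )
    return [w.strip().lower() for w in resp.choices[0].message.content.split(",")]

comments = ["example comment 1", "example comment 2"]  # placeholder texts
mturk_toxic = [1, 0]                                   # placeholder crowd labels
llm_toxic = [1 if hot_labels(c)[2] == "yes" else 0 for c in comments]
print("Cohen's kappa (toxic):", cohen_kappa_score(mturk_toxic, llm_toxic))
```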
no code implementations • 3 Apr 2023 • Lizhou Fan, Lingyao Li, Zihui Ma, Sanggyu Lee, Huizi Yu, Libby Hemphill

Large language models (LLMs) have demonstrated outstanding performance across a range of natural language processing (NLP) tasks and have become a highly sought-after research area because of their ability to generate human-like language and their potential to revolutionize science and technology.
no code implementations • 23 May 2022 • Sara Lafia, Lizhou Fan, Libby Hemphill
The pipeline increases recall for literature to review for inclusion in data-related collections of publications and makes it possible to detect informal data references at scale.
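As a naive stand-in for that pipeline, the sketch below flags informal dataset references in paper text with simple pattern matching; the patterns and the example sentence are assumptions, and the actual pipeline is a learned model rather than rules.

```python
# Illustrative sketch: flagging informal dataset references in text with regular
# expressions. Patterns and the example sentence are assumptions for demonstration.
import re

DATA_PATTERNS = [
    r"\bdata (?:from|collected by|provided by) the [A-Z][\w\s]+",
    r"\b[A-Z][A-Za-z]+ (?:Study|Survey|Panel)\b",
]

def find_informal_references(text: str) -> list[str]:
    hits = []
    for pattern in DATA_PATTERNS:
        hits.extend(m.group(0).strip() for m in re.finditer(pattern, text))
    return hits

sentence = "We analyze data from the Panel Study of Income Dynamics and the General Social Survey."
print(find_informal_references(sentence))
```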
no code implementations • 4 Apr 2022 • Siva K. Balasubramanian, Mustafa Bilgic, Aron Culotta, Libby Hemphill, Anita Nikolich, Matthew A. Shapiro
The Internet Research Agency (IRA) influences online political conversations in the United States, exacerbating existing partisan divides and sowing discord.
no code implementations • 10 Mar 2022 • Lizhou Fan, Sara Lafia, David Bleckley, Elizabeth Moss, Andrea Thomer, Libby Hemphill
The librarian-in-the-loop paradigm is centered on the data work performed by ICPSR librarians and supports broader efforts to build a more comprehensive bibliography of data-related literature that reflects the scholarly communities of research data users.
no code implementations • 30 Apr 2021 • Sara Lafia, Andrea Thomer, David Bleckley, Dharma Akmon, Libby Hemphill
This paper contributes: 1) a schema of data curation activities; 2) a computational model for identifying curation actions in work log descriptions; and 3) an analysis of frequent data curation activities at ICPSR over time.
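A minimal sketch of contribution 2), identifying curation actions from work-log descriptions, appears below as a simple supervised text classifier. The action labels and example logs are hypothetical; the paper's actual model and schema may differ.

```python
# Illustrative sketch: classifying curation actions from work-log descriptions with a
# TF-IDF + logistic regression pipeline. Labels and example logs are hypothetical.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

logs = [
    "recoded missing values and checked variable labels",
    "generated DDI metadata for study documentation",
    "converted SPSS files to CSV for preservation",
    "reviewed disclosure risk in restricted-use files",
]
actions = ["data transformation", "metadata", "file migration", "disclosure review"]  # assumed labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(logs, actions)
print(clf.predict(["exported Stata files to CSV"]))
```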
no code implementations • 17 Sep 2019 • Libby Hemphill, Angela M. Schöpke-Gonzalez
Understanding how political attention is divided and over what subjects is crucial for research on areas such as agenda setting, framing, and political rhetoric.
no code implementations • ACL 2019 • David Jurgens, Eshwar Chandrasekharan, Libby Hemphill
Online abusive behavior affects millions and the NLP community has attempted to mitigate this problem by developing technologies to detect abuse.
2 code implementations • 31 Jan 2019 • Jane Im, Eshwar Chandrasekharan, Jackson Sargent, Paige Lighthammer, Taylor Denby, Ankit Bhargava, Libby Hemphill, David Jurgens, Eric Gilbert
In this work, we: 1) develop machine learning models that predict whether a Twitter account is a Russian troll within a set of 170K control accounts; and, 2) demonstrate that it is possible to use this model to find active accounts on Twitter still likely acting on behalf of the Russian state.
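The sketch below shows the general shape of step 1), training a classifier to separate known troll accounts from control accounts using account-level features. The feature set and the synthetic data are assumptions; the paper's features and model differ in detail.

```python
# Illustrative sketch: troll-vs-control account classification on synthetic
# account-level features. Features and labels here are placeholders only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# hypothetical features: tweets/day, follower/following ratio, retweet fraction, hashtags/tweet
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)  # 1 = troll, 0 = control (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV ROC AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```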
Social and Information Networks • Computers and Society
1 code implementation • 18 Apr 2018 • Ping Liu, Joshua Guberman, Libby Hemphill, Aron Culotta
Online antisocial behavior, such as cyberbullying, harassment, and trolling, is a widespread problem that threatens free discussion and has negative physical and mental health consequences for victims and communities.