1 code implementation • 6 Mar 2024 • Abhishek Anand, Negar Mokhberian, Prathyusha Naresh Kumar, Anweasha Saha, Zihao He, Ashwin Rao, Fred Morstatter, Kristina Lerman
Researchers have raised awareness about the harms of aggregating labels, especially in subjective tasks that naturally contain disagreement among human annotators.
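As a minimal illustration of the harm described above (the task, labels, and annotation counts here are hypothetical, not taken from the paper): majority-vote aggregation collapses several annotators' judgments into a single label, erasing any signal about how much they disagreed.

```python
from collections import Counter

# Hypothetical annotations for one text in a subjective task,
# where five annotators legitimately disagree.
annotations = ["toxic", "not_toxic", "toxic", "not_toxic", "not_toxic"]

# Majority-vote aggregation keeps only the single most common label...
majority_label, majority_count = Counter(annotations).most_common(1)[0]
print(majority_label)  # not_toxic

# ...discarding the fraction of annotators who disagreed, which is the
# information loss this line of work draws attention to.
disagreement = 1 - majority_count / len(annotations)
print(disagreement)  # 0.4
```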
1 code implementation • 2 Feb 2024 • Zihao He, Ashwin Rao, Siyi Guo, Negar Mokhberian, Kristina Lerman
Recent advances in NLP have improved our ability to understand the nuanced worldviews of online communities.
no code implementations • 16 Nov 2023 • Negar Mokhberian, Myrl G. Marmarelis, Frederic R. Hopp, Valerio Basile, Fred Morstatter, Kristina Lerman
Previous studies have shed light on the pitfalls of label aggregation and have introduced a handful of practical approaches to tackle this issue.
no code implementations • 4 Apr 2023 • Siyi Guo, Negar Mokhberian, Kristina Lerman
Language models can be trained to recognize the moral sentiment of text, creating new opportunities to study the role of morality in human life.
no code implementations • 13 Oct 2022 • Negar Mokhberian, Frederic R. Hopp, Bahareh Harandizadeh, Fred Morstatter, Kristina Lerman
Morality classification relies on human annotators to label moral expressions in text, providing the training data needed to achieve state-of-the-art performance.
2 code implementations • WASSA (ACL) 2022 • Zihao He, Negar Mokhberian, Kristina Lerman
Stance detection infers a text author's attitude towards a target.
1 code implementation • Findings (EMNLP) 2021 • Zihao He, Negar Mokhberian, Antonio Camara, Andres Abeliuk, Kristina Lerman
We apply our method to a dataset of news articles about the COVID-19 pandemic.
no code implementations • 6 Apr 2020 • Nazgol Tavabi, Andrés Abeliuk, Negar Mokhberian, Jeremy Abramson, Kristina Lerman
As we show in this paper, the process of filtering reduces the predictability of cyber-attacks.