no code implementations • NAACL (SocialNLP) 2021 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Discrepancies exist among different cultures or languages.
no code implementations • 14 Dec 2024 • Daniel M. Benjamin, Fred Morstatter, Ali E. Abbas, Andres Abeliuk, Pavel Atanasov, Stephen Bennett, Andreas Beger, Saurabh Birari, David V. Budescu, Michele Catasta, Emilio Ferrara, Lucas Haravitch, Mark Himmelstein, KSM Tozammel Hossain, Yuzhong Huang, Woojeong Jin, Regina Joseph, Jure Leskovec, Akira Matsui, Mehrnoosh Mirtaheri, Xiang Ren, Gleb Satyukov, Rajiv Sethi, Amandeep Singh, Rok Sosic, Mark Steyvers, Pedro A Szekely, Michael D. Ward, Aram Galstyan
To improve crowdsourced forecasting accuracy, we developed SAGE, a hybrid forecasting system that combines human and machine generated forecasts.
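A minimal sketch of the hybrid idea (not the SAGE system itself): blending the mean of human crowd forecasts with a machine-generated forecast. The weight `machine_trust` is a hypothetical tuning parameter, not a value from the paper.

```python
# Sketch: combine human and machine probability forecasts for one question.
# `machine_trust` is an assumed blending weight, not SAGE's actual method.

def combine_forecasts(human_probs, machine_prob, machine_trust=0.5):
    """Blend the mean human forecast with a machine forecast."""
    human_mean = sum(human_probs) / len(human_probs)
    return machine_trust * machine_prob + (1 - machine_trust) * human_mean

# Example: three human forecasters and one statistical model.
blended = combine_forecasts([0.6, 0.7, 0.8], 0.5, machine_trust=0.4)
```

In practice, hybrid systems typically learn such weights per question type from historical accuracy rather than fixing them by hand.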
no code implementations • 28 Oct 2024 • Siyi Guo, Myrl G. Marmarelis, Fred Morstatter, Kristina Lerman
Quantifying the effect of textual interventions in social systems, such as reducing anger in social media posts to see its impact on engagement, poses significant challenges.
no code implementations • 8 Jul 2024 • Harsh Sakhrani, Naseela Pervez, Anirudh Ravi Kumar, Fred Morstatter, Alexandra Graddy Reed, Andrea Belz
It is desirable to coarsely classify short scientific texts, such as grant or publication abstracts, for strategic insight or research portfolio management.
no code implementations • 4 Jul 2024 • Yuzhong Huang, Chen Liu, Ji Hou, Ke Huo, Shiyu Dong, Fred Morstatter
We present UniPlane, a novel method that unifies plane detection and reconstruction from posed monocular videos.
no code implementations • 14 Jun 2024 • Yuzhong Huang, Zhong Li, Zhang Chen, Zhiyuan Ren, Guosheng Lin, Fred Morstatter, Yi Xu
This process is achieved through the distillation of pretrained large-scale text-to-image diffusion models.
no code implementations • 23 May 2024 • Rebecca Dorn, Lee Kezar, Fred Morstatter, Kristina Lerman
We systematically evaluate the performance of five off-the-shelf language models in assessing the harm of these texts and explore the effectiveness of chain-of-thought prompting to teach large language models (LLMs) to leverage author identity context.
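To make the prompting setup concrete, here is an illustrative sketch of assembling a chain-of-thought harm-assessment prompt that supplies author identity as context. The wording is invented for illustration, not the paper's actual prompt template.

```python
def build_cot_prompt(text, author_identity):
    """Assemble a chain-of-thought harm-assessment prompt that provides
    author identity as context. Wording is illustrative only."""
    return (
        f"The following post was written by a {author_identity} author:\n"
        f"\"{text}\"\n"
        "Think step by step: who is the target of the language, is the term "
        "reclaimed in-group usage, and is the post harmful? "
        "Answer 'harmful' or 'not harmful' with your reasoning."
    )

prompt = build_cot_prompt("we out here", "queer")
```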
1 code implementation • 17 Apr 2024 • James Y. Huang, Wenxuan Zhou, Fei Wang, Fred Morstatter, Sheng Zhang, Hoifung Poon, Muhao Chen
Despite the strong capabilities of Large Language Models (LLMs) to acquire knowledge from their training corpora, the memorization of sensitive information in the corpora such as copyrighted, harmful, and private content has led to ethical and legal concerns.
no code implementations • 30 Mar 2024 • Zhivar Sourati, Meltem Ozcan, Colin McDaniel, Alireza Ziabari, Nuan Wen, Ala Tak, Fred Morstatter, Morteza Dehghani
However, with the increasing adoption of Large Language Models (LLMs) as writing assistants in everyday writing, a critical question emerges: are authors' linguistic patterns still predictive of their personal traits when LLMs are involved in the writing process?
no code implementations • 22 Mar 2024 • Bahareh Harandizadeh, Abel Salinas, Fred Morstatter
This paper explores the pressing issue of risk assessment in Large Language Models (LLMs) as they become increasingly prevalent in various applications.
1 code implementation • 6 Mar 2024 • Abhishek Anand, Negar Mokhberian, Prathyusha Naresh Kumar, Anweasha Saha, Zihao He, Ashwin Rao, Fred Morstatter, Kristina Lerman
Researchers have raised awareness about the harms of aggregating labels especially in subjective tasks that naturally contain disagreements among human annotators.
no code implementations • 16 Feb 2024 • Nikolos Gurney, Fred Morstatter, David V. Pynadath, Adam Russell, Gleb Satyukov
We explore the use of aggregative crowdsourced forecasting (ACF) as a mechanism to help operationalize "collective intelligence" of human-machine teams for coordinated actions.
1 code implementation • 5 Feb 2024 • Huy Nghiem, Umang Gupta, Fred Morstatter
The propagation of offensive content through social media channels has garnered the attention of the research community.
no code implementations • 22 Jan 2024 • Kian Ahrabian, Zhivar Sourati, Kexuan Sun, Jiarui Zhang, Yifan Jiang, Fred Morstatter, Jay Pujara
While large language models (LLMs) are still being adopted in new domains and utilized in novel applications, we are experiencing an influx of the new generation of foundation models, namely multi-modal large language models (MLLMs).
1 code implementation • 8 Jan 2024 • Abel Salinas, Fred Morstatter
Large Language Models (LLMs) are regularly being used to label data across many domains and for myriad tasks.
1 code implementation • 16 Nov 2023 • Negar Mokhberian, Myrl G. Marmarelis, Frederic R. Hopp, Valerio Basile, Fred Morstatter, Kristina Lerman
Previous studies have shed light on the pitfalls of label aggregation and have introduced a handful of practical approaches to tackle this issue.
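One way to see the contrast between aggregating labels and preserving disagreement is a toy comparison of hard majority voting against soft-label distributions; a minimal sketch, not the approach proposed in the paper:

```python
from collections import Counter

def majority_label(labels):
    """Aggregate annotator labels into one hard label (discards disagreement)."""
    return Counter(labels).most_common(1)[0][0]

def soft_label(labels):
    """Keep disagreement as a probability distribution over labels."""
    counts = Counter(labels)
    total = len(labels)
    return {lab: n / total for lab, n in counts.items()}

# Hypothetical annotations for one subjective item.
annotations = ["offensive", "offensive", "not_offensive", "offensive"]
hard = majority_label(annotations)
soft = soft_label(annotations)
```

The hard label here is "offensive", while the soft label retains the 25% of annotators who disagreed, which a model trained on distributions can exploit.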
no code implementations • 13 Oct 2023 • Abel Salinas, Louis Penafiel, Robert McCormack, Fred Morstatter
Large language models (LLMs) have garnered significant attention for their remarkable performance in a continuously expanding set of natural language processing tasks.
1 code implementation • 3 Aug 2023 • Abel Salinas, Parth Vipul Shah, Yuzhong Huang, Robert McCormack, Fred Morstatter
Our study highlights the importance of measuring the bias of LLMs in downstream applications to understand the potential for harm and inequitable outcomes.
no code implementations • 15 Jun 2023 • Myrl G. Marmarelis, Greg Ver Steeg, Aram Galstyan, Fred Morstatter
We present a simple approach to partial identification using existing causal sensitivity models and show empirically that Caus-Modens gives tighter outcome intervals, as measured by the necessary interval size to achieve sufficient coverage.
1 code implementation • 4 Jun 2023 • Omar Shaikh, Caleb Ziems, William Held, Aryan J. Pariani, Fred Morstatter, Diyi Yang
Prior work uses simple reference games to test models of pragmatic reasoning, often with unidentified speakers and listeners.
1 code implementation • 20 May 2023 • Darshan Deshpande, Zhivar Sourati, Filip Ilievski, Fred Morstatter
Automatic assessment of the quality of arguments has been recognized as a challenging task with significant implications for misinformation and targeted speech.
1 code implementation • 17 May 2023 • Dong-Ho Lee, Kian Ahrabian, Woojeong Jin, Fred Morstatter, Jay Pujara
This shows that prior semantic knowledge is unnecessary; instead, LLMs can leverage the existing patterns in the context to achieve such performance.
no code implementations • 13 Oct 2022 • Negar Mokhberian, Frederic R. Hopp, Bahareh Harandizadeh, Fred Morstatter, Kristina Lerman
Morality classification relies on human annotators to label the moral expressions in text, which provides training data to achieve state-of-the-art performance.
1 code implementation • NAACL 2022 • Ninareh Mehrabi, Ahmad Beirami, Fred Morstatter, Aram Galstyan
Existing work to generate such attacks is either based on human-generated attacks, which are costly and not scalable, or on automatic attacks whose attack vectors do not conform to human-like language and can therefore be detected using a language model loss.
no code implementations • 4 Dec 2021 • Huy Nghiem, Fred Morstatter
We demonstrate that we are able to identify hate speech that is systematically missed by established hate speech detectors.
1 code implementation • 22 Nov 2021 • Bahareh Harandizadeh, J. Hunter Priniski, Fred Morstatter
By illuminating latent structures in a corpus of text, topic models are an essential tool for categorizing, summarizing, and exploring large collections of documents.
no code implementations • 10 Sep 2021 • Dong-Ho Lee, Ravi Kiran Selvam, Sheikh Muhammad Sarwar, Bill Yuchen Lin, Fred Morstatter, Jay Pujara, Elizabeth Boschee, James Allan, Xiang Ren
Deep neural models for named entity recognition (NER) have shown impressive results in overcoming label scarcity and generalizing to unseen entities by leveraging distant supervision and auxiliary information such as explanations.
1 code implementation • NAACL (TrustNLP) 2022 • Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, Aram Galstyan
The widespread use of Artificial Intelligence (AI) in consequential domains, such as healthcare and parole decision-making systems, has drawn intense scrutiny on the fairness of these methods.
no code implementations • 11 Aug 2021 • Zaina Shaik, Filip Ilievski, Fred Morstatter
Through this analysis, we discovered that there is an overrepresentation of white individuals and those with citizenship in Europe and North America; the rest of the groups are generally underrepresented.
no code implementations • EMNLP 2021 • Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan
In addition, we analyze two downstream models that use ConceptNet as a source for commonsense knowledge and find the existence of biases in those models as well.
no code implementations • 6 Feb 2021 • Rajiv Sethi, Julie Seager, Emily Cai, Daniel M. Benjamin, Fred Morstatter
We examine probabilistic forecasts for battleground states in the 2020 US presidential election, using daily data from two sources over seven months: a model published by The Economist, and prices from the PredictIt exchange.
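A standard way to compare such probabilistic forecasts is the Brier score. The sketch below uses invented daily probabilities, not the paper's data:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes;
    lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical daily win probabilities for one state vs. the realized outcome.
model_probs = [0.9, 0.85, 0.8]
market_probs = [0.7, 0.65, 0.6]
outcomes = [1, 1, 1]

model_brier = brier_score(model_probs, outcomes)
market_brier = brier_score(market_probs, outcomes)
```

In this toy example the more confident (and correct) model forecast scores better; with real data either source can win, which is what the comparison measures.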
1 code implementation • 16 Dec 2020 • Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan
Algorithmic fairness has attracted significant attention in recent years, with many quantitative measures suggested for characterizing the fairness of different machine learning algorithms.
no code implementations • AKBC 2021 • Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, Aram Galstyan
Most real-world knowledge graphs are characterized by a long-tail relation frequency distribution where a significant fraction of relations occurs only a handful of times.
no code implementations • 4 Sep 2020 • Akira Matsui, Emilio Ferrara, Fred Morstatter, Andres Abeliuk, Aram Galstyan
In this study, we propose the use of a computational framework to identify clusters of underperforming workers using clickstream trajectories.
1 code implementation • 14 May 2020 • Ninareh Mehrabi, Yuzhong Huang, Fred Morstatter
We formalize our definition of fairness, and motivate it with its appropriate contexts.
no code implementations • ACL 2021 • Woojeong Jin, Rahul Khanna, Suji Kim, Dong-Ho Lee, Fred Morstatter, Aram Galstyan, Xiang Ren
In this work, we aim to formulate a task, construct a dataset, and provide benchmarks for developing methods for event forecasting with large volumes of unstructured text data.
1 code implementation • 10 Apr 2020 • Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng
Perspective differences exist among different cultures or languages.
1 code implementation • 4 Apr 2020 • Caleb Ziems, Ymir Vigfusson, Fred Morstatter
Cyberbullying is a pervasive problem in online communities.
1 code implementation • 24 Oct 2019 • Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan
We study the bias in several state-of-the-art named entity recognition (NER) models: specifically, a difference in the ability to recognize male and female names as PERSON entity types.
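Such a gap can be quantified as a difference in recall between name groups. The sketch below uses invented predictions, not the paper's measurements:

```python
def recall(predicted, gold):
    """Fraction of gold entities the model recovered."""
    return len(set(predicted) & set(gold)) / len(gold)

# Hypothetical NER outputs on male and female name lists (illustrative only).
male_gold = {"James", "Robert", "John", "Michael"}
female_gold = {"Mary", "Patricia", "Linda", "Barbara"}
male_pred = {"James", "Robert", "John", "Michael"}
female_pred = {"Mary", "Linda"}

# A positive gap means the model recognizes male names more reliably.
gap = recall(male_pred, male_gold) - recall(female_pred, female_gold)
```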
2 code implementations • 23 Aug 2019 • Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan
With the commercialization of these systems, researchers are becoming aware of the biases that these applications can contain and have attempted to address them.
1 code implementation • 4 Feb 2019 • Mehrnoosh Mirtaheri, Sami Abu-El-Haija, Fred Morstatter, Greg Ver Steeg, Aram Galstyan
Because of the speed and relative anonymity offered by social platforms such as Twitter and Telegram, social media has become a preferred platform for scammers who wish to spread false hype about the cryptocurrency they are trying to pump.
no code implementations • 14 Sep 2017 • Fred Morstatter, Kai Shu, Suhang Wang, Huan Liu
We apply our solution to sentiment analysis, a task that can benefit from the emoji calibration technique we use in this work.
no code implementations • 17 Aug 2016 • Liang Wu, Fred Morstatter, Huan Liu
To this end, we propose to build the first sentiment dictionary of slang words to aid sentiment analysis of social media content.
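The core mechanism of lexicon-based sentiment scoring can be sketched as a simple dictionary lookup. The entries and scores below are invented for illustration and are not taken from the paper's dictionary:

```python
# Illustrative slang sentiment lexicon (scores in [-1, 1]); entries are
# invented for this sketch, not drawn from the paper's resource.
SLANG_SENTIMENT = {"lit": 1.0, "fire": 0.8, "mid": -0.3, "trash": -0.9}

def slang_sentiment(text):
    """Average the lexicon scores of any slang words found in the text."""
    hits = [SLANG_SENTIMENT[w] for w in text.lower().split() if w in SLANG_SENTIMENT]
    return sum(hits) / len(hits) if hits else 0.0

score = slang_sentiment("that show was lit but the ending was trash")
```

A real system would also handle tokenization, negation, and multi-word slang, which this sketch omits.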
2 code implementations • 29 Jan 2016 • Jundong Li, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P. Trevino, Jiliang Tang, Huan Liu
To facilitate and promote the research in this community, we also present an open-source feature selection repository that consists of most of the popular feature selection algorithms (http://featureselection.asu.edu/).
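As a flavor of the filter-style algorithms such a repository collects, here is a minimal variance-threshold selector, a baseline sketch rather than any specific algorithm from the repository:

```python
def variance(col):
    """Population variance of one feature column."""
    mean = sum(col) / len(col)
    return sum((x - mean) ** 2 for x in col) / len(col)

def select_by_variance(rows, threshold=0.0):
    """Keep indices of feature columns whose variance exceeds the threshold,
    a simple filter-style feature selection baseline."""
    cols = list(zip(*rows))
    return [i for i, col in enumerate(cols) if variance(col) > threshold]

# Toy data: column 0 is constant and carries no information.
data = [
    [1.0, 5.0, 0.0],
    [1.0, 3.0, 1.0],
    [1.0, 4.0, 0.0],
]
kept = select_by_variance(data)
```

The constant column is dropped and the two informative columns survive; wrapper and embedded methods extend this idea by scoring features against a learning objective.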
no code implementations • WS 2014 • Fred Morstatter, Nichola Lubold, Heather Pon-Barry, Jürgen Pfeffer, Huan Liu
These agencies look for tweets from within the region affected by the crisis to get the latest updates of the status of the affected region.