Search Results for author: Maarten Sap

Found 65 papers, 25 papers with code

Story Cloze Task: UW NLP System

no code implementations WS 2017 Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, Noah A. Smith

This paper describes University of Washington NLP's submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task: the Story Cloze Task.

Language Modelling

DLATK: Differential Language Analysis ToolKit

no code implementations EMNLP 2017 H. Andrew Schwartz, Salvatore Giorgi, Maarten Sap, Patrick Crutchley, Lyle Ungar, Johannes Eichstaedt

We present Differential Language Analysis Toolkit (DLATK), an open-source python package and command-line tool developed for conducting social-scientific language analyses.

General Classification

Modeling Naive Psychology of Characters in Simple Commonsense Stories

no code implementations ACL 2018 Hannah Rashkin, Antoine Bosselut, Maarten Sap, Kevin Knight, Yejin Choi

Understanding a narrative requires reading between the lines and reasoning about the unspoken but obvious implications about events and people's mental states - a capability that is trivial for humans but remarkably hard for machines.

Emotion Classification

Event2Mind: Commonsense Inference on Events, Intents, and Reactions

no code implementations ACL 2018 Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, Yejin Choi

We investigate a new commonsense inference task: given an event described in a short free-form text ("X drinks coffee in the morning"), a system reasons about the likely intents ("X wants to stay awake") and reactions ("X feels alert") of the event's participants.

Common Sense Reasoning

ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

2 code implementations 31 Oct 2018 Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, Yejin Choi

We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge.

Relation

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

1 code implementation ACL 2019 Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi

We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017).

graph construction Knowledge Graphs

The Risk of Racial Bias in Hate Speech Detection

no code implementations ACL 2019 Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith

We investigate how annotators' insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations.

Hate Speech Detection

Social Bias Frames: Reasoning about Social and Power Implications of Language

no code implementations ACL 2020 Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi

We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others.

Commonsense Reasoning for Natural Language Processing

no code implementations ACL 2020 Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, Dan Roth

We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research.

Navigate

Exploring the Effect of Author and Reader Identity in Online Story Writing: the STORIESINTHEWILD Corpus.

no code implementations WS 2020 Tal August, Maarten Sap, Elizabeth Clark, Katharina Reinecke, Noah A. Smith

We analyze the effect of author and reader characteristics and story writing setup on the quality of stories in a short storytelling task.

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models

2 code implementations Findings of the Association for Computational Linguistics 2020 Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith

We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.

Sentence Text Generation

PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction

no code implementations EMNLP 2020 Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi

Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction.

Social Chemistry 101: Learning to Reason about Social and Moral Norms

2 code implementations EMNLP 2020 Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi

We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real life situations described in natural language.

Attribute

Challenges in Automated Debiasing for Toxic Language Detection

2 code implementations EACL 2021 Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Noah A. Smith, Yejin Choi

Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases.

Fairness text-classification +1

Just Say No: Analyzing the Stance of Neural Dialogue Generation in Offensive Contexts

1 code implementation EMNLP 2021 Ashutosh Baheti, Maarten Sap, Alan Ritter, Mark Riedl

To better understand the dynamics of contextually offensive language, we investigate the stance of dialogue model responses in offensive Reddit conversations.

Dialogue Generation

Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection

no code implementations NAACL 2022 Maarten Sap, Swabha Swayamdipta, Laura Vianna, Xuhui Zhou, Yejin Choi, Noah A. Smith

The perceived toxicity of language can vary based on someone's identity and beliefs, but this variation is often ignored when collecting toxic language datasets, resulting in dataset and model biases.

Imagined versus Remembered Stories: Quantifying Differences in Narrative Flow

no code implementations 7 Jan 2022 Maarten Sap, Anna Jafarpour, Yejin Choi, Noah A. Smith, James W. Pennebaker, Eric Horvitz

We quantify the differences between autobiographical and imagined stories by introducing sequentiality, a measure of narrative flow of events, drawing probabilistic inferences from a cutting-edge large language model (GPT-3).

Language Modelling Large Language Model +2

ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection

1 code implementation ACL 2022 Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar

To help mitigate these issues, we create ToxiGen, a new large-scale and machine-generated dataset of 274k toxic and benign statements about 13 minority groups.

Hate Speech Detection Language Modelling

Aligning to Social Norms and Values in Interactive Narratives

no code implementations NAACL 2022 Prithviraj Ammanabrolu, Liwei Jiang, Maarten Sap, Hannaneh Hajishirzi, Yejin Choi

We focus on creating agents that act in alignment with socially beneficial norms and values in interactive narratives or text-based games -- environments wherein an agent perceives and interacts with a world through natural language.

text-based games

ProsocialDialog: A Prosocial Backbone for Conversational Agents

1 code implementation 25 May 2022 Hyunwoo Kim, Youngjae Yu, Liwei Jiang, Ximing Lu, Daniel Khashabi, Gunhee Kim, Yejin Choi, Maarten Sap

With this dataset, we introduce a dialogue safety detection module, Canary, capable of generating RoTs given conversational context, and a socially-informed dialogue agent, Prost.

Dialogue Generation Dialogue Safety Prediction +2

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

3 code implementations9 Jun 2022 Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu

BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.

Common Sense Reasoning Math +1

When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment

1 code implementation 4 Oct 2022 Zhijing Jin, Sydney Levine, Fernando Gonzalez, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, Bernhard Schölkopf

Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MoralCoT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments.

Language Modelling Large Language Model +1

Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs

no code implementations 24 Oct 2022 Maarten Sap, Ronan LeBras, Daniel Fried, Yejin Choi

We show that one of today's largest language models (GPT-3; Brown et al., 2020) lacks this kind of social intelligence out-of-the-box, using two tasks: SocialIQa (Sap et al., 2019), which measures models' ability to understand intents and reactions of participants of social interactions, and ToMi (Le et al., 2019), which measures whether models can infer mental states and realities of participants of situations.

Navigate Open-Ended Question Answering

Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts

1 code implementation 20 Dec 2022 Skyler Hallinan, Alisa Liu, Yejin Choi, Maarten Sap

Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle.

Towards Countering Essentialism through Social Bias Reasoning

no code implementations 28 Mar 2023 Emily Allaway, Nina Taneja, Sarah-Jane Leslie, Maarten Sap

Essentialist beliefs (i.e., believing that members of the same group are fundamentally alike) play a central role in social stereotypes and can lead to harm when left unchallenged.

BiasX: "Thinking Slow" in Toxic Content Moderation with Explanations of Implied Social Biases

no code implementations 23 May 2023 Yiming Zhang, Sravani Nanduri, Liwei Jiang, Tongshuang Wu, Maarten Sap

Toxicity annotators and content moderators often default to mental shortcuts when making decisions.

Modeling Empathic Similarity in Personal Narratives

no code implementations 23 May 2023 Jocelyn Shen, Maarten Sap, Pedro Colon-Hernandez, Hae Won Park, Cynthia Breazeal

The most meaningful connections between people are often fostered through expression of shared vulnerability and emotional experiences in personal narratives.

Retrieval Semantic Similarity +1

Don't Take This Out of Context! On the Need for Contextual Models and Evaluations for Stylistic Rewriting

no code implementations 24 May 2023 Akhila Yerukola, Xuhui Zhou, Elizabeth Clark, Maarten Sap

Most existing stylistic text rewriting methods and evaluation metrics operate on a sentence level, but ignoring the broader context of the text can lead to preferring generic, ambiguous, and incoherent rewrites.

Sentence

Leftover-Lunch: Advantage-based Offline Reinforcement Learning for Language Models

1 code implementation 24 May 2023 Ashutosh Baheti, Ximing Lu, Faeze Brahman, Ronan Le Bras, Maarten Sap, Mark Riedl

However, RLHF is an unstable and data-hungry process that continually requires new high-quality LM-generated data for finetuning.

Language Modelling Offline RL +2

From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models

no code implementations 26 May 2023 Julia Mendelsohn, Ronan Le Bras, Yejin Choi, Maarten Sap

Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second one, often hateful or provocative, to a narrow in-group; they are deployed to evade both political repercussions and algorithmic content moderation.

Language Modelling Large Language Model +1

NLPositionality: Characterizing Design Biases of Datasets and Models

1 code implementation 2 Jun 2023 Sebastin Santy, Jenny T. Liang, Ronan Le Bras, Katharina Reinecke, Maarten Sap

We introduce NLPositionality, a framework for characterizing design biases and quantifying the positionality of NLP datasets and models.

Hate Speech Detection

COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

no code implementations 3 Jun 2023 Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, Maarten Sap

To study the contextual dynamics of offensiveness, we train models to generate COBRA explanations, with and without access to the context.

Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties

1 code implementation 2 Sep 2023 Taylor Sorensen, Liwei Jiang, Jena Hwang, Sydney Levine, Valentina Pyatkin, Peter West, Nouha Dziri, Ximing Lu, Kavel Rao, Chandra Bhagavatula, Maarten Sap, John Tasioulas, Yejin Choi

To improve AI systems to better reflect value pluralism, the first-order challenge is to explore the extent to which AI systems can model pluralistic human values, rights, and duties as well as their interaction.

Decision Making

SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents

1 code implementation 18 Oct 2023 Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, Maarten Sap

We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence.

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

no code implementations 24 Oct 2023 Hyunwoo Kim, Melanie Sclar, Xuhui Zhou, Ronan Le Bras, Gunhee Kim, Yejin Choi, Maarten Sap

Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity.

Question Answering

Beyond Denouncing Hate: Strategies for Countering Implied Biases and Stereotypes in Language

no code implementations 31 Oct 2023 Jimin Mun, Emily Allaway, Akhila Yerukola, Laura Vianna, Sarah-Jane Leslie, Maarten Sap

Counterspeech, i.e., responses to counteract potential harms of hateful speech, has become an increasingly popular solution to address online hate speech without censorship.

Philosophy

Where Do People Tell Stories Online? Story Detection Across Online Communities

1 code implementation 16 Nov 2023 Maria Antoniak, Joel Mire, Maarten Sap, Elliott Ash, Andrew Piper

Story detection in online communities is a challenging task as stories are scattered across communities and interwoven with non-storytelling spans within a single text.

Persuasion Strategies

Riveter: Measuring Power and Social Dynamics Between Entities

1 code implementation 15 Dec 2023 Maria Antoniak, Anjalie Field, Jimin Mun, Melanie Walsh, Lauren F. Klein, Maarten Sap

Riveter provides a complete easy-to-use pipeline for analyzing verb connotations associated with entities in text corpora.

Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty

no code implementations 12 Jan 2024 Kaitlyn Zhou, Jena D. Hwang, Xiang Ren, Maarten Sap

As natural language becomes the default interface for human-AI interaction, there is a critical need for LMs to appropriately communicate uncertainties in downstream applications.

Is this the real life? Is this just fantasy? The Misleading Success of Simulating Social Interactions With LLMs

no code implementations 8 Mar 2024 Xuhui Zhou, Zhe Su, Tiwalayo Eisape, Hyunwoo Kim, Maarten Sap

Recent advances in large language models (LLMs) have enabled richer social simulations, allowing for the study of various social phenomena with LLM-based agents.

SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents

1 code implementation 13 Mar 2024 Ruiyi Wang, Haofei Yu, Wenxin Zhang, Zhengyang Qi, Maarten Sap, Graham Neubig, Yonatan Bisk, Hao Zhu

Motivated by this gap, we propose an interactive learning method, SOTOPIA-π, improving the social intelligence of language agents.

Language Modelling Large Language Model

Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits

no code implementations 21 Mar 2024 Jimin Mun, Liwei Jiang, Jenny Liang, Inyoung Cheong, Nicole DeCario, Yejin Choi, Tadayoshi Kohno, Maarten Sap

As a first step towards democratic governance and risk assessment of AI, we introduce Particip-AI, a framework to gather current and future AI use cases and their harms and benefits from the non-expert public.

Crowdsourcing Beyond Annotation: Case Studies in Benchmark Data Collection

no code implementations EMNLP (ACL) 2021 Alane Suhr, Clara Vania, Nikita Nangia, Maarten Sap, Mark Yatskar, Samuel R. Bowman, Yoav Artzi

Even though it is such a fundamental tool in NLP, crowdsourcing use is largely guided by common practices and the personal experience of researchers.

Uncovering Surprising Event Boundaries in Narratives

no code implementations NAACL (WNU) 2022 Zhilin Wang, Anna Jafarpour, Maarten Sap

It is important to define meaningful and interpretable automatic evaluation metrics for open-domain dialog research.

Open-Domain Dialog Text Generation

Misinfo Reaction Frames: Reasoning about Readers’ Reactions to News Headlines

1 code implementation ACL 2022 Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, Yejin Choi

Even to a simple and short news headline, readers react in a multitude of ways: cognitively (e.g., inferring the writer's intent), emotionally (e.g., feeling distrust), and behaviorally (e.g., sharing the news with their friends).

Misinformation
