Search Results for author: Yejin Choi

Found 139 papers, 50 papers with code

Information-Theoretic Measures of Dataset Difficulty

no code implementations 16 Oct 2021 Kawin Ethayarajh, Yejin Choi, Swabha Swayamdipta

Estimating the difficulty of a dataset typically involves comparing state-of-the-art models to humans; the bigger the performance gap, the harder the dataset is said to be.
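
The paper's alternative to this gap heuristic is information-theoretic. As a rough sketch of the underlying idea, pointwise V-information (PVI) scores each instance by how much seeing the input raises the probability of the gold label; the snippet below assumes you already have per-example gold-label probabilities from a model given the input and from the same model family given a null input.

```python
import numpy as np

def pvi(p_with_input: np.ndarray, p_null: np.ndarray) -> np.ndarray:
    """Pointwise V-information per example:
    PVI(x -> y) = -log2 p_null(y) + log2 p(y | x).
    Lower values flag instances that are hard for the model family."""
    return -np.log2(p_null) + np.log2(p_with_input)

# Toy example: gold-label probabilities from a model trained with the
# input (p_x) vs. with the input withheld (p_null).
p_x = np.array([0.95, 0.60, 0.10])
p_null = np.array([0.50, 0.50, 0.50])

scores = pvi(p_x, p_null)
print(scores)                                  # high -> easy for the model family
print("dataset difficulty ~ mean PVI:", scores.mean())
```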

Generated Knowledge Prompting for Commonsense Reasoning

no code implementations 15 Oct 2021 Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

Despite their ability to capture large amounts of knowledge during pretraining, large-scale language models often benefit from incorporating external knowledge bases, especially on commonsense reasoning tasks.

Language Modelling

Delphi: Towards Machine Ethics and Norms

no code implementations 14 Oct 2021 Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Maxwell Forbes, Jon Borchardt, Jenny Liang, Oren Etzioni, Maarten Sap, Yejin Choi

We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

Symbolic Brittleness in Sequence Models: on Systematic Generalization in Symbolic Mathematics

no code implementations 28 Sep 2021 Sean Welleck, Peter West, Jize Cao, Yejin Choi

Neural sequence models trained with maximum likelihood estimation have led to breakthroughs in many tasks, where success is defined by the gap between training and test performance.

Systematic Generalization

Conversational Multi-Hop Reasoning with Neural Commonsense Knowledge and Symbolic Logic Rules

no code implementations 17 Sep 2021 Forough Arabshahi, Jennifer Lee, Antoine Bosselut, Yejin Choi, Tom Mitchell

Our reasoner uses a state-of-the-art transformer-based generative commonsense knowledge base (KB) as its source of background knowledge for reasoning.

Common Sense Reasoning Question Generation

Scarecrow: A Framework for Scrutinizing Machine Text

no code implementations 2 Jul 2021 Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi

These findings demonstrate the value of Scarecrow annotations in the assessment of current and future text generation systems.

Text Generation

Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral

no code implementations 15 Jun 2021 Lang Liu, Krishna Pillutla, Sean Welleck, Sewoong Oh, Yejin Choi, Zaid Harchaoui

The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance.

Quantization

TIMEDIAL: Temporal Commonsense Reasoning in Dialog

1 code implementation ACL 2021 Lianhui Qin, Aditya Gupta, Shyam Upadhyay, Luheng He, Yejin Choi, Manaal Faruqui

In this paper, we present the first study to investigate pre-trained LMs for their temporal reasoning capabilities in dialogs by introducing a new task and a crowd-sourced English challenge set, TIMEDIAL.

MERLOT: Multimodal Neural Script Knowledge Models

1 code implementation 4 Jun 2021 Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi

As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future.

Visual Commonsense Reasoning

PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World

no code implementations ACL 2021 Rowan Zellers, Ari Holtzman, Matthew Peters, Roozbeh Mottaghi, Aniruddha Kembhavi, Ali Farhadi, Yejin Choi

We propose PIGLeT: a model that learns physical commonsense knowledge through interaction, and then uses this knowledge to ground language.

Language Modelling

"I'm Not Mad": Commonsense Implications of Negation and Contradiction

no code implementations NAACL 2021 Liwei Jiang, Antoine Bosselut, Chandra Bhagavatula, Yejin Choi

In this paper, we present the first comprehensive study focusing on commonsense implications of negated statements and contradictions.

Natural Language Inference

Misinfo Reaction Frames: Reasoning about Readers' Reactions to News Headlines

no code implementations 18 Apr 2021 Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, Yejin Choi

We propose Misinfo Reaction Frames, a pragmatic formalism for modeling how readers might react to a news headline cognitively, emotionally, and behaviorally.

Fact Checking Language Modelling +1

CLIPScore: A Reference-free Evaluation Metric for Image Captioning

no code implementations 18 Apr 2021 Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, Yejin Choi

Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans.

Image Captioning
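
For illustration, the reference-free metric proposed above reduces to a rescaled, clipped cosine similarity between CLIP embeddings of the image and the candidate caption. A minimal sketch, assuming the embeddings were already computed by a CLIP encoder (random vectors stand in below); the 2.5 rescaling and the clipping at zero follow the paper:

```python
import numpy as np

def clipscore(image_emb: np.ndarray, caption_emb: np.ndarray, w: float = 2.5) -> float:
    """CLIPScore = w * max(cos(image, caption), 0)."""
    cos = image_emb @ caption_emb / (
        np.linalg.norm(image_emb) * np.linalg.norm(caption_emb))
    return w * max(float(cos), 0.0)

# Random stand-ins for CLIP image/text embeddings.
rng = np.random.default_rng(0)
image_emb, caption_emb = rng.normal(size=512), rng.normal(size=512)
print(clipscore(image_emb, caption_emb))
```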

Surface Form Competition: Why the Highest Probability Answer Isn't Always Right

1 code implementation 16 Apr 2021 Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer

Large language models have shown promising results in zero-shot settings (Brown et al., 2020; Radford et al., 2019).
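
The paper attributes such failures to surface form competition: many strings express the same answer and split probability mass, so a generic surface form can outscore the right one. Its remedy, Domain Conditional PMI, rescores each option by how much the context raises its probability over a domain premise. A minimal sketch over precomputed log-probabilities:

```python
import numpy as np

def pmi_dc_choice(logp_given_context: np.ndarray,
                  logp_given_domain: np.ndarray) -> int:
    """Choose the answer maximizing
    log P(y | context) - log P(y | domain premise)
    instead of the raw log P(y | context)."""
    return int(np.argmax(logp_given_context - logp_given_domain))

# Toy example: option 1 is less probable overall, but the context
# raises its probability the most, so the PMI scoring picks it.
logp_ctx = np.array([-2.0, -2.3, -4.0])   # log P(answer | question)
logp_dom = np.array([-1.5, -3.5, -4.2])   # log P(answer | domain premise)
print(pmi_dc_choice(logp_ctx, logp_dom))  # -> 1
```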

proScript: Partially Ordered Scripts Generation via Pre-trained Language Models

no code implementations 16 Apr 2021 Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, Yejin Choi

Scripts - standardized event sequences describing typical everyday activities - have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated information.

Text Generation

"I'm Not Mad": Commonsense Implications of Negation and Contradiction

no code implementations13 Apr 2021 Liwei Jiang, Antoine Bosselut, Chandra Bhagavatula, Yejin Choi

In this paper, we present the first comprehensive study focusing on commonsense implications of negated statements and contradictions.

Natural Language Inference

UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark

1 code implementation 24 Mar 2021 Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

First, we propose a new multitask benchmark, RAINBOW, to promote research on commonsense models that generalize well over multiple tasks and datasets.

Knowledge Graphs Transfer Learning

NaturalProofs: Mathematical Theorem Proving in Natural Language

1 code implementation 24 Mar 2021 Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning.

Automated Theorem Proving Domain Generalization +1

Contrastive Explanations for Model Interpretability

1 code implementation 2 Mar 2021 Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg

Our method is based on projecting model representation to a latent space that captures only the features that are useful (to the model) to differentiate two potential decisions.

Text Classification

An Information Divergence Measure Between Neural Text and Human Text

2 code implementations 2 Feb 2021 Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, Zaid Harchaoui

As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem.

Text Generation
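
Concretely, a divergence frontier traces the pair of KL divergences between each distribution and their mixture as the mixture weight varies; the paper studies how well this curve (and its summary, the frontier integral) can be estimated from samples after quantization. A minimal sketch over quantized (histogram) distributions:

```python
import numpy as np

def kl(p: np.ndarray, q: np.ndarray) -> float:
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def divergence_frontier(p: np.ndarray, q: np.ndarray, num_points: int = 9):
    """Trace (KL(P||R), KL(Q||R)) for mixtures R = lam*P + (1-lam)*Q."""
    points = []
    for lam in np.linspace(0.1, 0.9, num_points):
        r = lam * p + (1 - lam) * q
        points.append((kl(p, r), kl(q, r)))
    return points

# Toy quantized distributions, e.g. histograms over clustered embeddings.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
for x, y in divergence_frontier(p, q):
    print(f"KL(P||R)={x:.3f}  KL(Q||R)={y:.3f}")
```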

Challenges in Automated Debiasing for Toxic Language Detection

2 code implementations EACL 2021 Xuhui Zhou, Maarten Sap, Swabha Swayamdipta, Noah A. Smith, Yejin Choi

Overall, our findings show that debiasing a model trained on biased toxic language data is not as effective as simply relabeling the data to remove existing biases.

Fairness Text Classification

GENIE: A Leaderboard for Human-in-the-Loop Evaluation of Text Generation

no code implementations 17 Jan 2021 Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, Daniel S. Weld

Leaderboards have eased model development for many NLP datasets by standardizing their evaluation and delegating it to an independent external repository.

Machine Translation Reading Comprehension +2

On-the-Fly Attention Modulation for Neural Generation

no code implementations 2 Jan 2021 Yue Dong, Chandra Bhagavatula, Ximing Lu, Jena D. Hwang, Antoine Bosselut, Jackie Chi Kit Cheung, Yejin Choi

Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense.

Language Modelling Text Generation

VinVL: Revisiting Visual Representations in Vision-Language Models

4 code implementations CVPR 2021 Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao

In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model, OSCAR (Li et al., 2020), and utilize an improved approach to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks.

Object Detection

Analyzing Commonsense Emergence in Few-shot Knowledge Models

1 code implementation 1 Jan 2021 Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut

Our results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to encoded knowledge learned during pretraining.

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision

no code implementations 14 Dec 2020 Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi

In this paper, we investigate the extent to which neural models can reason about natural language rationales that explain model predictions, relying only on distant supervision with no additional annotation cost for human-written rationales.

Do Neural Language Models Overcome Reporting Bias?

1 code implementation COLING 2020 Vered Shwartz, Yejin Choi

Mining commonsense knowledge from corpora suffers from reporting bias, over-representing the rare at the expense of the trivial (Gordon and Van Durme, 2013).

Social Chemistry 101: Learning to Reason about Social and Moral Norms

no code implementations EMNLP 2020 Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi

We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real life situations described in natural language.

PowerTransformer: Unsupervised Controllable Revision for Biased Language Correction

no code implementations EMNLP 2020 Xinyao Ma, Maarten Sap, Hannah Rashkin, Yejin Choi

Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction.

GO FIGURE: A Meta Evaluation of Factuality in Summarization

no code implementations 24 Oct 2020 Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, Jianfeng Gao

While neural language models can generate text with remarkable fluency and coherence, controlling for factual correctness in generation remains an open research question.

Common Sense Reasoning Document Summarization +1

NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints

no code implementations NAACL 2021 Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

While the dominant recipe for conditional text generation has been large-scale pretrained language models that are finetuned on the task-specific training data, such models do not learn to follow the underlying constraints reliably, even when supervised with large amounts of task-specific examples.

Conditional Text Generation
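
NeuroLogic expresses lexical constraints in conjunctive normal form: a conjunction of clauses, each a disjunction of positive or negated keyword predicates, and searches for fluent sequences satisfying all clauses. A toy illustration of just the satisfaction check (the full method tracks partial clause satisfaction inside beam search):

```python
# A constraint is a conjunction of clauses; each clause is a disjunction
# of (keyword, polarity) literals, i.e. predicate logic in CNF.
Constraint = list[list[tuple[str, bool]]]

def satisfies(text: str, cnf: Constraint) -> bool:
    words = set(text.lower().split())
    return all(
        any((word in words) == positive for word, positive in clause)
        for clause in cnf
    )

# "mention dog AND (ball OR frisbee) AND NOT cat"
cnf = [[("dog", True)],
       [("ball", True), ("frisbee", True)],
       [("cat", False)]]

print(satisfies("the dog chases the ball", cnf))   # True
print(satisfies("the dog watches the cat", cnf))   # False
```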

Reflective Decoding: Beyond Unidirectional Generation with Off-the-Shelf Language Models

no code implementations ACL 2021 Peter West, Ximing Lu, Ari Holtzman, Chandra Bhagavatula, Jena Hwang, Yejin Choi

In this paper, we present Reflective Decoding, a novel unsupervised algorithm that allows for direct application of unidirectional LMs to non-sequential tasks.

Conditional Text Generation Text Infilling

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

1 code implementation Findings of the Association for Computational Linguistics 2020 Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights.

Language Modelling Natural Language Inference +4

COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

no code implementations 12 Oct 2020 Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi

Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events.

Knowledge Graphs Natural Language Understanding

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

1 code implementation EMNLP 2020 Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.

Text Infilling

Paragraph-level Commonsense Transformers with Recurrent Memory

1 code implementation 4 Oct 2020 Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi

Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text.

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models

no code implementations Findings of the Association for Computational Linguistics 2020 Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, Noah A. Smith

We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.

Text Generation

Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics

3 code implementations EMNLP 2020 Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, Yejin Choi

Experiments across four datasets show that these model-dependent measures reveal three distinct regions in the data map, each with pronounced characteristics.
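
The two coordinates of such a data map are simple statistics of training dynamics: confidence, the mean probability assigned to the gold label across epochs, and variability, the standard deviation of that probability. A minimal sketch, assuming per-epoch gold-label probabilities were logged during training:

```python
import numpy as np

def data_map_coords(gold_probs: np.ndarray):
    """gold_probs has shape (num_epochs, num_examples): the probability the
    model assigned to the gold label at each epoch during training.
    Returns per-example (confidence, variability)."""
    return gold_probs.mean(axis=0), gold_probs.std(axis=0)

# Toy training log: 4 epochs x 3 examples.
probs = np.array([[0.90, 0.40, 0.20],
                  [0.95, 0.60, 0.10],
                  [0.97, 0.30, 0.15],
                  [0.99, 0.70, 0.10]])
confidence, variability = data_map_coords(probs)
# Easy-to-learn: high confidence, low variability; ambiguous: high
# variability; hard-to-learn: low confidence, low variability.
print(confidence, variability)
```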

Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes

1 code implementation 20 Aug 2020 Nicholas Lourie, Ronan Le Bras, Yejin Choi

As AI systems become an increasing part of people's everyday lives, it becomes ever more important that they understand people's ethical norms.

Commonsense Reasoning for Natural Language Processing

no code implementations ACL 2020 Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, Dan Roth

We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research.

PlotMachines: Outline-Conditioned Generation with Dynamic Plot State Tracking

2 code implementations EMNLP 2020 Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, Jianfeng Gao

We propose the task of outline-conditioned story generation: given an outline as a set of phrases that describe key characters and events to appear in a story, the task is to generate a coherent narrative that is consistent with the provided outline.

Story Generation

VisualCOMET: Reasoning about the Dynamic Context of a Still Image

no code implementations ECCV 2020 Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, Yejin Choi

In addition, we provide person-grounding (i.e., co-reference links) between people appearing in the image and people mentioned in the textual commonsense descriptions, allowing for tighter integration between images and text.

Visual Commonsense Reasoning

Procedural Reading Comprehension with Attribute-Aware Context Flow

no code implementations 31 Mar 2020 Aida Amini, Antoine Bosselut, Bhavana Dalvi Mishra, Yejin Choi, Hannaneh Hajishirzi

Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food).

Reading Comprehension

Multi-View Learning for Vision-and-Language Navigation

no code implementations 2 Mar 2020 Qiaolin Xia, Xiujun Li, Chunyuan Li, Yonatan Bisk, Zhifang Sui, Jianfeng Gao, Yejin Choi, Noah A. Smith

Learning to navigate in a visual environment following natural language instructions is a challenging task because natural language instructions are highly variable, ambiguous, and under-specified.

Multi-View Learning Vision and Language Navigation

Adversarial Filters of Dataset Biases

1 code implementation ICML 2020 Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, Yejin Choi

Large neural models have demonstrated human-level performance on language and vision benchmarks, while their performance degrades considerably on adversarial or out-of-distribution samples.

Natural Language Inference

PIQA: Reasoning about Physical Commonsense in Natural Language

2 code implementations 26 Nov 2019 Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi

Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems.

Common Sense Reasoning Natural Language Understanding +1

Social Bias Frames: Reasoning about Social and Power Implications of Language

no code implementations ACL 2020 Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi

We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others.

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-shot Commonsense Question Answering

no code implementations 10 Nov 2019 Antoine Bosselut, Ronan Le Bras, Yejin Choi

Understanding narratives requires reasoning about implicit world knowledge related to the causes, effects, and states of situations described in text.

graph construction Knowledge Graphs +1

CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning

2 code implementations Findings of the Association for Computational Linguistics 2020 Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, Xiang Ren

In this paper, we present a constrained text generation task, CommonGen associated with a benchmark dataset, to explicitly test machines for the ability of generative commonsense reasoning.

 Ranked #1 on Text Generation on CommonGen (CIDEr metric)

Common Sense Reasoning Question Answering +2

Commonsense Knowledge Base Completion with Structural and Semantic Context

no code implementations 7 Oct 2019 Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, Yejin Choi

Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency.

Knowledge Base Completion Knowledge Graphs +3

BottleSum: Unsupervised and Self-supervised Sentence Summarization using the Information Bottleneck Principle

no code implementations IJCNLP 2019 Peter West, Ari Holtzman, Jan Buys, Yejin Choi

In this paper, we propose a novel approach to unsupervised sentence summarization by mapping the Information Bottleneck principle to a conditional language modelling objective: given a sentence, our approach seeks a compressed sentence that can best predict the next sentence.

Abstractive Text Summarization Extractive Summarization +2
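
The Information Bottleneck objective here trades compression against relevance: among candidate deletions of a source sentence, prefer short candidates under which a language model still predicts the next sentence well. A schematic sketch; `lm_logprob` is a hypothetical stand-in for a real language model call:

```python
def lm_logprob(next_sentence: str, context: str) -> float:
    """Hypothetical stub for log P(next_sentence | context) under an LM.
    Faked here with word overlap just so the sketch runs end to end."""
    shared = set(context.lower().split()) & set(next_sentence.lower().split())
    return float(len(shared))

def bottleneck_pick(candidates: list[str], next_sentence: str,
                    alpha: float = 0.1) -> str:
    """Relevance (predicting the next sentence) minus a compression
    penalty on length, mirroring the Information Bottleneck trade-off."""
    return max(candidates,
               key=lambda c: lm_logprob(next_sentence, c) - alpha * len(c.split()))

cands = ["the storm knocked out power across the city overnight",
         "storm knocked out power",
         "the city overnight"]
print(bottleneck_pick(cands, "power was restored by morning"))
```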

Counterfactual Story Reasoning and Generation

1 code implementation IJCNLP 2019 Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, Yejin Choi

Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes.

Text Generation

Robust Navigation with Language Pretraining and Stochastic Sampling

1 code implementation IJCNLP 2019 Xiujun Li, Chunyuan Li, Qiaolin Xia, Yonatan Bisk, Asli Celikyilmaz, Jianfeng Gao, Noah Smith, Yejin Choi

Core to the vision-and-language navigation (VLN) challenge is building robust instruction representations and action decoding schemes, which can generalize well to previously unseen instructions and environments.

Vision and Language Navigation

Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

no code implementations IJCNLP 2019 Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

In this paper, we introduce Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions.

Machine Reading Comprehension

Do Neural Language Representations Learn Physical Commonsense?

1 code implementation 8 Aug 2019 Maxwell Forbes, Ari Holtzman, Yejin Choi

Humans understand language based on the rich background knowledge about how the physical world works, which in turn allows us to reason about the physical world through language.

Natural Language Inference

WinoGrande: An Adversarial Winograd Schema Challenge at Scale

2 code implementations 24 Jul 2019 Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations.

Transfer Learning
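
A simplified sketch of the AfLite idea: repeatedly train weak linear probes on random splits of precomputed embeddings, score each instance by how often held-out probes predict its label, and filter the most predictable instances. Hyperparameters below are illustrative, not the paper's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predictability(X: np.ndarray, y: np.ndarray, n_rounds: int = 20,
                   train_frac: float = 0.5, seed: int = 0) -> np.ndarray:
    """Fraction of rounds in which a linear probe, trained on a random
    split of the embeddings, labels each held-out instance correctly."""
    rng = np.random.default_rng(seed)
    n = len(y)
    hits, counts = np.zeros(n), np.zeros(n)
    for _ in range(n_rounds):
        idx = rng.permutation(n)
        cut = int(train_frac * n)
        tr, te = idx[:cut], idx[cut:]
        probe = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        hits[te] += probe.predict(X[te]) == y[te]
        counts[te] += 1
    return hits / np.maximum(counts, 1)

# Instances that probes get right too often are carried by embedding
# artifacts; AfLite-style filtering drops them and repeats.
X = np.random.default_rng(1).normal(size=(200, 16))
y = (X[:, 0] > 0).astype(int)        # toy label leaked into one feature
keep = predictability(X, y) < 0.75
print("kept", int(keep.sum()), "of", len(y))
```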

Discourse Understanding and Factual Consistency in Abstractive Summarization

no code implementations EACL 2021 Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi

We introduce a general framework for abstractive summarization with factual consistency and distinct modeling of the narrative flow in an output summary.

Abstractive Text Summarization

The Risk of Racial Bias in Hate Speech Detection

no code implementations ACL 2019 Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith

We investigate how annotators' insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations.

Hate Speech Detection

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

2 code implementations ACL 2019 Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi

We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017).

graph construction Knowledge Graphs
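
COMET-style knowledge models are trained by serializing each (head, relation, tail) tuple into a text sequence and fine-tuning a pretrained LM to generate the tail. A minimal sketch of that serialization; the `[GEN]` separator is illustrative, as exact special tokens vary across implementations:

```python
def serialize_tuple(head: str, relation: str, tail: str) -> str:
    """Turn a knowledge graph tuple into an LM training string; the model
    is fine-tuned to generate everything after the separator."""
    return f"{head} {relation} [GEN] {tail}"

examples = [
    ("PersonX goes to the store", "xNeed", "to drive to the store"),
    ("PersonX goes to the store", "xEffect", "buys groceries"),
]
for head, relation, tail in examples:
    print(serialize_tuple(head, relation, tail))
# At inference time, prompt with "head relation [GEN]" and decode the
# tail to generate knowledge for unseen heads.
```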

Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

1 code implementation ACL 2019 Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao

Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous.

Reading Comprehension

Benchmarking Hierarchical Script Knowledge

1 code implementation NAACL 2019 Yonatan Bisk, Jan Buys, Karl Pichotta, Yejin Choi

Understanding procedural language requires reasoning about both hierarchical and temporal relations between events.

Efficient Adaptation of Pretrained Transformers for Abstractive Summarization

2 code implementations 1 Jun 2019 Andrew Hoang, Antoine Bosselut, Asli Celikyilmaz, Yejin Choi

Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks.

Abstractive Text Summarization Natural Language Understanding

MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms

no code implementations NAACL 2019 Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, Hannaneh Hajishirzi

We introduce a new representation language to model precise operation programs corresponding to each math problem that aim to improve both the performance and the interpretability of the learned models.

Math Word Problem Solving

Defending Against Neural Fake News

4 code implementations NeurIPS 2019 Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi

We find that the best current discriminators can classify neural fake news from real, human-written news with 73% accuracy, assuming access to a moderate level of training data.

Fake News Detection Text Generation

HellaSwag: Can a Machine Really Finish Your Sentence?

1 code implementation ACL 2019 Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi

In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset.

Natural Language Inference

SocialIQA: Commonsense Reasoning about Social Interactions

no code implementations 22 Apr 2019 Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, Yejin Choi

We introduce Social IQa, the first large-scale benchmark for commonsense reasoning about social situations.

Question Answering Transfer Learning

The Curious Case of Neural Text Degeneration

9 code implementations ICLR 2020 Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi

Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators.

Language Modelling
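
The decoding method proposed here, nucleus (top-p) sampling, truncates the next-token distribution to the smallest set of tokens whose cumulative probability exceeds p, renormalizes, and samples from that set. A minimal sketch:

```python
import numpy as np

def nucleus_sample(probs: np.ndarray, p: float = 0.9, rng=None) -> int:
    """Sample a token id from the smallest set of top tokens whose
    cumulative probability exceeds p, after renormalizing."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]                 # tokens by descending prob
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    nucleus = order[:cutoff]                        # the top-p "nucleus"
    renorm = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=renorm))

vocab_probs = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
print(nucleus_sample(vocab_probs, p=0.9, rng=np.random.default_rng(0)))
```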

Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation

1 code implementation CVPR 2019 Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa

We present the Frontier Aware Search with backTracking (FAST) Navigator, a general framework for action decoding, that achieves state-of-the-art results on the Room-to-Room (R2R) Vision-and-Language navigation challenge of Anderson et al.

Vision and Language Navigation Vision-Language Navigation

DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension

1 code implementation 1 Feb 2019 Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, Claire Cardie

DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge.

Dialogue Understanding Reading Comprehension

From Recognition to Cognition: Visual Commonsense Reasoning

3 code implementations CVPR 2019 Rowan Zellers, Yonatan Bisk, Ali Farhadi, Yejin Choi

While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world.

Multiple choice QA Visual Commonsense Reasoning

Early Fusion for Goal Directed Robotic Vision

no code implementations 21 Nov 2018 Aaron Walsman, Yonatan Bisk, Saadia Gabriel, Dipendra Misra, Yoav Artzi, Yejin Choi, Dieter Fox

Building perceptual systems for robotics which perform well under tight computational budgets requires novel architectures which rethink the traditional computer vision pipeline.

Imitation Learning

ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

1 code implementation 31 Oct 2018 Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, Yejin Choi

We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge.

Hierarchical structure

QuAC: Question Answering in Context

no code implementations EMNLP 2018 Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer

We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total).

Information Seeking Question Answering +1

Neural Metaphor Detection in Context

1 code implementation EMNLP 2018 Ge Gao, Eunsol Choi, Yejin Choi, Luke Zettlemoyer

We present end-to-end neural models for detecting metaphorical word use in context.

SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

1 code implementation EMNLP 2018 Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi

Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate what might come next ("then, she examined the engine").

Common Sense Reasoning Natural Language Inference +1

Ultra-Fine Entity Typing

no code implementations ACL 2018 Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer

We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g., skyscraper, songwriter, or criminal) that describe appropriate types for the target entity.

Entity Linking Entity Typing

Balancing Shared Autonomy with Human-Robot Communication

no code implementations 20 May 2018 Rosario Scalise, Yonatan Bisk, Maxwell Forbes, Daqing Yi, Yejin Choi, Siddhartha Srinivasa

Robotic agents that share autonomy with a human should leverage human domain knowledge and account for their preferences when completing a task.

Event2Mind: Commonsense Inference on Events, Intents, and Reactions

no code implementations ACL 2018 Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, Yejin Choi

We investigate a new commonsense inference task: given an event described in a short free-form text ("X drinks coffee in the morning"), a system reasons about the likely intents ("X wants to stay awake") and reactions ("X feels alert") of the event's participants.

Common Sense Reasoning

Modeling Naive Psychology of Characters in Simple Commonsense Stories

no code implementations ACL 2018 Hannah Rashkin, Antoine Bosselut, Maarten Sap, Kevin Knight, Yejin Choi

Understanding a narrative requires reading between the lines and reasoning about the unspoken but obvious implications about events and people's mental states - a capability that is trivial for humans but remarkably hard for machines.

Emotion Classification

Learning to Write with Cooperative Discriminators

2 code implementations ACL 2018 Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, Yejin Choi

Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory.

Discourse-Aware Neural Rewards for Coherent Text Generation

no code implementations NAACL 2018 Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, Yejin Choi

In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text.

Sentence Ordering Text Generation

Deep Communicating Agents for Abstractive Summarization

no code implementations NAACL 2018 Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, Yejin Choi

We present deep communicating agents in an encoder-decoder architecture to address the challenges of representing a long document for abstractive summarization.

Ranked #18 on Abstractive Text Summarization on CNN / Daily Mail (using extra training data)

Abstractive Text Summarization

Learning to Write by Learning the Objective

no code implementations ICLR 2018 Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, Yejin Choi

Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.

Language Modelling

Learning Interpretable Spatial Operations in a Rich 3D Blocks World

no code implementations 10 Dec 2017 Yonatan Bisk, Kevin J. Shih, Yejin Choi, Daniel Marcu

In this paper, we study the problem of mapping natural language instructions to complex spatial actions in a 3D blocks world.

Neural Motifs: Scene Graph Parsing with Global Context

6 code implementations CVPR 2018 Rowan Zellers, Mark Yatskar, Sam Thomson, Yejin Choi

We then introduce Stacked Motif Networks, a new architecture designed to capture higher order motifs in scene graphs that further improves over our strong baseline by an average 7.1% relative gain.

Simulating Action Dynamics with Neural Process Networks

no code implementations ICLR 2018 Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, Yejin Choi

Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated.

Zero-Shot Activity Recognition with Verb Attribute Induction

2 code implementations EMNLP 2017 Rowan Zellers, Yejin Choi

In this paper, we investigate large-scale zero-shot activity recognition by modeling the visual and linguistic attributes of action verbs.

Activity Recognition

Verb Physics: Relative Physical Knowledge of Actions and Objects

no code implementations ACL 2017 Maxwell Forbes, Yejin Choi

Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., "My house is bigger than me."

Detecting English Writing Styles For Non Native Speakers

no code implementations 24 Apr 2017 Yanqing Chen, Rami Al-Rfou', Yejin Choi

This paper presents the first attempt, to our knowledge, to classify English writing styles at this scale, with the challenge of classifying day-to-day language written by writers from different backgrounds covering a wide range of topics. The paper proposes simple machine learning algorithms and simple-to-generate features to solve this hard problem.

Story Cloze Task: UW NLP System

no code implementations WS 2017 Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, Noah A. Smith

This paper describes University of Washington NLP's submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task: the Story Cloze Task.

Language Modelling

Are Elephants Bigger than Butterflies? Reasoning about Sizes of Objects

no code implementations 2 Feb 2016 Hessam Bagherinezhad, Hannaneh Hajishirzi, Yejin Choi, Ali Farhadi

In this paper, we introduce a method to automatically infer object sizes, leveraging visual and textual information from the web.

Visual Reasoning

Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing

no code implementations ICCV 2015 Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi, Ali Farhadi

Next, we show that the association of high-quality segmentations to textual phrases aids in richer semantic understanding and reasoning of these textual phrases.

Natural Language Understanding Object Recognition +2

Connotation Frames: A Data-Driven Investigation

no code implementations ACL 2016 Hannah Rashkin, Sameer Singh, Yejin Choi

Through a particular choice of a predicate (e.g., "x violated y"), a writer can subtly connote a range of implied sentiments and presupposed facts about the entities x and y: (1) writer's perspective: projecting x as an "antagonist" and y as a "victim", (2) entities' perspective: y probably dislikes x, (3) effect: something bad happened to y, (4) value: y is something valuable, and (5) mental state: y is distressed by the event.

TreeTalk: Composition and Compression of Trees for Image Descriptions

no code implementations TACL 2014 Polina Kuznetsova, Vicente Ordonez, Tamara L. Berg, Yejin Choi

We present a new tree-based approach to composing expressive image descriptions that makes use of naturally occurring web images with captions.

Image Captioning Image Retrieval
