Search Results for author: Ronan Le Bras

Found 40 papers, 25 papers with code

proScript: Partially Ordered Scripts Generation

no code implementations • Findings (EMNLP) 2021 • Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, Yejin Choi

Scripts – prototypical event sequences describing everyday activities – have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated information.

Text Generation
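
The partial ordering proScript generates can be pictured as a DAG over events, where any topological order is a valid way to carry out the activity. A minimal Python sketch (the baking events are invented for illustration, not drawn from the dataset):

    from graphlib import TopologicalSorter

    # A script as a DAG: each event maps to the events that must happen first.
    script = {
        "preheat oven": set(),
        "mix dry ingredients": set(),
        "mix wet ingredients": set(),
        "combine batter": {"mix dry ingredients", "mix wet ingredients"},
        "bake": {"preheat oven", "combine batter"},
    }

    # One valid linearization; "preheat oven" may legally occur anywhere
    # before "bake", which is exactly what the partial order encodes.
    print(list(TopologicalSorter(script).static_order()))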

RealTime QA: What's the Answer Right Now?

1 code implementation • 27 Jul 2022 • Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A. Smith, Yejin Choi, Kentaro Inui

We introduce RealTime QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis (weekly in this version).

Information Retrieval • Pretrained Language Models +1

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations

no code implementations • 24 May 2022 • JaeHun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi

Despite their impressive capabilities, large pre-trained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has emerged as a promising direction to amend this.
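
The recursive structure behind maieutic prompting is roughly: prompt the LM to explain both the True and the False answer to a statement, then expand each explanation the same way. A rough sketch with a placeholder in place of a real LM call:

    def lm(prompt):
        # Placeholder for a real language model call.
        return "<explanation for: %s>" % prompt

    def maieutic_tree(statement, depth):
        # Abductively explain both truth values, then recurse on the
        # explanations themselves, yielding a tree of propositions.
        if depth == 0:
            return {"statement": statement, "children": []}
        children = [
            maieutic_tree(lm("%s This is %s, because" % (statement, label)), depth - 1)
            for label in ("true", "false")
        ]
        return {"statement": statement, "children": children}

    tree = maieutic_tree("Smoke is a sign of fire.", depth=2)
    # The paper then scores the tree for logical consistency (via weighted
    # MAX-SAT) to choose the most defensible answer.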

Twist Decoding: Diverse Generators Guide Each Other

1 code implementation • 19 May 2022 • Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Hao Peng, Ximing Lu, Dragomir Radev, Yejin Choi, Noah A. Smith

Natural language generation technology has recently seen remarkable progress with large-scale training, and many natural language applications are now built upon a wide range of generation models.

Machine Translation • Text Generation

CommonsenseQA 2.0: Exposing the Limits of AI through Gamification

no code implementations • 14 Jan 2022 • Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, Jonathan Berant

Constructing benchmarks that test the abilities of modern natural language understanding models is difficult: pre-trained language models exploit artifacts in benchmarks to achieve human parity, but still fail on adversarial examples and make errors that demonstrate a lack of common sense.

Common Sense Reasoning • Natural Language Understanding

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

1 code implementation • NAACL 2022 • Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, Yejin Choi

To enable constrained generation, we build on NeuroLogic decoding (Lu et al., 2021), combining its flexibility in incorporating logical constraints with A*esque estimates of future constraint satisfaction.

Machine Translation • Table-to-Text Generation
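
The lookahead idea can be sketched as: score each candidate token by its model log-probability plus an estimate, from a short greedy rollout, of how many lexical constraints the continuation could still satisfy. A toy sketch with a uniform stand-in model and an invented vocabulary:

    import math

    VOCAB = ["the", "cat", "sat", "on", "mat"]
    REQUIRED = {"cat", "mat"}  # hard lexical constraints on the output

    def logprob(prefix, token):
        # Stand-in for a real LM's next-token log-probability.
        return math.log(1.0 / len(VOCAB))

    def lookahead_bonus(prefix, horizon=3):
        # Greedily roll the model forward and credit constraints that the
        # projected future would satisfy (the A*esque heuristic).
        rollout = list(prefix)
        for _ in range(horizon):
            rollout.append(max(VOCAB, key=lambda t: logprob(rollout, t)))
        return len(REQUIRED & set(rollout))

    def score(prefix, token):
        return logprob(prefix, token) + lookahead_bonus(prefix + [token])

    best_next = max(VOCAB, key=lambda t: score(["the"], t))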

Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand

2 code implementations • NAACL 2022 • Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R. Fabbri, Yejin Choi, Noah A. Smith

We therefore propose a generalization of leaderboards, bidimensional leaderboards (Billboards), that simultaneously tracks progress in language generation models and metrics for their evaluation.

Image Captioning • Machine Translation +1

Generated Knowledge Prompting for Commonsense Reasoning

1 code implementation • ACL 2022 • Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models.

Language Modelling
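
Generated knowledge prompting is a two-stage prompt: sample knowledge statements from a model, then answer once per statement and keep the highest-confidence prediction. A skeletal sketch with placeholder model calls:

    def generate_knowledge(question, n):
        # Placeholder: prompt an LM with a few demonstrations and return
        # n sampled knowledge statements relevant to the question.
        return ["<knowledge %d about: %s>" % (i, question) for i in range(n)]

    def answer(question, knowledge):
        # Placeholder: returns (answer, confidence) from an inference model
        # that reads the knowledge statement as extra context.
        return "<answer>", 0.5

    def generated_knowledge_qa(question, n_statements=5):
        candidates = [
            answer(question, k) for k in generate_knowledge(question, n_statements)
        ]
        best, _ = max(candidates, key=lambda pair: pair[1])
        return best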

proScript: Partially Ordered Scripts Generation via Pre-trained Language Models

no code implementations • 16 Apr 2021 • Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, Yejin Choi

Scripts – standardized event sequences describing typical everyday activities – have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated information.

Text Generation

NaturalProofs: Mathematical Theorem Proving in Natural Language

1 code implementation • 24 Mar 2021 • Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

Understanding and creating mathematics using natural mathematical language – the mixture of symbolic and natural language used by humans – is a challenging and important problem for driving progress in machine learning.

Automated Theorem Proving • Domain Generalization +1

UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark

1 code implementation • 24 Mar 2021 • Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

First, we propose a new multitask benchmark, RAINBOW, to promote research on commonsense models that generalize well over multiple tasks and datasets.

HellaSwag • Knowledge Graphs +4

Analyzing Commonsense Emergence in Few-shot Knowledge Models

1 code implementation • AKBC 2021 • Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut

Our results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to encoded knowledge learned during pretraining.

Pretrained Language Models

NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints

no code implementations • NAACL 2021 • Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

While the dominant recipe for conditional text generation has been large-scale pretrained language models that are finetuned on the task-specific training data, such models do not learn to follow the underlying constraints reliably, even when supervised with large amounts of task-specific examples.

Conditional Text Generation • Pretrained Language Models
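
The predicate logic constraints here are conjunctions of clauses, each clause a disjunction of literals asserting that a phrase must (or must not) appear in the output. A small sketch of checking such a CNF against a candidate generation (the phrases are invented):

    # CNF over lexical constraints: a list of clauses (AND), each clause a
    # list of (phrase, must_appear) literals (OR).
    constraints = [
        [("dog", True), ("puppy", True)],  # mention "dog" OR "puppy"
        [("leash", True)],                 # AND mention "leash"
        [("cat", False)],                  # AND do NOT mention "cat"
    ]

    def satisfied(text, cnf):
        return all(
            any((phrase in text) == must for phrase, must in clause)
            for clause in cnf
        )

    print(satisfied("A puppy tugged at its leash.", constraints))  # True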

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

1 code implementation • Findings (EMNLP) 2020 • Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights.

Language Modelling • Natural Language Inference +5

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

1 code implementation • EMNLP 2020 • Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.

Text Infilling
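
The core trick is to keep the infilled text as a soft (differentiable) token distribution and nudge it by gradients so that it fits the past context while making the future context likely. A toy numpy sketch with an invented 3-token bigram model standing in for a pretrained LM:

    import numpy as np

    rng = np.random.default_rng(0)
    V = 3                              # toy vocabulary
    B = rng.normal(size=(V, V))        # B[i, j] ~ log p(next=j | prev=i)
    left, right = 0, 2                 # fixed past and future context tokens

    # Score of a soft infill y: fit after the past plus fit before the future.
    g = B[left, :] + B[:, right]

    z = np.zeros(V)                    # logits of the soft infill token
    for _ in range(100):
        y = np.exp(z - z.max()); y /= y.sum()    # softmax relaxation
        z += 0.5 * y * (g - g @ y)               # exact gradient of g @ y
    print("infilled token id:", int(y.argmax()))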

COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

no code implementations • 12 Oct 2020 • Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi

Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events.

Knowledge Graphs • Natural Language Understanding +1

Paragraph-level Commonsense Transformers with Recurrent Memory

1 code implementation • 4 Oct 2020 • Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi

Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text.

Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes

1 code implementation • 20 Aug 2020 • Nicholas Lourie, Ronan Le Bras, Yejin Choi

As AI systems become an increasing part of people's everyday lives, it becomes ever more important that they understand people's ethical norms.

Ethics

Adversarial Filters of Dataset Biases

1 code implementation • ICML 2020 • Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, Yejin Choi

Large neural models have demonstrated human-level performance on language and vision benchmarks, while their performance degrades considerably on adversarial or out-of-distribution samples.

Natural Language Inference

PIQA: Reasoning about Physical Commonsense in Natural Language

2 code implementations • 26 Nov 2019 • Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi

Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems.

Natural Language Understanding • Physical Commonsense Reasoning +2

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-shot Commonsense Question Answering

no code implementations • 10 Nov 2019 • Antoine Bosselut, Ronan Le Bras, Yejin Choi

Understanding narratives requires reasoning about implicit world knowledge related to the causes, effects, and states of situations described in text.

graph construction • Knowledge Graphs +2

Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

no code implementations • IJCNLP 2019 • Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

In this paper, we introduce Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions.

Machine Reading Comprehension • Multiple-choice

WinoGrande: An Adversarial Winograd Schema Challenge at Scale

2 code implementations • 24 Jul 2019 • Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations.

Transfer Learning • Winogrande
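
AfLite's filtering loop can be sketched compactly: repeatedly fit cheap linear probes on random splits of precomputed embeddings, measure each instance's out-of-fold predictability, and discard the most predictable instances as likely artifacts. A synthetic-data sketch (real AfLite probes LM embeddings of the actual dataset):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 16))            # stand-in instance embeddings
    y = rng.integers(0, 2, size=1000)          # binary labels
    y[:200] = (X[:200, 0] > 0).astype(int)     # plant an easy artifact

    def aflite(X, y, n_rounds=20, threshold=0.85, drop_frac=0.05):
        keep = np.arange(len(X))
        while len(keep) > 0:
            hits = np.zeros(len(keep))
            seen = np.zeros(len(keep))
            for _ in range(n_rounds):
                tr = rng.random(len(keep)) < 0.5          # random split
                if len(np.unique(y[keep][tr])) < 2:
                    continue
                clf = LogisticRegression(max_iter=200).fit(X[keep][tr], y[keep][tr])
                ev = ~tr
                hits[ev] += clf.predict(X[keep][ev]) == y[keep][ev]
                seen[ev] += 1
            score = hits / np.maximum(seen, 1)            # predictability
            if score.max() <= threshold:
                break                                     # nothing left to filter
            k = max(1, int(drop_frac * len(keep)))        # drop most predictable
            keep = np.delete(keep, np.argsort(score)[-k:])
        return keep

    filtered = aflite(X, y)   # indices of the retained, harder instances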

Beyond Sentential Semantic Parsing: Tackling the Math SAT with a Cascade of Tree Transducers

no code implementations • EMNLP 2017 • Mark Hopkins, Cristian Petrescu-Prahova, Roie Levin, Ronan Le Bras, Alvaro Herrasti, Vidur Joshi

We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions – the math portion of the Scholastic Aptitude Test (SAT).

Coreference Resolution • Question Answering +1

Phase-Mapper: An AI Platform to Accelerate High Throughput Materials Discovery

1 code implementation • 3 Oct 2016 • Yexiang Xue, Junwen Bai, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Johan Bjorck, Liane Longpre, Santosh K. Suram, Robert B. van Dover, John Gregoire, Carla P. Gomes

A key problem in materials discovery, the phase map identification problem, involves the determination of the crystal phase diagram from the materials' composition and structural characterization data.
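
At its core, phase map identification is a constrained source separation: observed diffraction signals are non-negative mixtures of a few basis phases. A simplified stand-in using plain NMF on synthetic data (Phase-Mapper itself additionally enforces physical and combinatorial constraints on the solution):

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_samples, n_channels, n_phases = 50, 200, 3
    true_phases = rng.random((n_phases, n_channels))   # basis diffraction patterns
    weights = rng.random((n_samples, n_phases))        # per-sample phase fractions
    X = weights @ true_phases + 0.01 * rng.random((n_samples, n_channels))

    model = NMF(n_components=n_phases, init="nndsvd", max_iter=500)
    W = model.fit_transform(X)    # estimated phase fractions per sample
    H = model.components_         # estimated basis patterns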

Variable Elimination in the Fourier Domain

no code implementations • 17 Aug 2015 • Yexiang Xue, Stefano Ermon, Ronan Le Bras, Carla P. Gomes, Bart Selman

The ability to represent complex high dimensional probability distributions in a compact form is one of the key insights in the field of graphical models.
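
For context, the baseline this paper accelerates is standard variable elimination, which sums variables out of factor products one at a time. A minimal example marginalizing a three-variable chain A → B → C down to P(C), with invented probability tables:

    import numpy as np

    p_a = np.array([0.6, 0.4])                    # P(A)
    p_b_given_a = np.array([[0.7, 0.3],           # P(B | A=0)
                            [0.2, 0.8]])          # P(B | A=1)
    p_c_given_b = np.array([[0.9, 0.1],
                            [0.4, 0.6]])          # P(C | B)

    # Eliminate A: sum_a P(A) P(B|A) -> factor over B
    phi_b = p_a @ p_b_given_a
    # Eliminate B: sum_b phi(B) P(C|B) -> marginal over C
    p_c = phi_b @ p_c_given_b
    print(p_c, p_c.sum())                         # sums to 1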

Pattern Decomposition with Complex Combinatorial Constraints: Application to Materials Discovery

no code implementations • 27 Nov 2014 • Stefano Ermon, Ronan Le Bras, Santosh K. Suram, John M. Gregoire, Carla Gomes, Bart Selman, Robert B. van Dover

Identifying important components or factors in large amounts of noisy data is a key problem in machine learning and data mining.
