Search Results for author: Ronan Le Bras

Found 30 papers, 16 with code

Generated Knowledge Prompting for Commonsense Reasoning

no code implementations • 15 Oct 2021 • Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi

Despite their ability to capture large amounts of knowledge during pretraining, large-scale language models often benefit from incorporating external knowledge bases, especially on commonsense reasoning tasks.

Language Modelling
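
The two-stage recipe named in the title is simple enough to sketch: prompt a language model to generate knowledge statements about a question, then answer conditioned on each statement and keep the most confident prediction. Below is a minimal, illustrative version; `lm_generate` and `lm_answer_logprob` are hypothetical stand-ins for any language model API, and the prompt text is an invented example, not the paper's.

```python
# Minimal sketch of generated knowledge prompting. `lm_generate` and
# `lm_answer_logprob` are hypothetical stand-ins for a language model API:
# the first samples a completion, the second scores an answer given a prompt.

KNOWLEDGE_PROMPT = (
    "Generate some knowledge about the input.\n"
    "Input: greenhouses are made of glass\n"
    "Knowledge: Glass lets sunlight in while trapping heat.\n"
    "Input: {question}\nKnowledge:"
)

def answer_with_generated_knowledge(question, choices, lm_generate,
                                    lm_answer_logprob, m=5):
    # Stage 1: sample m knowledge statements for the question.
    statements = [lm_generate(KNOWLEDGE_PROMPT.format(question=question))
                  for _ in range(m)]
    # Stage 2: score every answer choice under the bare question and under
    # each knowledge-augmented prompt; keep the single most confident answer.
    best_answer, best_score = None, float("-inf")
    for knowledge in [""] + statements:
        prompt = f"{knowledge}\n{question}".strip()
        for choice in choices:
            score = lm_answer_logprob(prompt, choice)
            if score > best_score:
                best_answer, best_score = choice, score
    return best_answer
```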

Delphi: Towards Machine Ethics and Norms

no code implementations • 14 Oct 2021 • Liwei Jiang, Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Maxwell Forbes, Jon Borchardt, Jenny Liang, Oren Etzioni, Maarten Sap, Yejin Choi

We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

CLIPScore: A Reference-free Evaluation Metric for Image Captioning

no code implementations • 18 Apr 2021 • Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, Yejin Choi

Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans.

Image Captioning
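
The metric itself is a one-liner over CLIP embeddings: CLIPScore(c, v) = w * max(cos(c, v), 0), with w = 2.5 in the paper. A minimal sketch using OpenAI's open-source clip package; the "ViT-B/32" model version is an illustrative choice.

```python
# Minimal CLIPScore sketch: w * max(cos(image_emb, text_emb), 0), w = 2.5
# per the paper. Uses OpenAI's `clip` package; "ViT-B/32" is illustrative.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clipscore(image_path: str, caption: str, w: float = 2.5) -> float:
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize([caption]).to(device)
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(tokens)
    cos = torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()
    return w * max(cos, 0.0)
```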

proScript: Partially Ordered Scripts Generation via Pre-trained Language Models

no code implementations • 16 Apr 2021 • Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Niket Tandon, Peter Clark, Yejin Choi

Scripts - standardized event sequences describing typical everyday activities - have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated information.

Text Generation
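
A partially ordered script is naturally represented as a directed acyclic graph over events, where any topological order is a valid linearization. A tiny illustration with Python's standard-library graphlib; the baking events and edges are invented for the example.

```python
# A made-up partially ordered script as a DAG: each event maps to the
# events that must precede it; any topological order is a valid telling.
from graphlib import TopologicalSorter  # Python 3.9+

bake_cake = {
    "preheat oven": [],
    "mix batter": [],
    "pour batter into pan": ["mix batter"],
    "bake": ["preheat oven", "pour batter into pan"],
    "let cool": ["bake"],
}
print(list(TopologicalSorter(bake_cake).static_order()))
# e.g. ['preheat oven', 'mix batter', 'pour batter into pan', 'bake', 'let cool']
```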

NaturalProofs: Mathematical Theorem Proving in Natural Language

1 code implementation • 24 Mar 2021 • Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, Kyunghyun Cho

Understanding and creating mathematics using natural mathematical language - the mixture of symbolic and natural language used by humans - is a challenging and important problem for driving progress in machine learning.

Automated Theorem Proving • Domain Generalization • +1

UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark

1 code implementation • 24 Mar 2021 • Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

First, we propose a new multitask benchmark, RAINBOW, to promote research on commonsense models that generalize well over multiple tasks and datasets.

Knowledge Graphs • Transfer Learning

Analyzing Commonsense Emergence in Few-shot Knowledge Models

1 code implementation • 1 Jan 2021 • Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut

Our results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to knowledge already encoded during pretraining.

NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints

no code implementations • NAACL 2021 • Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

While the dominant recipe for conditional text generation has been large-scale pretrained language models that are finetuned on the task-specific training data, such models do not learn to follow the underlying constraints reliably, even when supervised with large amounts of task-specific examples.

Conditional Text Generation
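
A heavily simplified sketch of the idea: beam search in which candidate continuations earn a bonus for newly satisfied lexical constraints, and finished hypotheses satisfying all constraints are preferred. The paper's actual algorithm handles arbitrary predicate-logic (CNF) constraints and prunes per constraint-satisfaction state; here constraints are single tokens, and `lm_step` is a hypothetical wrapper returning top next-token expansions with log-probabilities.

```python
# Heavily simplified constrained beam search in the spirit of NeuroLogic.
# Constraints are single tokens that must appear in the output;
# `lm_step(tokens)` is a hypothetical wrapper returning (next_token,
# logprob) pairs for the top expansions of a hypothesis.
import heapq

def constrained_beam_search(lm_step, bos, eos, constraints,
                            beam=4, max_len=20, alpha=1.0):
    beams = [(0.0, [bos])]
    for _ in range(max_len):
        candidates = []
        for score, toks in beams:
            if toks[-1] == eos:                 # finished hypotheses carry over
                candidates.append((score, toks))
                continue
            done_before = sum(c in toks for c in constraints)
            for tok, logp in lm_step(toks):
                new = toks + [tok]
                newly = sum(c in new for c in constraints) - done_before
                # LM score plus a bonus per newly satisfied constraint.
                candidates.append((score + logp + alpha * newly, new))
        beams = heapq.nlargest(beam, candidates, key=lambda c: c[0])
    # Prefer finished hypotheses that satisfy every constraint.
    finished = [b for b in beams if all(c in b[1] for c in constraints)]
    return max(finished or beams, key=lambda c: c[0])[1]
```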

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights.

Language Modelling • Natural Language Inference • +4

COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

no code implementations • 12 Oct 2020 • Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi

Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events.

Knowledge Graphs • Natural Language Understanding
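
Querying a trained COMET-style knowledge model amounts to generating a tail phrase from a head event and a relation. A hedged sketch with the Hugging Face transformers API: the checkpoint id below is a placeholder (the authors released BART- and GPT-2-based models), and the "{head} {relation} [GEN]" query format only approximates the released code.

```python
# Hedged sketch of querying a COMET-style knowledge model via Hugging Face
# `transformers`. The checkpoint id is a placeholder, and the
# "{head} {relation} [GEN]" query format approximates the released code.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "your-org/comet-atomic-2020"  # placeholder checkpoint id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

query = "PersonX buys an umbrella xNeed [GEN]"
out = model.generate(**tok(query, return_tensors="pt"),
                     num_beams=5, num_return_sequences=5)
print(tok.batch_decode(out, skip_special_tokens=True))
```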

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

1 code implementation • EMNLP 2020 • Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.

Text Infilling

Paragraph-level Commonsense Transformers with Recurrent Memory

1 code implementation • 4 Oct 2020 • Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi

Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text.

Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes

1 code implementation • 20 Aug 2020 • Nicholas Lourie, Ronan Le Bras, Yejin Choi

As AI systems become an increasingly large part of people's everyday lives, it becomes ever more important that they understand people's ethical norms.

Adversarial Filters of Dataset Biases

1 code implementation • ICML 2020 • Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, Yejin Choi

Large neural models have demonstrated human-level performance on language and vision benchmarks, while their performance degrades considerably on adversarial or out-of-distribution samples.

Natural Language Inference
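
The AFLite filter introduced here (and used to build WinoGrande, below) is easy to sketch: repeatedly train weak linear probes on random splits of precomputed embeddings, score each instance by how often held-out probes classify it correctly, and discard the most predictable instances. A rough sketch follows; hyperparameters and the exact scoring are illustrative, not the paper's verbatim settings.

```python
# Rough AFLite-style sketch over precomputed embeddings X (n x d) and
# labels y (n,). Hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def aflite(X, y, n_probes=64, train_frac=0.8, cutoff=0.75,
           remove_per_round=500, min_size=5000):
    idx = np.arange(len(y))                     # surviving instance indices
    while len(idx) > min_size:
        correct = np.zeros(len(idx))
        counted = np.zeros(len(idx))
        for _ in range(n_probes):
            # Train a weak linear probe on a random split; score the rest.
            perm = np.random.permutation(len(idx))
            k = int(train_frac * len(idx))
            tr, te = perm[:k], perm[k:]
            clf = LogisticRegression(max_iter=200).fit(X[idx[tr]], y[idx[tr]])
            correct[te] += clf.predict(X[idx[te]]) == y[idx[te]]
            counted[te] += 1
        # Predictability score: fraction of probes that got the instance right.
        score = np.divide(correct, counted,
                          out=np.zeros_like(correct), where=counted > 0)
        easy = np.argsort(-score)[:remove_per_round]
        easy = easy[score[easy] > cutoff]
        if len(easy) == 0:
            break
        idx = np.delete(idx, easy)              # drop the most predictable
    return idx                                  # indices of retained instances
```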

PIQA: Reasoning about Physical Commonsense in Natural Language

2 code implementations • 26 Nov 2019 • Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi

Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems.

Common Sense Reasoning • Natural Language Understanding • +1

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-shot Commonsense Question Answering

no code implementations • 10 Nov 2019 • Antoine Bosselut, Ronan Le Bras, Yejin Choi

Understanding narratives requires reasoning about implicit world knowledge related to the causes, effects, and states of situations described in text.

Graph Construction • Knowledge Graphs • +1

Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

no code implementations • IJCNLP 2019 • Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

In this paper, we introduce Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions.

Machine Reading Comprehension
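
For reference, one way to inspect the dataset, assuming the Hugging Face datasets hosting; the "cosmos_qa" dataset id and field names are an assumption, so check the hub before relying on them.

```python
# Assumes the Hugging Face `datasets` hosting of Cosmos QA; the "cosmos_qa"
# id and field names are an assumption, check the hub before relying on them.
from datasets import load_dataset

ds = load_dataset("cosmos_qa", split="validation")
ex = ds[0]
print(ex["context"])
print(ex["question"])
for i in range(4):
    print(f"  ({i}) {ex[f'answer{i}']}")
print("gold:", ex["label"])  # index of the correct answer choice
```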

WinoGrande: An Adversarial Winograd Schema Challenge at Scale

2 code implementations • 24 Jul 2019 • Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations.

Transfer Learning

Beyond Sentential Semantic Parsing: Tackling the Math SAT with a Cascade of Tree Transducers

no code implementations • EMNLP 2017 • Mark Hopkins, Cristian Petrescu-Prahova, Roie Levin, Ronan Le Bras, Alvaro Herrasti, Vidur Joshi

We present an approach for answering questions that span multiple sentences and exhibit sophisticated cross-sentence anaphoric phenomena, evaluating on a rich source of such questions - the math portion of the Scholastic Aptitude Test (SAT).

Coreference Resolution • Question Answering • +1

Phase-Mapper: An AI Platform to Accelerate High Throughput Materials Discovery

1 code implementation • 3 Oct 2016 • Yexiang Xue, Junwen Bai, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Johan Bjorck, Liane Longpre, Santosh K. Suram, Robert B. van Dover, John Gregoire, Carla P. Gomes

A key problem in materials discovery, the phase map identification problem, involves the determination of the crystal phase diagram from the materials' composition and structural characterization data.

Variable Elimination in the Fourier Domain

no code implementations • 17 Aug 2015 • Yexiang Xue, Stefano Ermon, Ronan Le Bras, Carla P. Gomes, Bart Selman

The ability to represent complex high dimensional probability distributions in a compact form is one of the key insights in the field of graphical models.

Pattern Decomposition with Complex Combinatorial Constraints: Application to Materials Discovery

no code implementations • 27 Nov 2014 • Stefano Ermon, Ronan Le Bras, Santosh K. Suram, John M. Gregoire, Carla Gomes, Bart Selman, Robert B. van Dover

Identifying important components or factors in large amounts of noisy data is a key problem in machine learning and data mining.
