Search Results for author: Antoine Bosselut

Found 34 papers, 12 papers with code

Fast Model Editing at Scale

1 code implementation 21 Oct 2021 Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, Christopher D. Manning

To enable easy post-hoc editing at scale, we propose Model Editor Networks with Gradient Decomposition (MEND), a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model.

Language Modelling
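
The MEND summary above is high level. A minimal illustrative sketch of the general idea, assuming a PyTorch model with a single designated target weight matrix and a hypothetical TinyEditor network standing in for MEND's actual low-rank gradient decomposition:

    import torch
    import torch.nn as nn

    class TinyEditor(nn.Module):
        # Hypothetical auxiliary editing network: maps the gradient of one
        # target weight matrix to a same-shaped update. (The real MEND acts
        # on a low-rank decomposition of this gradient; that detail is omitted.)
        def __init__(self, n_out: int, n_in: int, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_out * n_in, hidden),
                nn.ReLU(),
                nn.Linear(hidden, n_out * n_in),
            )

        def forward(self, grad: torch.Tensor) -> torch.Tensor:
            return self.net(grad.flatten()).view_as(grad)

    def apply_edit(model, target_weight, editor, loss_fn, x_edit, y_edit):
        # Fast, local edit from a single desired input-output pair.
        loss = loss_fn(model(x_edit), y_edit)                 # loss on the edit example
        (grad,) = torch.autograd.grad(loss, [target_weight])  # raw fine-tuning gradient
        with torch.no_grad():
            target_weight -= editor(grad)                     # editor maps it to the edit

Training the editor itself, so that an edit fixes the target prediction without disturbing unrelated behavior, is the substance of the paper and is not shown here.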

On the Opportunities and Risks of Foundation Models

1 code implementation 16 Aug 2021 Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, Percy Liang

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks.

Transfer Learning

"I'm Not Mad": Commonsense Implications of Negation and Contradiction

no code implementations NAACL 2021 Liwei Jiang, Antoine Bosselut, Chandra Bhagavatula, Yejin Choi

In this paper, we present the first comprehensive study focusing on commonsense implications of negated statements and contradictions.

Natural Language Inference

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

3 code implementations NAACL 2021 Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, Jure Leskovec

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.

Common Sense Reasoning · Graph Representation Learning +4
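
For challenge (i), one common way to prune a large KG to question-relevant nodes is to score each candidate node's text with a language model conditioned on the QA context and keep the top scorers. The sketch below is a hedged illustration of that general idea using GPT-2 from Hugging Face as a stand-in scorer; it is not the paper's exact relevance model, and the GNN reasoning stage for challenge (ii) is not shown:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def relevance_score(qa_context: str, node_text: str) -> float:
        # Average log-likelihood of the node text appended to the QA context;
        # higher means the KG node looks more relevant to this question.
        ids = tok(qa_context + " " + node_text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = lm(ids, labels=ids)   # loss is the mean negative log-likelihood
        return -out.loss.item()

    def prune_nodes(qa_context: str, candidate_nodes: list[str], k: int = 20) -> list[str]:
        # Keep only the k highest-scoring nodes for the joint reasoning stage.
        ranked = sorted(candidate_nodes, key=lambda n: relevance_score(qa_context, n), reverse=True)
        return ranked[:k]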

"I'm Not Mad": Commonsense Implications of Negation and Contradiction

no code implementations 13 Apr 2021 Liwei Jiang, Antoine Bosselut, Chandra Bhagavatula, Yejin Choi

In this paper, we present the first comprehensive study focusing on commonsense implications of negated statements and contradictions.

Natural Language Inference

On-the-Fly Attention Modulation for Neural Generation

no code implementations Findings (ACL) 2021 Yue Dong, Chandra Bhagavatula, Ximing Lu, Jena D. Hwang, Antoine Bosselut, Jackie Chi Kit Cheung, Yejin Choi

Despite considerable advancements with deep neural language models (LMs), neural text generation still suffers from degeneration: the generated text is repetitive, generic, self-contradictory, and often lacks commonsense.

Language Modelling · Text Generation

Analyzing Commonsense Emergence in Few-shot Knowledge Models

1 code implementation AKBC 2021 Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut

Our results show that commonsense knowledge models can rapidly adapt from limited examples, indicating that KG fine-tuning serves to learn an interface to encoded knowledge learned during pretraining.

The Amazing World of Neural Language Generation

no code implementations EMNLP 2020 Yangfeng Ji, Antoine Bosselut, Thomas Wolf, Asli Celikyilmaz

Neural Language Generation (NLG), using neural network models to generate coherent text, is among the most promising methods for automated text creation.

Language Modelling · Text Generation +1

COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

no code implementations 12 Oct 2020 Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi

Next, we show that ATOMIC 2020 is better suited for training knowledge models that can generate accurate, representative knowledge for new, unseen entities and events.

Knowledge Graphs · Natural Language Understanding

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

1 code implementation EMNLP 2020 Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena Hwang, Ronan Le Bras, Antoine Bosselut, Yejin Choi

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future.

Text Infilling

Commonsense Reasoning for Natural Language Processing

no code implementations ACL 2020 Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, Dan Roth

We organize this tutorial to provide researchers with the critical foundations and recent advances in commonsense representation and reasoning, in the hopes of casting a brighter light on this promising area of future research.

Procedural Reading Comprehension with Attribute-Aware Context Flow

no code implementations AKBC 2020 Aida Amini, Antoine Bosselut, Bhavana Dalvi Mishra, Yejin Choi, Hannaneh Hajishirzi

Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food).

Reading Comprehension

Dynamic Neuro-Symbolic Knowledge Graph Construction for Zero-shot Commonsense Question Answering

no code implementations 10 Nov 2019 Antoine Bosselut, Ronan Le Bras, Yejin Choi

Understanding narratives requires reasoning about implicit world knowledge related to the causes, effects, and states of situations described in text.

graph construction · Knowledge Graphs +1

WIQA: A dataset for "What if..." reasoning over procedural text

no code implementations IJCNLP 2019 Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, Antoine Bosselut

We introduce WIQA, the first large-scale dataset of "What if..." questions over procedural text.

Commonsense Knowledge Base Completion with Structural and Semantic Context

no code implementations 7 Oct 2019 Chaitanya Malaviya, Chandra Bhagavatula, Antoine Bosselut, Yejin Choi

Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency.

Knowledge Base Completion · Knowledge Graphs +3

WIQA: A dataset for "What if..." reasoning over procedural text

1 code implementation 10 Sep 2019 Niket Tandon, Bhavana Dalvi Mishra, Keisuke Sakaguchi, Antoine Bosselut, Peter Clark

We introduce WIQA, the first large-scale dataset of "What if..." questions over procedural text.

Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text

no code implementations IJCNLP 2019 Bhavana Dalvi Mishra, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark

Our goal is to better comprehend procedural text, e.g., a paragraph about photosynthesis, by not only predicting what happens, but why some actions need to happen before others.

Reading Comprehension

Counterfactual Story Reasoning and Generation

1 code implementation IJCNLP 2019 Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, Yejin Choi

Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes.

Text Generation

Discourse Understanding and Factual Consistency in Abstractive Summarization

no code implementations EACL 2021 Saadia Gabriel, Antoine Bosselut, Jeff Da, Ari Holtzman, Jan Buys, Kyle Lo, Asli Celikyilmaz, Yejin Choi

We introduce a general framework for abstractive summarization with factual consistency and distinct modeling of the narrative flow in an output summary.

Abstractive Text Summarization

Be Consistent! Improving Procedural Text Comprehension using Label Consistency

1 code implementation NAACL 2019 Xinya Du, Bhavana Dalvi Mishra, Niket Tandon, Antoine Bosselut, Wen-tau Yih, Peter Clark, Claire Cardie

Our goal is procedural text comprehension, namely tracking how the properties of entities (e.g., their location) change with time given a procedural text (e.g., a paragraph about photosynthesis, a recipe).

Reading Comprehension

COMET: Commonsense Transformers for Automatic Knowledge Graph Construction

2 code implementations ACL 2019 Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi

We present the first comprehensive study on automatic knowledge base construction for two prevalent commonsense knowledge graphs: ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017).

graph construction · Knowledge Graphs
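
The knowledge-model recipe that COMET popularized can be illustrated with a short hedged sketch: fine-tune a pretrained LM on (head, relation, tail) tuples serialized as text, then generate tail phrases for unseen heads and relations. The serialization format and the "[GEN]" marker below are assumptions for illustration, not the paper's exact setup:

    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def format_tuple(head: str, relation: str, tail: str = "") -> str:
        # Serialize a KG tuple as text; "[GEN]" is an assumed separator marker.
        prompt = f"{head} {relation} [GEN]"
        return f"{prompt} {tail}".strip()

    # Training (not shown) would minimize standard LM loss on strings such as
    #   "PersonX goes to the store xIntent [GEN] to buy groceries"
    # so that, at inference time, the model can generate tails for new heads:
    def generate_tail(head: str, relation: str, max_new_tokens: int = 16) -> str:
        ids = tok(format_tuple(head, relation), return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
        return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)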

Efficient Adaptation of Pretrained Transformers for Abstractive Summarization

2 code implementations 1 Jun 2019 Andrew Hoang, Antoine Bosselut, Asli Celikyilmaz, Yejin Choi

Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks.

Abstractive Text Summarization · Natural Language Understanding

Reasoning about Actions and State Changes by Injecting Commonsense Knowledge

1 code implementation EMNLP 2018 Niket Tandon, Bhavana Dalvi Mishra, Joel Grus, Wen-tau Yih, Antoine Bosselut, Peter Clark

Comprehending procedural text, e.g., a paragraph describing photosynthesis, requires modeling actions and the state changes they produce, so that questions about entities at different timepoints can be answered.

Reading Comprehension · Structured Prediction

Modeling Naive Psychology of Characters in Simple Commonsense Stories

no code implementations ACL 2018 Hannah Rashkin, Antoine Bosselut, Maarten Sap, Kevin Knight, Yejin Choi

Understanding a narrative requires reading between the lines and reasoning about the unspoken but obvious implications about events and people's mental states - a capability that is trivial for humans but remarkably hard for machines.

Emotion Classification

Learning to Write with Cooperative Discriminators

2 code implementations ACL 2018 Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, Yejin Choi

Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory.

Discourse-Aware Neural Rewards for Coherent Text Generation

no code implementations NAACL 2018 Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, Yejin Choi

In this paper, we investigate the use of discourse-aware rewards with reinforcement learning to guide a model to generate long, coherent text.

Sentence Ordering · Text Generation
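
A hedged sketch of what "discourse-aware rewards with reinforcement learning" can look like in practice is a REINFORCE-style update whose reward scores discourse coherence (for example, a learned sentence-ordering scorer). The generator.sample_with_logprobs and discourse_reward calls below are hypothetical stand-ins, not the paper's components:

    def rl_step(generator, optimizer, prompt, discourse_reward, baseline=0.0):
        # One REINFORCE-style update with a discourse-aware reward.
        tokens, logprobs = generator.sample_with_logprobs(prompt)  # sampled text + per-token log-probs (tensor)
        reward = discourse_reward(tokens)                          # scalar coherence / ordering score
        loss = -(reward - baseline) * logprobs.sum()               # policy-gradient objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return reward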

Deep Communicating Agents for Abstractive Summarization

no code implementations NAACL 2018 Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, Yejin Choi

We present deep communicating agents in an encoder-decoder architecture to address the challenges of representing a long document for abstractive summarization.

Ranked #18 on Abstractive Text Summarization on CNN / Daily Mail (using extra training data)

Abstractive Text Summarization

Learning to Write by Learning the Objective

no code implementations ICLR 2018 Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, Yejin Choi

Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.

Language Modelling

Simulating Action Dynamics with Neural Process Networks

no code implementations ICLR 2018 Antoine Bosselut, Omer Levy, Ari Holtzman, Corin Ennis, Dieter Fox, Yejin Choi

Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated.
