Search Results for author: Maxwell Forbes

Found 17 papers, 9 papers with code

Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text

no code implementations ACL 2022 Yao Dou, Maxwell Forbes, Rik Koncel-Kedziorski, Noah A. Smith, Yejin Choi

To support the broad range of real machine errors that can be identified by laypeople, the ten error categories of Scarecrow -- such as redundancy, commonsense errors, and incoherence -- are identified through several rounds of crowd annotation experiments without a predefined ontology.

Tasks: Math, Text Generation

Social Chemistry 101: Learning to Reason about Social and Moral Norms

2 code implementations EMNLP 2020 Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, Yejin Choi

We present Social Chemistry, a new conceptual formalism to study people's everyday social norms and moral judgments over a rich spectrum of real life situations described in natural language.

Tasks: Attribute

Paragraph-level Commonsense Transformers with Recurrent Memory

1 code implementation 4 Oct 2020 Saadia Gabriel, Chandra Bhagavatula, Vered Shwartz, Ronan Le Bras, Maxwell Forbes, Yejin Choi

Human understanding of narrative texts requires making commonsense inferences beyond what is stated explicitly in the text.

Tasks: Sentence, World Knowledge

Neural Naturalist: Generating Fine-Grained Image Comparisons

no code implementations IJCNLP 2019 Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, Serge Belongie

We introduce the new Birds-to-Words dataset of 41k sentences describing fine-grained differences between photographs of birds.

Do Neural Language Representations Learn Physical Commonsense?

1 code implementation 8 Aug 2019 Maxwell Forbes, Ari Holtzman, Yejin Choi

Humans understand language based on the rich background knowledge about how the physical world works, which in turn allows us to reason about the physical world through language.

Tasks: Natural Language Inference, Physical Commonsense Reasoning

The Curious Case of Neural Text Degeneration

16 code implementations ICLR 2020 Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi

Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators.

Tasks: Language Modelling
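This ICLR 2020 paper is best known for proposing Nucleus (Top-p) Sampling, a decoding strategy that samples the next token only from the smallest set of most-probable tokens whose cumulative probability reaches a threshold p. The sketch below is a minimal, illustrative implementation over a toy next-token distribution; the function name, the toy probabilities, and the threshold value are assumptions for demonstration, not code from the paper.

    import numpy as np

    def nucleus_sample(probs, p=0.9, rng=None):
        """Sample a token index from the smallest set of most-probable tokens
        whose cumulative probability mass reaches at least p (the "nucleus")."""
        if rng is None:
            rng = np.random.default_rng()
        order = np.argsort(probs)[::-1]               # token indices, most probable first
        cumulative = np.cumsum(probs[order])          # running probability mass
        nucleus_size = np.searchsorted(cumulative, p) + 1
        nucleus = order[:nucleus_size]                # tokens kept for sampling
        renormalized = probs[nucleus] / probs[nucleus].sum()
        return rng.choice(nucleus, p=renormalized)

    # Toy next-token distribution over a 5-token vocabulary (illustrative only).
    token_probs = np.array([0.5, 0.25, 0.15, 0.07, 0.03])
    print(nucleus_sample(token_probs, p=0.9))         # samples only from the top-3 tokens

With p=0.9 the cumulative mass of the three most probable tokens (0.5 + 0.25 + 0.15) already reaches the threshold, so the unreliable tail of the distribution is never sampled, which is the degeneration-avoiding behavior the paper studies.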

Balancing Shared Autonomy with Human-Robot Communication

no code implementations 20 May 2018 Rosario Scalise, Yonatan Bisk, Maxwell Forbes, Daqing Yi, Yejin Choi, Siddhartha Srinivasa

Robotic agents that share autonomy with a human should leverage human domain knowledge and account for their preferences when completing a task.

Learning to Write with Cooperative Discriminators

2 code implementations ACL 2018 Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, Yejin Choi

Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models, but when used to generate natural language their output tends to be overly generic, repetitive, and self-contradictory.

Learning to Write by Learning the Objective

no code implementations ICLR 2018 Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, Yejin Choi

Human evaluation demonstrates that text generated by the resulting generator is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.

Tasks: Language Modelling

Verb Physics: Relative Physical Knowledge of Actions and Objects

no code implementations ACL 2017 Maxwell Forbes, Yejin Choi

Learning commonsense knowledge from natural language text is nontrivial due to reporting bias: people rarely state the obvious, e.g., "My house is bigger than me."
