Search Results for author: Matthew Lamm

Found 12 papers, 5 papers with code

Measuring Attribution in Natural Language Generation Models

no code implementations 23 Dec 2021 Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, David Reitter

With recent improvements in natural language generation (NLG) models for various applications, it has become imperative to have the means to identify and evaluate whether NLG output is only sharing verifiable information about the external world.

Text Generation

Retrieval-guided Counterfactual Generation for QA

no code implementations ACL 2022 Bhargavi Paranjape, Matthew Lamm, Ian Tenney

To address these challenges, we develop a Retrieve-Generate-Filter (RGF) technique to create counterfactual evaluation and training data with minimal human supervision.

Data Augmentation · Question Answering · +2
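The entry above only names the three stages of the Retrieve-Generate-Filter idea. The sketch below is a hypothetical illustration of such a loop; the lexical-overlap retriever, the small T5 generator, and the answer-mismatch filter are placeholder choices, not the models or criteria used in the paper.

```python
# Hypothetical Retrieve-Generate-Filter (RGF) sketch for counterfactual QA data.
# The retriever, generator, and filter below are simple stand-ins, not the paper's models.
from transformers import pipeline

generator = pipeline("text2text-generation", model="t5-small")
reader = pipeline("question-answering")

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank passages by naive token overlap with the original question."""
    q_tokens = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q_tokens & set(p.lower().split())), reverse=True)
    return ranked[:k]

def generate_questions(passage: str, n: int = 3) -> list[str]:
    """Ask a seq2seq model to propose questions grounded in the retrieved passage."""
    outputs = generator(f"generate question: {passage}",
                        num_return_sequences=n, do_sample=True, max_length=64)
    return [o["generated_text"] for o in outputs]

def keep(candidate: str, passage: str, original_answer: str) -> bool:
    """Filter: keep candidates whose extracted answer differs from the original answer."""
    new_answer = reader(question=candidate, context=passage)["answer"]
    return new_answer.strip().lower() != original_answer.strip().lower()

def rgf(question: str, original_answer: str, corpus: list[str]) -> list[str]:
    counterfactuals = []
    for passage in retrieve(question, corpus):
        for cand in generate_questions(passage):
            if keep(cand, passage, original_answer):
                counterfactuals.append(cand)
    return counterfactuals
```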

Decontextualization: Making Sentences Stand-Alone

no code implementations 9 Feb 2021 Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, Michael Collins

Models for question answering, dialogue agents, and summarization often interpret the meaning of a sentence in a rich context and use that meaning in a new context.

Question Answering

QED: A Framework and Dataset for Explanations in Question Answering

1 code implementation 8 Sep 2020 Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins

A question answering system that, in addition to providing an answer, provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility, and trust.

Explanation Generation · Question Answering

Compositional Generalization in Image Captioning

1 code implementation CoNLL 2019 Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Aralikatte, Desmond Elliott

Image captioning models are usually evaluated on their ability to describe a held-out set of images, not on their ability to generalize to unseen concepts.

Image Captioning

Ellipsis Resolution as Question Answering: An Evaluation

1 code implementation EACL 2021 Rahul Aralikatte, Matthew Lamm, Daniel Hardt, Anders Søgaard

Most, if not all, forms of ellipsis (e.g., "so does Mary") are similar to reading comprehension questions ("what does Mary do"), in that, to resolve them, we need to identify an appropriate text span in the preceding discourse.

Coreference Resolution · Machine Reading Comprehension · +2
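The framing in the abstract above reduces ellipsis resolution to extractive question answering over the preceding discourse. The snippet below is only an illustrative sketch of that reduction using an off-the-shelf extractive QA pipeline; the hand-written question rewrite and the default model are assumptions, not the systems evaluated in the paper.

```python
# Illustrative only: resolve an ellipsis by rephrasing it as a reading-comprehension
# question and extracting an answer span from the preceding discourse.
from transformers import pipeline

qa = pipeline("question-answering")  # any extractive QA checkpoint

discourse = "John plays the violin every evening after dinner. So does Mary."
# Rephrase the elliptical clause "So does Mary" as an explicit question (done by hand here).
question = "What does Mary do?"

result = qa(question=question, context=discourse)
print(result["answer"])  # expected span: something like "plays the violin"
```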

Textual Analogy Parsing: What's Shared and What's Compared among Analogous Facts

2 code implementations EMNLP 2018 Matthew Lamm, Arun Tejasvi Chaganty, Christopher D. Manning, Dan Jurafsky, Percy Liang

To understand a sentence like "whereas only 10% of White Americans live at or below the poverty line, 28% of African Americans do", it is important not only to identify individual facts, e.g., poverty rates of distinct demographic groups, but also the higher-order relations between them, e.g., the disparity between them.

Frame · Textual Analogy Parsing
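The example sentence quoted above is the kind of input an analogy frame is meant to capture: a shared predicate plus the values being compared across analogous facts. The toy structure below is only a rough illustration of that idea; the class and field names are hypothetical and do not follow the paper's annotation schema.

```python
# Hypothetical representation of an analogy frame for the quoted example sentence.
# Field names are illustrative, not the paper's schema.
from dataclasses import dataclass

@dataclass
class AnalogyFrame:
    shared_predicate: str      # what the compared facts have in common
    compared_attribute: str    # the quantity that differs across facts
    facts: dict[str, str]      # one entry per analogous fact

frame = AnalogyFrame(
    shared_predicate="live at or below the poverty line",
    compared_attribute="poverty rate",
    facts={"White Americans": "10%", "African Americans": "28%"},
)
print(frame)
```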

Learning a SAT Solver from Single-Bit Supervision

5 code implementations ICLR 2019 Daniel Selsam, Matthew Lamm, Benedikt Bünz, Percy Liang, Leonardo de Moura, David L. Dill

We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability.
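The abstract above describes NeuroSAT's setup: message passing over the literal-clause graph of a CNF formula, trained only on a single satisfiable/unsatisfiable bit. The PyTorch sketch below is a heavily simplified, assumed illustration of that idea (dense incidence matrix, GRU updates, a few synchronous rounds, mean-pooled vote), not the paper's architecture.

```python
# Simplified, hypothetical sketch of a NeuroSAT-style message-passing classifier.
# It reads the clause-literal incidence matrix of a CNF formula and predicts one
# satisfiability bit; the actual model uses LSTM updates and many more rounds.
import torch
import torch.nn as nn

class TinySATNet(nn.Module):
    def __init__(self, dim: int = 64, rounds: int = 8):
        super().__init__()
        self.rounds = rounds
        self.lit_init = nn.Parameter(torch.randn(dim))
        self.cls_init = nn.Parameter(torch.randn(dim))
        self.lit_msg = nn.Linear(dim, dim)
        self.cls_msg = nn.Linear(dim, dim)
        self.lit_update = nn.GRUCell(dim, dim)
        self.cls_update = nn.GRUCell(dim, dim)
        self.vote = nn.Linear(dim, 1)

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        # adj: (num_clauses, num_literals) 0/1 incidence matrix of the CNF formula.
        n_cls, n_lit = adj.shape
        lits = self.lit_init.expand(n_lit, -1).contiguous()
        clauses = self.cls_init.expand(n_cls, -1).contiguous()
        for _ in range(self.rounds):
            # Clauses aggregate messages from their literals, then literals from their clauses.
            clauses = self.cls_update(adj @ self.lit_msg(lits), clauses)
            lits = self.lit_update(adj.t() @ self.cls_msg(clauses), lits)
        # Each literal casts a vote; the mean vote is the satisfiability logit.
        return self.vote(lits).mean()

# Toy usage: (x1 v x2) & (~x1 v x2) over literals [x1, ~x1, x2, ~x2].
adj = torch.tensor([[1., 0., 1., 0.],
                    [0., 1., 1., 0.]])
logit = TinySATNet()(adj)
print(torch.sigmoid(logit))  # untrained, so this is just an arbitrary probability
```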

The Pragmatics of Indirect Commands in Collaborative Discourse

no code implementations WS 2017 Matthew Lamm, Mihail Eric

We focus on a less understood family of utterances for eliciting agent action, locatives like "The chair is in the other room", and demonstrate how these utterances indirectly command in specific game state contexts.

Graph Neural Networks and Boolean Satisfiability

no code implementations 12 Feb 2017 Benedikt Bünz, Matthew Lamm

In a weakly-supervised setting, that is, without problem-specific feature engineering, Graph Neural Networks can learn features of satisfiability.

Feature Engineering
