Search Results for author: Matt Gardner

Found 65 papers, 25 papers with code

COVR: A test-bed for Visually Grounded Compositional Generalization with real images

1 code implementation EMNLP 2021 Ben Bogin, Shivanshu Gupta, Matt Gardner, Jonathan Berant

Due to the automatic generation process, COVR facilitates the creation of compositional splits, where models at test time need to generalize to new concepts and compositions in a zero- or few-shot setting.

QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension

no code implementations 27 Jul 2021 Anna Rogers, Matt Gardner, Isabelle Augenstein

Question answering and reading comprehension have been particularly prolific in this regard, with over 80 new datasets appearing in the past two years.

Question Answering · Reading Comprehension

Tailor: Generating and Perturbing Text with Semantic Controls

no code implementations15 Jul 2021 Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, Matt Gardner

Making controlled perturbations is essential for various tasks (e.g., data augmentation), but building task-specific generators can be expensive.

Data Augmentation · Style Transfer

Enforcing Consistency in Weakly Supervised Semantic Parsing

1 code implementation ACL 2021 Nitish Gupta, Sameer Singh, Matt Gardner

The predominant challenge in weakly supervised semantic parsing is that of spurious programs that evaluate to correct answers for the wrong reasons.

Semantic Parsing · Visual Reasoning

Learning with Instance Bundles for Reading Comprehension

no code implementations EMNLP 2021 Dheeru Dua, Pradeep Dasigi, Sameer Singh, Matt Gardner

When training most modern reading comprehension models, all the questions associated with a context are treated as being independent from each other.

Reading Comprehension

Competency Problems: On Finding and Removing Artifacts in Language Data

no code implementations EMNLP 2021 Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith

In this work we argue that for complex language understanding tasks, all simple feature correlations are spurious, and we formalize this notion into a class of problems which we call competency problems.

Language Understanding

Paired Examples as Indirect Supervision in Latent Decision Models

no code implementations EMNLP 2021 Nitish Gupta, Sameer Singh, Matt Gardner, Dan Roth

Such an objective does not require external supervision for the values of the latent output, or even the end task, yet provides an additional training signal to that provided by individual training examples themselves.

Question Answering · Question Generation

Mitigating False-Negative Contexts in Multi-document Question Answering with Retrieval Marginalization

no code implementations EMNLP 2021 Ansong Ni, Matt Gardner, Pradeep Dasigi

We also show that retrieval marginalization results in 4.1 QA F1 improvement over a non-marginalized baseline on HotpotQA in the fullwiki setting.

Question Answering

IIRC: A Dataset of Incomplete Information Reading Comprehension Questions

no code implementations EMNLP 2020 James Ferguson, Matt Gardner, Hannaneh Hajishirzi, Tushar Khot, Pradeep Dasigi

However, most existing reading comprehension (RC) tasks only focus on questions for which the contexts provide all the information required to answer them, thus not evaluating a system's performance at identifying a potential lack of sufficient information and locating sources for that information.

Reading Comprehension

Interpreting Predictions of NLP Models

no code implementations EMNLP 2020 Eric Wallace, Matt Gardner, Sameer Singh

Although neural NLP models are highly expressive and empirically successful, they also systematically fail in counterintuitive ways and are opaque in their decision-making process.

Decision Making

MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

1 code implementation EMNLP 2020 Anthony Chen, Gabriel Stanovsky, Sameer Singh, Matt Gardner

Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers.

Question Answering · Reading Comprehension

Evaluating NLP Models via Contrast Sets

no code implementations 1 Oct 2020 Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou

Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.

Reading Comprehension · Sentiment Analysis

Understanding Mention Detector-Linker Interaction in Neural Coreference Resolution

no code implementations CRAC (ACL) 2021 Zhaofeng Wu, Matt Gardner

Despite significant recent progress in coreference resolution, the quality of current state-of-the-art systems still considerably trails behind human-level performance.

Coreference Resolution · Natural Language Understanding

Dynamic Sampling Strategies for Multi-Task Reading Comprehension

no code implementations ACL 2020 Ananth Gottumukkala, Dheeru Dua, Sameer Singh, Matt Gardner

Building general reading comprehension systems, capable of solving multiple datasets at the same time, is a recent aspirational goal in the research community.

Multi-Task Learning · Reading Comprehension

On Importance Sampling-Based Evaluation of Latent Language Models

no code implementations ACL 2020 Robert L. Logan IV, Matt Gardner, Sameer Singh

In addition, we elucidate subtle differences in how importance sampling is applied in these works that can have substantial effects on the final estimates, as well as provide theoretical results which reinforce the validity of this technique.
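As a purely illustrative sketch (not the paper's code, and with a toy discrete distribution of my own invention), the importance-sampling estimator at issue approximates a marginal probability p(x) = Σ_z p(x, z) by drawing latents from a proposal q and averaging the weighted ratios p(x, z)/q(z):

```python
import random

def importance_estimate(joint, proposal_sample, proposal_prob, n=100_000, seed=0):
    """Estimate p(x) = sum_z p(x, z) as (1/n) * sum_i joint(z_i) / q(z_i), z_i ~ q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = proposal_sample(rng)
        total += joint(z) / proposal_prob(z)
    return total / n

# Tiny discrete example with three latent values, so the exact answer is known:
p_joint = {0: 0.10, 1: 0.25, 2: 0.05}  # p(x, z); the true marginal p(x) is 0.40
q = {0: 0.5, 1: 0.3, 2: 0.2}           # proposal distribution over latents

est = importance_estimate(
    joint=lambda z: p_joint[z],
    proposal_sample=lambda rng: rng.choices([0, 1, 2], weights=[q[0], q[1], q[2]])[0],
    proposal_prob=lambda z: q[z],
)
# est converges to the exact marginal 0.40 as n grows
```

The paper's point about subtle application differences corresponds here to choices such as the proposal q and the number of samples n, which can substantially change the variance of `est`.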

Benefits of Intermediate Annotations in Reading Comprehension

no code implementations ACL 2020 Dheeru Dua, Sameer Singh, Matt Gardner

Complex compositional reading comprehension datasets require performing latent sequential decisions that are learned via supervision from the final answer.

Reading Comprehension

Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering

1 code implementation 1 Jul 2020 Ben Bogin, Sanjay Subramanian, Matt Gardner, Jonathan Berant

However, state-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples.

Question Answering · Systematic Generalization

Obtaining Faithful Interpretations from Compositional Neural Networks

1 code implementation ACL 2020 Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner

Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional structure of the problem in the network architecture.

TORQUE: A Reading Comprehension Dataset of Temporal Ordering Questions

no code implementations EMNLP 2020 Qiang Ning, Hao Wu, Rujun Han, Nanyun Peng, Matt Gardner, Dan Roth

A critical part of reading is being able to understand the temporal relationships between events described in a passage of text, even when those relationships are not explicitly stated.

Machine Reading Comprehension

Multi-Step Inference for Reasoning Over Paragraphs

no code implementations EMNLP 2020 Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata

Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives.

ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension

no code implementations 29 Dec 2019 Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner

A lot of diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to entity tracking and understanding the implications of the context.

Entity Typing · Language Understanding +3

Neural Module Networks for Reasoning over Text

2 code implementations ICLR 2020 Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, Matt Gardner

Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations.
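As an illustrative sketch only (the paper's modules are learned neural components; the passage, question, and module names below are invented stand-ins), the core idea is that a compositional question is answered by chaining small specialized modules, including discrete ones like counting:

```python
# Symbolic toy version of module composition over text.
passage = [
    {"event": "field goal", "yards": 22},
    {"event": "touchdown", "yards": 45},
    {"event": "touchdown", "yards": 12},
]

def find(events, kind):
    """Select events mentioning the queried kind (stand-in for a 'find' module)."""
    return [e for e in events if e["event"] == kind]

def filter_longer_than(events, yards):
    """Keep events above a yardage threshold (stand-in for a 'filter' module)."""
    return [e for e in events if e["yards"] > yards]

def count(events):
    """Discrete, symbolic counting operation (stand-in for a 'count' module)."""
    return len(events)

# "How many touchdowns longer than 40 yards were scored?"
answer = count(filter_longer_than(find(passage, "touchdown"), 40))
# answer == 1
```

In the actual model, a parser maps the question to this program structure and each module operates over learned representations of the text rather than symbolic records.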

On Making Reading Comprehension More Comprehensive

no code implementations WS 2019 Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min

In this work, we justify a question answering approach to reading comprehension and describe the various kinds of questions one might use to more fully test a system's comprehension of a passage, moving beyond questions that only probe local predicate-argument structures.

Machine Reading Comprehension · Question Answering

Evaluating Question Answering Evaluation

no code implementations WS 2019 Anthony Chen, Gabriel Stanovsky, Sameer Singh, Matt Gardner

Our study suggests that while current metrics may be suitable for existing QA datasets, they limit the complexity of QA datasets that can be created.

Question Answering

Comprehensive Multi-Dataset Evaluation of Reading Comprehension

no code implementations WS 2019 Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner

A lot of diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to entity tracking and understanding the implications of the context.

Entity Typing · Language Understanding +3

Question Answering is a Format; When is it Useful?

no code implementations 25 Sep 2019 Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min

In this opinion piece, we argue that question answering should be considered a format which is sometimes useful for studying particular phenomena, not a phenomenon or task in itself.

Machine Translation · Question Answering +4

AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

1 code implementation IJCNLP 2019 Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh

Neural NLP models are increasingly accurate but are imperfect and opaque---they break in counterintuitive ways and leave end users puzzled at their behavior.

Language Modelling · Reading Comprehension

Do NLP Models Know Numbers? Probing Numeracy in Embeddings

1 code implementation IJCNLP 2019 Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner

The ability to understand and work with numbers (numeracy) is critical for many complex reasoning tasks.

Question Answering

QuaRTz: An Open-Domain Dataset of Qualitative Relationship Questions

no code implementations IJCNLP 2019 Oyvind Tafjord, Matt Gardner, Kevin Lin, Peter Clark

QuaRTz contains general qualitative statements, e.g., "A sunscreen with a higher SPF protects the skin longer."

Universal Adversarial Triggers for Attacking and Analyzing NLP

1 code implementation IJCNLP 2019 Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh

We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset.

Language Modelling · Reading Comprehension
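As a purely illustrative sketch (not the paper's attack code), the deployment step described in the abstract amounts to concatenating one fixed token sequence to every input. The trigger tokens below are hypothetical placeholders; real triggers come from a gradient-guided search over the model's vocabulary, which is not shown here.

```python
def apply_trigger(trigger_tokens, input_tokens):
    """Prepend an input-agnostic trigger to a tokenized input."""
    return trigger_tokens + input_tokens

# Hypothetical trigger; actual triggers are optimized so that the
# model's target prediction is produced across an entire dataset.
trigger = ["tok_a", "tok_b", "tok_c"]

inputs = [
    ["a", "man", "rides", "a", "bike"],
    ["two", "dogs", "play", "in", "the", "snow"],
]

# The same trigger attacks every example, regardless of its content.
attacked = [apply_trigger(trigger, x) for x in inputs]
```

The "universal" property is exactly this input-agnosticism: one sequence, found once, transfers across all inputs for the task.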

Reasoning Over Paragraph Effects in Situations

no code implementations WS 2019 Kevin Lin, Oyvind Tafjord, Peter Clark, Matt Gardner

A system is presented a background passage containing at least one of these relations, a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation.

Reading Comprehension

Iterative Search for Weakly Supervised Semantic Parsing

no code implementations NAACL 2019 Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, Eduard Hovy

Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer.

Semantic Parsing · Visual Reasoning

Grammar-based Neural Text-to-SQL Generation

no code implementations 30 May 2019 Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, Matt Gardner

The sequence-to-sequence paradigm employed by neural text-to-SQL models typically performs token-level decoding and does not consider generating SQL hierarchically from a grammar.

Semantic Parsing · Text-to-SQL

Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing

1 code implementation ACL 2019 Ben Bogin, Matt Gardner, Jonathan Berant

Research on parsing language to SQL has largely ignored the structure of the database (DB) schema, either because the DB was very simple, or because it was observed at both training and test time.

SQL Parsing · Text-to-SQL

Linguistic Knowledge and Transferability of Contextual Representations

no code implementations NAACL 2019 Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith

Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language.

Language Modelling

QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships

no code implementations 20 Nov 2018 Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, Ashish Sabharwal

Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods.

Semantic Parsing

Structured Alignment Networks for Matching Sentences

no code implementations EMNLP 2018 Yang Liu, Matt Gardner, Mirella Lapata

We evaluate this model on two tasks, natural entailment detection and answer sentence selection, and find that modeling latent tree structures results in superior performance.

Natural Language Inference · Question Answering

Neural Semantic Parsing

no code implementations ACL 2018 Matt Gardner, Pradeep Dasigi, Srinivasan Iyer, Alane Suhr, Luke Zettlemoyer

Semantic parsing, the study of translating natural language utterances into machine-executable programs, is a well-established research area and has applications in question answering, instruction following, voice assistants, and code generation.

Code Generation · Machine Translation +3

Deep contextualized word representations

44 code implementations NAACL 2018 Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).

Ranked #2 on Citation Intent Classification on ACL-ARC (using extra training data)

Citation Intent Classification · Conversational Response Selection +7


no code implementations ICLR 2018 Yang Liu, Matt Gardner

Using a structured attention mechanism, our model matches possible spans in the first sentence to possible spans in the second sentence, simultaneously discovering the tree structure of each sentence and performing a comparison, in a model that is fully differentiable and is trained only on the comparison objective.

Natural Language Inference

Simple and Effective Multi-Paragraph Reading Comprehension

1 code implementation ACL 2018 Christopher Clark, Matt Gardner

We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input.

Question Answering · Reading Comprehension

Crowdsourcing Multiple Choice Science Questions

no code implementations WS 2017 Johannes Welbl, Nelson F. Liu, Matt Gardner

With this method we have assembled SciQ, a dataset of 13.7K multiple choice science exam questions (Dataset available at http://allenai.org/data.html).

Question Generation

Open-Vocabulary Semantic Parsing with both Distributional Statistics and Formal Knowledge

1 code implementation 12 Jul 2016 Matt Gardner, Jayant Krishnamurthy

However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models.

Question Answering · Semantic Parsing
