Search Results for author: Tom Kwiatkowski

Found 25 papers, 10 papers with code

Decontextualization: Making Sentences Stand-Alone

no code implementations • 9 Feb 2021 • Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, Michael Collins

Models for question answering, dialogue agents, and summarization often interpret the meaning of a sentence in a rich context and use that meaning in a new context.

Question Answering
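
As a hedged illustration of the decontextualization task described above, the sketch below shows one invented input/output pair; the sentences and field names are made up for this example and are not drawn from the paper's data.

```python
# Hypothetical example of decontextualization: rewrite a sentence from a
# document so that it can be understood on its own, outside that document.
example = {
    "context": "Marie Curie won the Nobel Prize in Physics in 1903. "
               "She shared it with her husband and Henri Becquerel.",
    "sentence": "She shared it with her husband and Henri Becquerel.",
    # The rewrite resolves pronouns and other context-dependent references.
    "decontextualized": "Marie Curie shared the 1903 Nobel Prize in Physics "
                        "with her husband and Henri Becquerel.",
}

print(example["sentence"])
print(example["decontextualized"])
```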

Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking

no code implementations • AKBC 2020 • Thibault Févry, Nicholas FitzGerald, Livio Baldini Soares, Tom Kwiatkowski

In this work, we present an entity linking model which combines a Transformer architecture with large scale pretraining from Wikipedia links.

Entity Linking
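
A minimal sketch of the retrieval-style entity linking idea behind the paper above, assuming a dot-product score between a mention encoding and candidate entity embeddings; the encoder, entity names, vectors, and dimensions are placeholders, not the paper's released model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy entity embedding table standing in for embeddings learned with
# large-scale pretraining from Wikipedia links; values are placeholders.
entities = ["Paris", "Paris_Hilton", "Paris_Texas"]
entity_emb = rng.normal(size=(len(entities), 64))

def encode_mention(mention: str, context: str) -> np.ndarray:
    """Placeholder for a Transformer mention encoder: derives a
    pseudo-random vector from the input so the example runs end to end."""
    seed = abs(hash((mention, context))) % (2**32)
    return np.random.default_rng(seed).normal(size=64)

def link(mention: str, context: str) -> str:
    # Score the mention against every candidate entity by dot product
    # and return the highest-scoring entity.
    scores = entity_emb @ encode_mention(mention, context)
    return entities[int(np.argmax(scores))]

print(link("Paris", "She flew from Paris to Berlin last week."))
```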

Entities as Experts: Sparse Memory Access with Entity Supervision

1 code implementation • EMNLP 2020 • Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, Tom Kwiatkowski

We introduce a new model - Entities as Experts (EAE) - that can access distinct memories of the entities mentioned in a piece of text.

Language Modelling • TriviaQA
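
A minimal sketch, under simplifying assumptions, of the entity-memory idea in Entities as Experts: token representations at detected mention positions retrieve and absorb a per-entity memory vector. The mention detector, entities, and vectors here are invented placeholders rather than the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32

# Toy entity memory: one vector per entity (placeholder values).
entity_memory = {"Charles_Darwin": rng.normal(size=d),
                 "HMS_Beagle": rng.normal(size=d)}

# Token representations for a short passage (placeholder encoder output).
tokens = ["Darwin", "sailed", "on", "the", "Beagle"]
token_reps = rng.normal(size=(len(tokens), d))

# Simplified mention detector: a fixed mapping from token index to entity.
mentions = {0: "Charles_Darwin", 4: "HMS_Beagle"}

# Inject the retrieved entity memory into the mention tokens, mimicking
# sparse memory access: only mentioned entities are touched.
for idx, ent in mentions.items():
    token_reps[idx] = token_reps[idx] + entity_memory[ent]

print(token_reps.shape)
```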

Learning Cross-Context Entity Representations from Text

no code implementations • 11 Jan 2020 • Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault Févry, David Weiss, Tom Kwiatkowski

Language modeling tasks, in which words, or word-pieces, are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases.

Entity Linking • Language Modelling • +1
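
A hedged sketch of the fill-in-the-blank style training signal suggested by the paper above: a context with the entity blanked out is encoded and scored against an entity vocabulary via a softmax. The encoder, vocabulary size, and vectors are placeholder assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
d, num_entities = 16, 5

# Placeholder entity vocabulary embedding matrix to be learned.
entity_emb = rng.normal(size=(num_entities, d))

def encode_context(text: str) -> np.ndarray:
    """Stand-in for an encoder over a context with the entity blanked out."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).normal(size=d)

# Cross-context objective (sketch): the blanked context should assign the
# highest softmax probability to its true entity.
context = "[BLANK] wrote On the Origin of Species."
logits = entity_emb @ encode_context(context)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # training would push probability mass to the true entity
```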

Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index

1 code implementation • ACL 2019 • Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur P. Parikh, Ali Farhadi, Hannaneh Hajishirzi

Existing open-domain question answering (QA) models are not suitable for real-time usage because they need to process several long documents on-demand for every input query.

Open-Domain Question Answering
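
A minimal sketch of the real-time motivation above: candidate answer phrases are encoded and indexed once offline, so answering a new question is only a nearest-neighbor lookup against that index instead of re-reading documents per query. Only a dense index is sketched here, and the encoders, phrases, and vectors are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 48

# Offline step (done once): encode every candidate answer phrase in the
# corpus and store the vectors in an index. Vectors here are placeholders.
phrases = ["in 1903", "Henri Becquerel", "the HMS Beagle", "Paris"]
phrase_index = rng.normal(size=(len(phrases), d))

def encode_question(q: str) -> np.ndarray:
    """Placeholder question encoder; independent of any document."""
    seed = abs(hash(q)) % (2**32)
    return np.random.default_rng(seed).normal(size=d)

# Online step (per query): no documents are re-encoded; the answer is the
# phrase whose precomputed vector scores highest against the question.
q_vec = encode_question("When did Marie Curie win the Nobel Prize?")
answer = phrases[int(np.argmax(phrase_index @ q_vec))]
print(answer)  # with trained encoders this lookup would return the answer
```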

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

1 code implementation • NAACL 2019 • Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova

In this paper we study yes/no questions that are naturally occurring, meaning that they are generated in unprompted and unconstrained settings.

Reading Comprehension • Transfer Learning

Learning Entity Representations for Few-Shot Reconstruction of Wikipedia Categories

no code implementations • ICLR Workshop LLD 2019 • Jeffrey Ling, Nicholas FitzGerald, Livio Baldini Soares, David Weiss, Tom Kwiatkowski

Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases.

Entity Typing • Language Modelling • +1

Incremental Reading for Question Answering

no code implementations • 15 Jan 2019 • Samira Abnar, Tania Bedrax-Weiss, Tom Kwiatkowski, William W. Cohen

Current state-of-the-art question answering models reason over an entire passage, not incrementally.

Continual Learning • Question Answering
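
A hedged sketch of the incremental-reading contrast drawn above: instead of encoding the entire passage at once, the reader folds one sentence at a time into a fixed-size running state. The update function, passage, and dimensions are invented for illustration.

```python
import numpy as np

d = 24

passage = [
    "Darwin boarded the HMS Beagle in 1831.",
    "The voyage lasted almost five years.",
    "His observations later shaped On the Origin of Species.",
]

def read_sentence(state: np.ndarray, sentence: str) -> np.ndarray:
    """Placeholder incremental update: fold one sentence into the running
    state rather than re-encoding the whole passage."""
    seed = abs(hash(sentence)) % (2**32)
    sent_vec = np.random.default_rng(seed).normal(size=d)
    return np.tanh(state + sent_vec)

state = np.zeros(d)
for sentence in passage:
    state = read_sentence(state, sentence)  # one pass, sentence by sentence

print(state.shape)  # a fixed-size summary built incrementally
```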

Phrase-Indexed Question Answering: A New Challenge for Scalable Document Comprehension

1 code implementation • EMNLP 2018 • Minjoon Seo, Tom Kwiatkowski, Ankur P. Parikh, Ali Farhadi, Hannaneh Hajishirzi

We formalize a new modular variant of current question answering tasks by enforcing complete independence of the document encoder from the question encoder.

Question Answering • Reading Comprehension • +1
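
A minimal sketch of the independence constraint described above: the document-side encoder never sees the question, the question-side encoder never sees the document, and the only interaction between the two is a similarity score over indexed phrase vectors. Both encoders, the phrase list, and the vectors are placeholders.

```python
import numpy as np

d = 32

def encode_document_phrases(doc: str) -> dict:
    """Placeholder document-side encoder: maps each candidate phrase to a
    vector without ever seeing the question."""
    rng = np.random.default_rng(abs(hash(doc)) % (2**32))
    phrases = doc.split(", ")
    return {p: rng.normal(size=d) for p in phrases}

def encode_question(q: str) -> np.ndarray:
    """Placeholder question-side encoder: never sees the document."""
    return np.random.default_rng(abs(hash(q)) % (2**32)).normal(size=d)

# The only interaction allowed between the two encoders is this score.
doc_vectors = encode_document_phrases("the Nobel Prize, Henri Becquerel, 1903")
q_vec = encode_question("Who shared the prize?")
best = max(doc_vectors, key=lambda p: float(doc_vectors[p] @ q_vec))
print(best)
```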

Multi-Mention Learning for Reading Comprehension with Neural Cascades

no code implementations • ICLR 2018 • Swabha Swayamdipta, Ankur P. Parikh, Tom Kwiatkowski

Reading comprehension is a challenging task, especially when executed across longer documents or across multiple evidence documents, where the answer is likely to reoccur.

Reading Comprehension • TriviaQA
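
A hedged sketch of aggregating evidence over repeated answer mentions, which the abstract above points to as the key property of longer and multi-document settings. Log-sum-exp pooling over per-mention scores is used here as one common choice; the paper's exact cascade and objective may differ, and the scores are invented.

```python
import numpy as np

# Placeholder per-mention scores for candidate answers that appear in
# several places across long or multiple evidence documents.
mention_scores = {
    "1903": np.array([2.1, 0.4, 1.7]),   # mentioned three times
    "1911": np.array([1.9]),              # mentioned once
}

def aggregate(scores: np.ndarray) -> float:
    # Log-sum-exp aggregation rewards answers supported by many mentions
    # (one common choice; not necessarily the paper's exact objective).
    return float(np.log(np.exp(scores).sum()))

best = max(mention_scores, key=lambda a: aggregate(mention_scores[a]))
print(best)  # the multiply-mentioned candidate wins under these toy scores
```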

Learning Recurrent Span Representations for Extractive Question Answering

1 code implementation • 4 Nov 2016 • Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant

In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network.

Answer Selection • Extractive Question-Answering • +2
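
A minimal sketch of scoring every span through a fixed-length representation, assuming each span is represented by concatenating the recurrent hidden states at its two endpoints; the hidden states, weights, and this particular composition are simplifying placeholders rather than the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(5)
n_tokens, d = 6, 8

# Stand-in for the output of a bidirectional RNN over the document tokens.
hidden = rng.normal(size=(n_tokens, d))

# Fixed-length representation for every span (i, j): concatenate the
# boundary hidden states, then score with a linear layer (placeholder w).
w = rng.normal(size=2 * d)
best_span, best_score = None, -np.inf
for i in range(n_tokens):
    for j in range(i, n_tokens):
        span_rep = np.concatenate([hidden[i], hidden[j]])
        score = float(w @ span_rep)
        if score > best_score:
            best_span, best_score = (i, j), score

print(best_span)  # the extracted answer span under these toy weights
```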

Transforming Dependency Structures to Logical Forms for Semantic Parsing

1 code implementation • TACL 2016 • Siva Reddy, Oscar Täckström, Michael Collins, Tom Kwiatkowski, Dipanjan Das, Mark Steedman, Mirella Lapata

In contrast, partly due to the lack of a strong type system, dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages.

Question Answering • Semantic Parsing • +1
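
A hedged, worked illustration of the dependency-to-logical-form mapping discussed above; the parse triples, predicate names, and simplified neo-Davidsonian form below are invented for this sketch and do not follow the paper's exact formalism.

```python
# Illustrative input/output pair for dependency-to-logical-form conversion;
# the parse format and predicate names are made up for this example.
sentence = "Curie discovered polonium"

dependency_parse = [
    # (head, relation, dependent)
    ("discovered", "nsubj", "Curie"),
    ("discovered", "dobj", "polonium"),
]

# Target logical form: the verb becomes a predicate over an event variable,
# and its dependents fill argument slots (simplified neo-Davidsonian style).
logical_form = "exists e. discovered(e) & arg1(e, Curie) & arg2(e, polonium)"

print(dependency_parse)
print(logical_form)
```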
