Search Results for author: Ming-Wei Chang

Found 45 papers, 15 papers with code

CapWAP: Captioning with a Purpose

no code implementations EMNLP 2020 Adam Fisch, Kenton Lee, Ming-Wei Chang, Jonathan Clark, Regina Barzilay

In this task, we use question-answer (QA) pairs from users, a natural expression of information need, instead of reference captions, for both training and post-inference evaluation.

Image Captioning Question Answering +1

Retrieval Augmented Language Model Pre-Training

no code implementations ICML 2020 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang

Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.

Language Modelling Masked Language Modeling +1

ASQA: Factoid Questions Meet Long-Form Answers

no code implementations12 Apr 2022 Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, Ming-Wei Chang

In contrast to existing long-form QA tasks (such as ELI5), ASQA admits a clear notion of correctness: a user faced with a good summary should be able to answer different interpretations of the original ambiguous question.

Question Answering

FRUIT: Faithfully Reflecting Updated Information in Text

no code implementations16 Dec 2021 Robert L. Logan IV, Alexandre Passos, Sameer Singh, Ming-Wei Chang

Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent.

Large Dual Encoders Are Generalizable Retrievers

no code implementations15 Dec 2021 Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang

With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.

Domain Generalization
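
As background for the entry above: a dual encoder embeds questions and passages independently and scores them with a dot product, so passage vectors can be precomputed and indexed offline. The sketch below is a minimal illustration of that scoring scheme; the `embed` function is a hypothetical stand-in, not the paper's actual T5-based encoder.

```python
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    """Stand-in encoder: pseudo-embedding seeded by a hash of the text.
    A real dual encoder would run a neural network here; both the
    question tower and the passage tower share this same function."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)  # unit-normalize so dot product = cosine

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by dot-product similarity with the question."""
    q = embed(question)
    ranked = sorted(passages, key=lambda p: float(q @ embed(p)), reverse=True)
    return ranked[:k]

passages = ["Paris is the capital of France.",
            "The Nile is a river in Africa.",
            "Mount Everest is the tallest mountain."]
print(retrieve("What is the capital of France?", passages))
```

Because the two towers never interact until the final dot product, the passage index can be built ahead of time; the paper's finding is that scaling these towers up improves out-of-domain retrieval.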

Revisiting the Primacy of English in Zero-shot Cross-lingual Transfer

no code implementations30 Jun 2021 Iulia Turc, Kenton Lee, Jacob Eisenstein, Ming-Wei Chang, Kristina Toutanova

Zero-shot cross-lingual transfer is emerging as a practical solution: pre-trained models, fine-tuned on a single transfer language, exhibit surprisingly strong performance when tested on many target languages.

Question Answering Zero-Shot Cross-Lingual Transfer

Joint Passage Ranking for Diverse Multi-Answer Retrieval

no code implementations EMNLP 2021 Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, Hannaneh Hajishirzi

We study multi-answer retrieval, an under-explored problem that requires retrieving passages to cover multiple distinct answers for a given question.

Answer Generation Passage Ranking +2

Unlocking Compositional Generalization in Pre-trained Models Using Intermediate Representations

1 code implementation15 Apr 2021 Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, Yuan Zhang

Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization.

Semantic Parsing Text-To-Sql

CapWAP: Captioning with a Purpose

1 code implementation9 Nov 2020 Adam Fisch, Kenton Lee, Ming-Wei Chang, Jonathan H. Clark, Regina Barzilay

In this task, we use question-answer (QA) pairs from users, a natural expression of information need, instead of reference captions, for both training and post-inference evaluation.

Image Captioning Question Answering +1

Compositional Generalization and Natural Language Variation: Can a Semantic Parsing Approach Handle Both?

1 code implementation ACL 2021 Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova

This has motivated new specialized architectures with stronger compositional biases, but most of these approaches have only been evaluated on synthetically-generated datasets, which are not representative of natural language variation.

Semantic Parsing

Open Question Answering over Tables and Text

1 code implementation ICLR 2021 Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, William W. Cohen

In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.

Question Answering

Exploring Unexplored Generalization Challenges for Cross-Database Semantic Parsing

no code implementations ACL 2020 Alane Suhr, Ming-Wei Chang, Peter Shaw, Kenton Lee

We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen during training.

Semantic Parsing
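
To make the XSP setting concrete, here is a hypothetical example of the kind of input and output involved; the schema and query below are invented for illustration and are not drawn from the paper's evaluation databases.

```python
# Hypothetical cross-database semantic parsing example: at test time the
# system is given an unseen schema plus an utterance and must produce an
# executable SQL query against that database.
unseen_schema = {
    "concert": ["concert_id", "venue", "year"],
    "singer": ["singer_id", "name", "concert_id"],
}
utterance = "How many singers performed in concerts held in 2014?"
predicted_sql = (
    "SELECT COUNT(DISTINCT s.singer_id) "
    "FROM singer AS s JOIN concert AS c ON s.concert_id = c.concert_id "
    "WHERE c.year = 2014"
)
print(predicted_sql)
```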

Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering

1 code implementation ACL 2020 Hao Cheng, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.

Question Answering

REALM: Retrieval-Augmented Language Model Pre-Training

5 code implementations10 Feb 2020 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang

Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.

Language Modelling Masked Language Modeling +1
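
For intuition about the method in the entry above: REALM trains the retriever end-to-end by marginalizing the masked-token prediction over retrieved documents, so the language-modeling loss provides gradient signal to retrieval. A toy numeric sketch of that objective, with made-up scores standing in for the learned retriever and reader:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Toy retrieval scores f(x, z) for three candidate documents z, and the
# reader's probability p(y | x, z) of the masked token y given each
# document. Values are invented for illustration.
retrieval_scores = [2.0, 0.5, -1.0]
reader_probs = [0.9, 0.3, 0.1]

# REALM models p(y | x) = sum_z p(z | x) * p(y | x, z), so improving
# retrieval of helpful documents directly lowers the training loss.
p_retrieve = softmax(retrieval_scores)
p_y_given_x = sum(pz * py for pz, py in zip(p_retrieve, reader_probs))
loss = -math.log(p_y_given_x)
print(f"marginal p(y|x) = {p_y_given_x:.3f}, loss = {loss:.3f}")
```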

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

41 code implementations ICLR 2020 Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.

Knowledge Distillation Language Modelling +2
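
The recipe studied here, pre-training a compact student and then distilling from a larger teacher, relies on the standard soft-label distillation loss for the distillation step. A minimal numpy sketch with made-up logits in place of real models:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

# Invented teacher/student logits over a 4-way label space.
teacher_logits = np.array([4.0, 1.0, 0.5, -2.0])
student_logits = np.array([2.5, 1.5, 0.0, -1.0])
T = 2.0  # temperature softens the teacher distribution, exposing its
         # relative preferences among the non-argmax labels

# Distillation term: cross-entropy between teacher and student soft labels.
p_teacher = softmax(teacher_logits, T)
log_p_student = np.log(softmax(student_logits, T))
distill_loss = -(p_teacher * log_p_student).sum()
print(f"distillation loss: {distill_loss:.3f}")
```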

Zero-Shot Entity Linking by Reading Entity Descriptions

3 code implementations ACL 2019 Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee

First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.

Entity Linking Reading Comprehension

Handling Divergent Reference Texts when Evaluating Table-to-Text Generation

1 code implementation ACL 2019 Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, William W. Cohen

Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data.

Table-to-Text Generation

Latent Retrieval for Weakly Supervised Open Domain Question Answering

2 code implementations ACL 2019 Kenton Lee, Ming-Wei Chang, Kristina Toutanova

We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs, without any IR system.

Information Retrieval Open-Domain Question Answering
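
The latent-retrieval idea above can be sketched by treating the retrieved passage as a latent variable and marginalizing it out, so that only (question, answer string) pairs are needed as supervision. The sketch below collapses the reader to a simple string-match check for brevity; the scores and passages are invented:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Toy retriever scores for candidate passages, and the answer string.
passages = ["Paris is the capital of France.",
            "France borders Spain.",
            "The capital of France is Paris."]
scores = [1.2, -0.3, 2.0]
answer = "Paris"

# Marginalize over passages as a latent variable: here a passage can only
# "explain" the answer if the answer string actually occurs in it (a real
# reader would instead score candidate spans).
p_retrieve = softmax(scores)
p_answer = sum(pz for pz, passage in zip(p_retrieve, passages)
               if answer in passage)
loss = -math.log(p_answer)
print(f"p(answer) = {p_answer:.3f}, loss = {loss:.3f}")
```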

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

1 code implementation NAACL 2019 Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova

In this paper we study yes/no questions that are naturally occurring, meaning that they are generated in unprompted and unconstrained settings.

Reading Comprehension Transfer Learning

Language Model Pre-training for Hierarchical Document Representations

no code implementations ICLR 2019 Ming-Wei Chang, Kristina Toutanova, Kenton Lee, Jacob Devlin

Hierarchical neural architectures are often used to capture long-distance dependencies and have been applied to many document-level tasks such as summarization, document segmentation, and sentiment analysis.

Document Summarization Extractive Document Summarization +4

Improving Span-based Question Answering Systems with Coarsely Labeled Data

no code implementations5 Nov 2018 Hao Cheng, Ming-Wei Chang, Kenton Lee, Ankur Parikh, Michael Collins, Kristina Toutanova

We study approaches to improve fine-grained short answer Question Answering models by integrating coarse-grained data annotated for paragraph-level relevance and show that coarsely annotated data can bring significant performance gains.

Multi-Task Learning Question Answering

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations

no code implementations EMNLP 2018 Dipendra Misra, Ming-Wei Chang, Xiaodong He, Wen-tau Yih

Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm.

Question Answering Semantic Parsing
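
Challenge (1) above can be made concrete with a toy domain: enumerate candidate programs and keep those whose execution matches the denotation, noting that spurious parses can match for the wrong reason. The domain and programs below are invented for illustration:

```python
# Toy search for candidate parses from denotations: several programs may
# execute to the correct answer, and the learner must decide which of
# these genuine or spurious parses to reward during training.
question = "What is three plus four?"
denotation = 7

candidate_parses = [
    ("(+ 3 4)", lambda: 3 + 4),
    ("(- 10 3)", lambda: 10 - 3),  # spurious: right answer, wrong reason
    ("(* 3 4)", lambda: 3 * 4),
]

consistent = [src for src, prog in candidate_parses if prog() == denotation]
print("parses matching the denotation:", consistent)
```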

Maximum Margin Reward Networks for Learning from Explicit and Implicit Supervision

no code implementations EMNLP 2017 Haoruo Peng, Ming-Wei Chang, Wen-tau Yih

Neural networks have achieved state-of-the-art performance on several structured-output prediction tasks, trained in a fully supervised fashion.

Dependency Parsing Named Entity Recognition +2

Search-based Neural Structured Learning for Sequential Question Answering

no code implementations ACL 2017 Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang

Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans.

Question Answering Semantic Parsing

A Knowledge-Grounded Neural Conversation Model

2 code implementations7 Feb 2017 Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, Michel Galley

We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting.

Slot Filling

From Entity Linking to Question Answering -- Recent Progress on Semantic Grounding Tasks

no code implementations WS 2016 Ming-Wei Chang

Entity linking and semantic parsing have been shown to be crucial to important applications such as question answering and document understanding.

Entity Linking Knowledge Base Question Answering +1

Link Prediction using Embedded Knowledge Graphs

no code implementations14 Nov 2016 Yelong Shen, Po-Sen Huang, Ming-Wei Chang, Jianfeng Gao

Since large knowledge bases are typically incomplete, missing facts need to be inferred from observed facts in a task called knowledge base completion.

Knowledge Base Completion Knowledge Graphs +1
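
As a concrete picture of embedding-based link prediction (using a TransE-style score as a common baseline, not this paper's implicit-reasoning model), a candidate fact (head, relation, tail) is ranked by how well head + relation approximates tail:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy entity and relation embeddings; real systems learn these from
# the observed facts in the knowledge base.
entities = {name: rng.standard_normal(dim)
            for name in ["paris", "france", "berlin", "germany"]}
relations = {"capital_of": rng.standard_normal(dim)}

def score(head: str, relation: str, tail: str) -> float:
    """TransE-style plausibility: higher when head + relation is near tail."""
    h, r, t = entities[head], relations[relation], entities[tail]
    return -float(np.linalg.norm(h + r - t))

# Rank candidate tails for the incomplete fact (paris, capital_of, ?).
candidates = ["france", "germany"]
ranked = sorted(candidates, key=lambda t: score("paris", "capital_of", t),
                reverse=True)
print("best completion:", ranked[0])
```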

Answering Complicated Question Intents Expressed in Decomposed Question Sequences

no code implementations4 Nov 2016 Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang

Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans.

Question Answering Semantic Parsing

S-MART: Novel Tree-based Structured Learning Algorithms Applied to Tweet Entity Linking

no code implementations IJCNLP 2015 Yi Yang, Ming-Wei Chang

Non-linear models have recently received a lot of attention as people are starting to discover the power of statistical and embedding features.

Entity Linking

Annotating Derivations: A New Evaluation Strategy and Dataset for Algebra Word Problems

no code implementations EACL 2017 Shyam Upadhyay, Ming-Wei Chang

We propose a new evaluation for automatic solvers for algebra word problems, which can identify mistakes that existing evaluations overlook.

Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods

no code implementations HLT 2015 Arvind Neelakantan, Ming-Wei Chang

In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB completion that has nevertheless received little attention.

Knowledge Base Completion Relation Extraction

Dual Coordinate Descent Algorithms for Efficient Large Margin Structured Prediction

no code implementations TACL 2013 Ming-Wei Chang, Wen-tau Yih

Due to the nature of complex NLP problems, structured prediction algorithms have been important modeling tools for a wide range of tasks.

Dependency Parsing Document Summarization +6
