Search Results for author: Ming-Wei Chang

Found 54 papers, 21 papers with code

CapWAP: Image Captioning with a Purpose

no code implementations EMNLP 2020 Adam Fisch, Kenton Lee, Ming-Wei Chang, Jonathan Clark, Regina Barzilay

In this task, we use question-answer (QA) pairs from users, a natural expression of information need, instead of reference captions, for both training and post-inference evaluation.

Image Captioning Question Answering +1
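
A hedged sketch of the idea behind CapWAP's QA-driven evaluation: rather than comparing a generated caption to reference text, check whether the caption alone supports answering users' questions. The real setup runs a trained extractive QA reader over the caption; the string-containment check below is only a crude, self-contained proxy.

```python
def caption_answers_question(caption: str, answer: str) -> bool:
    # Crude proxy: the paper's setup runs a trained extractive QA model
    # over the caption; here we only check for the gold answer string.
    return answer.strip().lower() in caption.lower()

def qa_coverage(caption: str, qa_pairs: list[tuple[str, str]]) -> float:
    """Fraction of user (question, answer) pairs supported by the caption."""
    hits = sum(caption_answers_question(caption, a) for _, a in qa_pairs)
    return hits / len(qa_pairs)

print(qa_coverage("A man in a red shirt rides a bicycle.",
                  [("What color is his shirt?", "red"),
                   ("What is he riding?", "bicycle")]))  # -> 1.0
```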

Retrieval Augmented Language Model Pre-Training

no code implementations ICML 2020 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang

Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.

Language Modelling Masked Language Modeling +2

QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations

1 code implementation 19 May 2023 Chaitanya Malaviya, Peter Shaw, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

To study the ability of retrieval systems to meet such information needs, we construct QUEST, a dataset of 3357 natural language queries with implicit set operations that map to a set of entities corresponding to Wikipedia documents.

Natural Language Queries Retrieval
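
A minimal sketch of what "implicit set operations" means here, using invented book titles: each atomic constraint retrieves an entity set, and the query's answer is their set-algebraic combination. This is illustrative only, not QUEST's official evaluation.

```python
# Toy per-constraint retrieval results (hypothetical entities).
retrieved = {
    "novels set in alaska": {"Book A", "Book B", "Book C"},
    "written by jack london": {"Book A", "Book C"},
}

# "novels set in alaska but not written by jack london" -> set difference.
predicted = retrieved["novels set in alaska"] - retrieved["written by jack london"]
gold = {"Book B"}

precision = len(predicted & gold) / len(predicted)
recall = len(predicted & gold) / len(gold)
print(predicted, precision, recall)  # {'Book B'} 1.0 1.0
```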

Rethinking the Role of Token Retrieval in Multi-Vector Retrieval

no code implementations 4 Apr 2023 Jinhyuk Lee, Zhuyun Dai, Sai Meher Karthik Duddu, Tao Lei, Iftekhar Naim, Ming-Wei Chang, Vincent Y. Zhao

Multi-vector retrieval models such as ColBERT [Khattab and Zaharia, 2020] allow token-level interactions between queries and documents, and hence achieve state-of-the-art results on many information retrieval benchmarks.

Information Retrieval Retrieval
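
A minimal sketch of the token-level "late interaction" this paper revisits, in the style of ColBERT's MaxSim operator: every query token is matched against every document token, and the document score sums each query token's best match. The embeddings here are random stand-ins for a trained encoder's output.

```python
import numpy as np

def late_interaction_score(q_vecs: np.ndarray, d_vecs: np.ndarray) -> float:
    """q_vecs: [num_query_tokens, dim]; d_vecs: [num_doc_tokens, dim]."""
    sims = q_vecs @ d_vecs.T              # all query-token x doc-token scores
    return float(sims.max(axis=1).sum())  # MaxSim per query token, then sum

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 128))         # toy query token embeddings
d = rng.standard_normal((50, 128))        # toy document token embeddings
print(late_interaction_score(q, d))
```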

Subject-driven Text-to-Image Generation via Apprenticeship Learning

no code implementations 1 Apr 2023 Wenhu Chen, Hexiang Hu, Yandong Li, Nataniel Ruiz, Xuhui Jia, Ming-Wei Chang, William W. Cohen

We adopt these clusters to train a massive number of expert models, each specializing in a different subject.

Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions?

2 code implementations 23 Feb 2023 Yang Chen, Hexiang Hu, Yi Luan, Haitian Sun, Soravit Changpinyo, Alan Ritter, Ming-Wei Chang

Our analysis shows that it is challenging for the state-of-the-art multi-modal pre-trained models to answer visual information-seeking questions, but this capability is improved through fine-tuning on the automated InfoSeek dataset.

Open-Domain Question Answering Visual Question Answering

Meta-Learning Fast Weight Language Models

no code implementations 5 Dec 2022 Kevin Clark, Kelvin Guu, Ming-Wei Chang, Panupong Pasupat, Geoffrey Hinton, Mohammad Norouzi

Dynamic evaluation of language models (LMs) adapts model parameters at test time using gradient information from previous tokens and substantially improves LM performance.

Language Modelling Meta-Learning
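
A minimal sketch of plain dynamic evaluation, the baseline this paper improves on: at test time the LM takes a gradient step on each observed token before predicting the next one. The SGD update below is an assumption for illustration; the paper replaces it with a meta-learned fast-weight update.

```python
import torch

def dynamic_eval_loss(lm: torch.nn.Module, tokens: torch.Tensor,
                      lr: float = 1e-2) -> float:
    """Average next-token loss while adapting `lm` on each observed token."""
    opt = torch.optim.SGD(lm.parameters(), lr=lr)
    total = 0.0
    for t in range(1, tokens.size(0)):
        logits = lm(tokens[:t].unsqueeze(0))[0, -1]   # predict token t
        loss = torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), tokens[t].unsqueeze(0))
        total += loss.item()
        opt.zero_grad()
        loss.backward()                               # adapt on the fly
        opt.step()
    return total / (tokens.size(0) - 1)

class ToyLM(torch.nn.Module):
    """Trivial causal LM stand-in: per-token logits from an embedding."""
    def __init__(self, vocab: int = 100, dim: int = 32):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.out = torch.nn.Linear(dim, vocab)
    def forward(self, x):                 # x: [batch, seq]
        return self.out(self.emb(x))      # logits: [batch, seq, vocab]

print(dynamic_eval_loss(ToyLM(), torch.randint(0, 100, (20,))))
```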

Promptagator: Few-shot Dense Retrieval From 8 Examples

no code implementations 23 Sep 2022 Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang

To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages large language models (LLMs) as a few-shot query generator and creates task-specific retrievers based on the generated data.

Information Retrieval Natural Questions +1
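
A hedged sketch of the few-shot prompted query generation at Promptagator's core: a handful of (document, query) examples are formatted into a prompt, and an LLM completes it to produce synthetic training queries for unlabeled documents. `llm_complete` and the prompt format are illustrative assumptions, not the paper's exact FLAN setup.

```python
# Hypothetical in-domain (document, query) demonstrations.
FEW_SHOT = [
    ("The Eiffel Tower was completed in 1889 ...",
     "when was the eiffel tower built"),
    ("Photosynthesis converts light energy into chemical energy ...",
     "how does photosynthesis work"),
]

def build_prompt(document: str) -> str:
    shots = "\n\n".join(f"Document: {d}\nQuery: {q}" for d, q in FEW_SHOT)
    return f"{shots}\n\nDocument: {document}\nQuery:"

def llm_complete(prompt: str) -> str:
    # Placeholder: swap in a real LLM call here.
    return "example generated query"

def generate_training_pair(document: str) -> tuple[str, str]:
    # The synthetic (query, document) pair then trains a task-specific retriever.
    return (llm_complete(build_prompt(document)), document)

print(generate_training_pair("Mount Everest is Earth's highest mountain ..."))
```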

ASQA: Factoid Questions Meet Long-Form Answers

no code implementations 12 Apr 2022 Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, Ming-Wei Chang

In contrast to existing long-form QA tasks (such as ELI5), ASQA admits a clear notion of correctness: a user faced with a good summary should be able to answer different interpretations of the original ambiguous question.

Question Answering

FRUIT: Faithfully Reflecting Updated Information in Text

no code implementations NAACL 2022 Robert L. Logan IV, Alexandre Passos, Sameer Singh, Ming-Wei Chang

Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent.

Large Dual Encoders Are Generalizable Retrievers

2 code implementations 15 Dec 2021 Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang

With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.

Domain Generalization Retrieval +1
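
For contrast with the token-level interaction sketched earlier, here is a minimal dual-encoder sketch: query and document are embedded independently into single vectors, so documents can be encoded once offline and scored with a dot product. The hash-based encoder is a toy stand-in for the large pre-trained encoders the paper scales up.

```python
import numpy as np

def encode(texts: list[str], dim: int = 8) -> np.ndarray:
    # Toy stand-in for a learned encoder: hashed bag-of-words, L2-normalized.
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            out[i, hash(tok) % dim] += 1.0
    return out / np.maximum(np.linalg.norm(out, axis=1, keepdims=True), 1e-9)

docs = ["large dual encoders generalize well", "token level retrieval models"]
doc_vecs = encode(docs)                            # pre-computed once, offline
query_vec = encode(["do dual encoders generalize"])[0]
print(docs[int(np.argmax(doc_vecs @ query_vec))])  # nearest document
```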

Revisiting the Primacy of English in Zero-shot Cross-lingual Transfer

no code implementations 30 Jun 2021 Iulia Turc, Kenton Lee, Jacob Eisenstein, Ming-Wei Chang, Kristina Toutanova

Zero-shot cross-lingual transfer is emerging as a practical solution: pre-trained models later fine-tuned on one transfer language exhibit surprising performance when tested on many target languages.

Question Answering Zero-Shot Cross-Lingual Transfer

Joint Passage Ranking for Diverse Multi-Answer Retrieval

no code implementations EMNLP 2021 Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, Hannaneh Hajishirzi

We study multi-answer retrieval, an under-explored problem that requires retrieving passages to cover multiple distinct answers for a given question.

Answer Generation Passage Ranking +3

Unlocking Compositional Generalization in Pre-trained Models Using Intermediate Representations

2 code implementations 15 Apr 2021 Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, Yuan Zhang

Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization.

Semantic Parsing Text-To-SQL

CapWAP: Captioning with a Purpose

1 code implementation 9 Nov 2020 Adam Fisch, Kenton Lee, Ming-Wei Chang, Jonathan H. Clark, Regina Barzilay

In this task, we use question-answer (QA) pairs from users, a natural expression of information need, instead of reference captions, for both training and post-inference evaluation.

Image Captioning Question Answering +1

Compositional Generalization and Natural Language Variation: Can a Semantic Parsing Approach Handle Both?

1 code implementation ACL 2021 Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova

This has motivated new specialized architectures with stronger compositional biases, but most of these approaches have only been evaluated on synthetically-generated datasets, which are not representative of natural language variation.

Semantic Parsing

Open Question Answering over Tables and Text

1 code implementation ICLR 2021 Wenhu Chen, Ming-Wei Chang, Eva Schlinger, William Wang, William W. Cohen

In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.

Open-Ended Question Answering Retrieval

Exploring Unexplored Generalization Challenges for Cross-Database Semantic Parsing

no code implementations ACL 2020 Alane Suhr, Ming-Wei Chang, Peter Shaw, Kenton Lee

We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen during training.

Semantic Parsing

Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering

1 code implementation ACL 2020 Hao Cheng, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.

Extractive Question-Answering Question Answering +1
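
A minimal sketch of one common probabilistic assumption in this distantly supervised setting (the kind of assumption the paper analyzes): every span matching the answer string is a latent candidate, and training maximizes the marginal likelihood over all of them instead of committing to one. The logits are random stand-ins for a reader model's span scores.

```python
import torch

def marginal_nll(span_logits: torch.Tensor,
                 matching_spans: list[int]) -> torch.Tensor:
    """-log sum over matching spans of p(span), with p = softmax(logits)."""
    log_probs = torch.log_softmax(span_logits, dim=-1)
    return -torch.logsumexp(log_probs[matching_spans], dim=-1)

logits = torch.randn(100)              # scores for 100 candidate spans
loss = marginal_nll(logits, matching_spans=[3, 17, 42])
print(loss.item())
```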

REALM: Retrieval-Augmented Language Model Pre-Training

6 code implementations 10 Feb 2020 Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang

Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.

Language Modelling Masked Language Modeling +2
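
A minimal sketch of REALM's generative story: the masked-token prediction marginalizes over retrieved documents, p(y | x) = sum_z p(y | z, x) p(z | x), where the retriever is a softmax over inner products of query and document embeddings. The tensors below are random stand-ins for the real encoders.

```python
import torch

q = torch.randn(128)                   # embedding of the masked input x
doc_embs = torch.randn(5, 128)         # embeddings of top-k retrieved docs z
retriever_probs = torch.softmax(doc_embs @ q, dim=0)          # p(z | x)
reader_probs = torch.softmax(torch.randn(5, 30000), dim=-1)   # p(y | z, x)
p_y_given_x = retriever_probs @ reader_probs                  # marginalize z
print(p_y_given_x.shape)               # distribution over a 30k vocabulary
```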

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

40 code implementations ICLR 2020 Iulia Turc, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training.

Knowledge Distillation Language Modelling +2
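
A hedged sketch of the distillation step in this recipe, using the standard temperature-scaled soft-label loss; the paper's full method also pre-trains the compact student on unlabeled text before distilling from the teacher.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student distributions."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

print(distillation_loss(torch.randn(8, 3), torch.randn(8, 3)).item())
```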

Zero-Shot Entity Linking by Reading Entity Descriptions

3 code implementations ACL 2019 Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee

First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.

Entity Linking Reading Comprehension
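
A minimal sketch of the zero-shot linking interface: each candidate entity's description is paired with the mention in context and scored jointly, so unseen entities need only a description. The token-overlap scorer is a deliberately crude stand-in for the strong reading comprehension model the paper uses.

```python
def score_pair(mention_context: str, entity_description: str) -> float:
    # Placeholder scorer: Jaccard token overlap instead of a trained reader.
    m = set(mention_context.lower().split())
    e = set(entity_description.lower().split())
    return len(m & e) / max(len(m | e), 1)

def link(mention_context: str, candidates: dict[str, str]) -> str:
    return max(candidates,
               key=lambda name: score_pair(mention_context, candidates[name]))

print(link("the guitarist founded Queen",
           {"Queen (band)": "british rock band founded by a guitarist",
            "Queen (chess)": "most powerful piece in the game of chess"}))
```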

Handling Divergent Reference Texts when Evaluating Table-to-Text Generation

1 code implementation ACL 2019 Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, William W. Cohen

Automatically constructed datasets for generating text from semi-structured data (tables), such as WikiBio, often contain reference texts that diverge from the information in the corresponding semi-structured data.

Table-to-Text Generation

Latent Retrieval for Weakly Supervised Open Domain Question Answering

3 code implementations ACL 2019 Kenton Lee, Ming-Wei Chang, Kristina Toutanova

We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system.

Information Retrieval Open-Domain Question Answering +1

BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

1 code implementation NAACL 2019 Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova

In this paper we study yes/no questions that are naturally occurring, meaning that they are generated in unprompted and unconstrained settings.

Reading Comprehension Transfer Learning

Language Model Pre-training for Hierarchical Document Representations

no code implementations ICLR 2019 Ming-Wei Chang, Kristina Toutanova, Kenton Lee, Jacob Devlin

Hierarchical neural architectures are often used to capture long-distance dependencies and have been applied to many document-level tasks such as summarization, document segmentation, and sentiment analysis.

Document Summarization Extractive Document Summarization +4

Improving Span-based Question Answering Systems with Coarsely Labeled Data

no code implementations 5 Nov 2018 Hao Cheng, Ming-Wei Chang, Kenton Lee, Ankur Parikh, Michael Collins, Kristina Toutanova

We study approaches to improve fine-grained short-answer question answering models by integrating coarse-grained data annotated for paragraph-level relevance, and show that coarsely annotated data can bring significant performance gains.

Multi-Task Learning Question Answering

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations

no code implementations EMNLP 2018 Dipendra Misra, Ming-Wei Chang, Xiaodong He, Wen-tau Yih

Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm.

Question Answering Semantic Parsing
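
A toy sketch of challenge (1) as stated above: with only denotations for supervision, the trainer searches a space of candidate logical forms and keeps those whose execution matches the observed answer. Every name below is invented for illustration; the paper's contribution lies in how the search is shaped and how updates are chosen, not this brute-force loop.

```python
from itertools import product

TABLE = {"paris": "france", "tokyo": "japan"}   # toy knowledge source

def execute(program: tuple) -> str | None:
    op, arg = program
    return TABLE.get(arg) if op == "country_of" else None

def consistent_parses(denotation: str) -> list:
    space = product(["country_of", "capital_of"], ["paris", "tokyo"])
    return [p for p in space if execute(p) == denotation]

print(consistent_parses("japan"))   # [('country_of', 'tokyo')]
```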

Maximum Margin Reward Networks for Learning from Explicit and Implicit Supervision

no code implementations EMNLP 2017 Haoruo Peng, Ming-Wei Chang, Wen-tau Yih

Neural networks have achieved state-of-the-art performance on several structured-output prediction tasks, trained in a fully supervised fashion.

Dependency Parsing named-entity-recognition +4

Search-based Neural Structured Learning for Sequential Question Answering

no code implementations ACL 2017 Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang

Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans.

Question Answering Semantic Parsing

A Knowledge-Grounded Neural Conversation Model

2 code implementations 7 Feb 2017 Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, Michel Galley

We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting.

Slot Filling
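
A hedged sketch of one way to condition a Seq2Seq responder on both conversation history and external facts: serialize the facts alongside the dialogue turns in the encoder input. The paper instead injects facts through a memory component, so treat this only as an illustration of the interface, with invented special tokens.

```python
def build_encoder_input(history: list[str], facts: list[str]) -> str:
    fact_str = " ".join(f"<fact> {f}" for f in facts)
    turn_str = " ".join(f"<turn> {t}" for t in history)
    return f"{fact_str} {turn_str} <respond>"

print(build_encoder_input(
    ["Any good ramen nearby?"],
    ["Kin Ramen is open until 10pm", "Kin Ramen is cash only"]))
```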

From Entity Linking to Question Answering -- Recent Progress on Semantic Grounding Tasks

no code implementations WS 2016 Ming-Wei Chang

Entity linking and semantic parsing have been shown to be crucial to important applications such as question answering and document understanding.

document understanding Entity Linking +2

Link Prediction using Embedded Knowledge Graphs

no code implementations 14 Nov 2016 Yelong Shen, Po-Sen Huang, Ming-Wei Chang, Jianfeng Gao

Since large knowledge bases are typically incomplete, missing facts need to be inferred from observed facts in a task called knowledge base completion.

Knowledge Base Completion Knowledge Graphs +1
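
For background on embedding-based knowledge base completion, a minimal DistMult-style scorer (a generic baseline, not this paper's multi-step shared-memory inference): a candidate triple is rated by a trilinear product of entity and relation embeddings, so missing tails can be ranked.

```python
import numpy as np

rng = np.random.default_rng(0)          # untrained toy embeddings
entity = {e: rng.standard_normal(16) for e in ["obama", "usa", "hawaii"]}
relation = {r: rng.standard_normal(16) for r in ["born_in", "president_of"]}

def score(head: str, rel: str, tail: str) -> float:
    return float(np.sum(entity[head] * relation[rel] * entity[tail]))

# Rank candidate tails for the incomplete fact (obama, born_in, ?).
print(sorted(entity, key=lambda t: -score("obama", "born_in", t)))
```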

Answering Complicated Question Intents Expressed in Decomposed Question Sequences

no code implementations 4 Nov 2016 Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang

Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans.

Question Answering Semantic Parsing

S-MART: Novel Tree-based Structured Learning Algorithms Applied to Tweet Entity Linking

no code implementations IJCNLP 2015 Yi Yang, Ming-Wei Chang

Non-linear models have recently received a lot of attention as people are starting to discover the power of statistical and embedding features.

Entity Linking

Annotating Derivations: A New Evaluation Strategy and Dataset for Algebra Word Problems

1 code implementation EACL 2017 Shyam Upadhyay, Ming-Wei Chang

We propose a new evaluation for automatic solvers for algebra word problems, which can identify mistakes that existing evaluations overlook.

Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods

no code implementations HLT 2015 Arvind Neelakantan, Ming-Wei Chang

In this work, we focus on the task of inferring missing entity type instances in a KB, a fundamental task for KB completion that has nevertheless received little attention.

Knowledge Base Completion Relation Extraction

Dual Coordinate Descent Algorithms for Efficient Large Margin Structured Prediction

no code implementations TACL 2013 Ming-Wei Chang, Wen-tau Yih

Due to the nature of complex NLP problems, structured prediction algorithms have been important modeling tools for a wide range of tasks.

Dependency Parsing Document Summarization +7
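
For reference, the standard L1-slack structured SVM that dual coordinate descent methods of this kind optimize; this is background formulation, not the paper's specific update equations:

```latex
\min_{w,\;\xi \ge 0} \quad \frac{1}{2}\lVert w \rVert^2 + C \sum_i \xi_i
\quad \text{s.t.} \quad
w^\top\bigl(\phi(x_i, y_i) - \phi(x_i, y)\bigr) \;\ge\; \Delta(y_i, y) - \xi_i
\qquad \forall i,\ \forall y \neq y_i
```

The dual has one variable per (example, candidate structure) pair, and coordinate descent updates a single dual variable in closed form per step, which is what makes large-margin training tractable for structured outputs.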
