Search Results for author: Michael Zeng

Found 41 papers, 13 papers with code

Modeling Entity Knowledge for Fact Verification

no code implementations EMNLP (FEVER) 2021 Yang Liu, Chenguang Zhu, Michael Zeng

Fact verification is a challenging task of identifying the truthfulness of given claims based on the retrieval of relevant evidence texts.

Fact Verification

CLIP-Event: Connecting Text and Images with Event Structures

no code implementations13 Jan 2022 Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang

Vision-language (V+L) pretraining models have achieved great success in supporting multimedia applications by understanding the alignments between images and text.

Contrastive Learning Event Extraction +1

MLP Architectures for Vision-and-Language Modeling: An Empirical Study

1 code implementation8 Dec 2021 Yixin Nie, Linjie Li, Zhe Gan, Shuohang Wang, Chenguang Zhu, Michael Zeng, Zicheng Liu, Mohit Bansal, Lijuan Wang

Based on this, we ask an even bolder question: can we have an all-MLP architecture for VL modeling, where both VL fusion and the vision encoder are replaced with MLPs?
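As a rough illustration of what such an architecture could look like (an MLP-Mixer-style block over the concatenated vision-language token sequence is assumed here; dimensions and layer layout are placeholders, not the paper's configuration):

```python
import torch.nn as nn

class MLPFusionBlock(nn.Module):
    """Sketch of an all-MLP fusion block: a token-mixing MLP followed by a
    channel-mixing MLP, applied to the joint vision + language token sequence.
    Illustrative assumption only, not the published architecture."""
    def __init__(self, n_tokens, d_model, hidden=512):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, hidden), nn.GELU(), nn.Linear(hidden, n_tokens))
        self.norm2 = nn.LayerNorm(d_model)
        self.channel_mlp = nn.Sequential(
            nn.Linear(d_model, hidden), nn.GELU(), nn.Linear(hidden, d_model))

    def forward(self, x):                          # x: (batch, n_tokens, d_model)
        y = self.norm1(x).transpose(1, 2)          # mix information across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))    # mix information across channels
        return x
```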

Language Modelling Visual Question Answering

Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

no code implementations6 Dec 2021 Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang

In particular, we focus on the task of Commonsense Reasoning, demonstrating that the proposed external attention mechanism can augment existing transformer models and significantly improve the model's reasoning capabilities.
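As a hedged sketch of the general idea (not the paper's exact architecture): external attention can be realized by letting the input's hidden states attend over encoded external knowledge alongside themselves. All names and dimensions below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """Sketch of augmenting self-attention with external attention: the input's
    transformer states attend over a memory built from both the input and
    encoded external knowledge text. Simplified assumption, not the published
    mechanism."""
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, hidden, knowledge):
        # hidden:    (batch, seq_len, d_model) states for the input sequence
        # knowledge: (batch, k_len, d_model)   states for retrieved knowledge
        memory = torch.cat([hidden, knowledge], dim=1)
        out, _ = self.attn(query=hidden, key=memory, value=memory)
        return hidden + out  # residual connection
```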

Florence: A New Foundation Model for Computer Vision

no code implementations22 Nov 2021 Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, JianFeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang

Computer vision foundation models, which are trained on diverse, large-scale datasets and can be adapted to a wide range of downstream tasks, are critical for this mission to solve real-world computer vision applications.

Action Classification Action Recognition +9

Leveraging Knowledge in Multilingual Commonsense Reasoning

no code implementations16 Oct 2021 Yuwei Fang, Shuohang Wang, Yichong Xu, Ruochen Xu, Siqi Sun, Chenguang Zhu, Michael Zeng

We then utilize a diverse set of four English knowledge sources to provide more comprehensive coverage of knowledge in different formats.

Language Modelling Translation

End-to-End Segmentation-based News Summarization

no code implementations15 Oct 2021 Yang Liu, Chenguang Zhu, Michael Zeng

In this paper, we bring a new way of digesting news content by introducing the task of segmenting a news article into multiple sections and generating the corresponding summary for each section.

Text Generation

Dict-BERT: Enhancing Language Model Pre-training with Dictionary

no code implementations13 Oct 2021 Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, Meng Jiang

In addition to training with the masked language modeling objective, we propose two novel self-supervised pre-training tasks on word- and sentence-level alignment between the input text sequence and rare word definitions, enhancing the language model's representations with dictionary knowledge.
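As an illustrative sketch of what a word-level alignment objective could look like (an InfoNCE form with in-batch negatives is assumed here; the paper's exact loss may differ):

```python
import torch
import torch.nn.functional as F

def word_alignment_loss(word_vecs, def_vecs, temperature=0.07):
    """Sketch of word-level alignment: pull each rare word's contextual
    embedding toward the embedding of its dictionary definition, using
    other definitions in the batch as negatives. Hypothetical form."""
    word_vecs = F.normalize(word_vecs, dim=-1)   # (n, d) rare-word embeddings
    def_vecs = F.normalize(def_vecs, dim=-1)     # (n, d) definition embeddings
    logits = word_vecs @ def_vecs.t() / temperature
    targets = torch.arange(word_vecs.size(0), device=word_vecs.device)
    return F.cross_entropy(logits, targets)
```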

Language Modelling

KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering

no code implementations8 Oct 2021 Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, Michael Zeng

The recently proposed Fusion-in-Decoder (FiD), which is built on top of the pretrained generative model T5, achieves state-of-the-art performance in the reading module.
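For context, FiD's characteristic trick is to encode each retrieved passage independently (paired with the question) and let the decoder attend over the concatenated encoder states. A minimal sketch, with `encode` as a hypothetical stand-in for a T5 encoder call returning hidden states:

```python
import torch

def fid_encode(encode, question, passages):
    """Sketch of Fusion-in-Decoder encoding: each passage is encoded
    independently, then all encoder states are concatenated so the decoder
    can fuse evidence across passages. `encode` maps a string to a tensor
    of shape (1, seq_len, d_model); it is an assumed helper, not a real API."""
    states = [encode(f"question: {question} context: {p}") for p in passages]
    return torch.cat(states, dim=1)  # (1, total_len, d_model)
```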

Open-Domain Question Answering Passage Retrieval

DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization

1 code implementation6 Sep 2021 Ming Zhong, Yang Liu, Yichong Xu, Chenguang Zhu, Michael Zeng

For a dialogue, it corrupts a window of text with dialogue-inspired noise, and guides the model to reconstruct this window based on the content of the remaining conversation.
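A minimal sketch of this window-denoising setup, with full masking used as a stand-in for the paper's several dialogue-inspired noise types (e.g., turn masking or speaker shuffling):

```python
import random

def corrupt_window(turns, window=5, mask_token="<mask>"):
    """Sketch: pick a window of consecutive dialogue turns, corrupt it, and
    return (corrupted dialogue, original window) so a model can be trained
    to reconstruct the window from the remaining conversation. The window
    size and noise type here are illustrative assumptions."""
    start = random.randint(0, max(0, len(turns) - window))
    target = turns[start:start + window]
    corrupted = turns[:start] + [mask_token] * len(target) + turns[start + window:]
    return corrupted, target
```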

Denoising Dialogue Understanding +1

Does Knowledge Help General NLU? An Empirical Study

no code implementations1 Sep 2021 Ruochen Xu, Yuwei Fang, Chenguang Zhu, Michael Zeng

It is often observed in knowledge-centric tasks (e.g., commonsense question answering, relation classification) that integrating external knowledge, such as entity representations, into language models can provide useful information to boost performance.

Common Sense Reasoning Language Modelling +2

A Joint and Domain-Adaptive Approach to Spoken Language Understanding

no code implementations25 Jul 2021 Linhao Zhang, Yu Shi, Linjun Shou, Ming Gong, Houfeng Wang, Michael Zeng

In this paper, we attempt to bridge these two lines of research and propose a joint and domain-adaptive approach to SLU.

Domain Adaptation Intent Detection +2

Retrieval Enhanced Model for Commonsense Generation

1 code implementation Findings (ACL) 2021 Han Wang, Yang Liu, Chenguang Zhu, Linjun Shou, Ming Gong, Yichong Xu, Michael Zeng

Commonsense generation is a challenging task of generating a plausible sentence describing an everyday scenario using provided concepts.

Text Generation

MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization

1 code implementation NAACL 2021 Chenguang Zhu, Yang Liu, Jie Mei, Michael Zeng

MediaSum is a large-scale media interview dataset consisting of 463.6K transcripts with abstractive summaries.

Transfer Learning

Generating Human Readable Transcript for Automatic Speech Recognition with Pre-trained Language Model

no code implementations22 Feb 2021 Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Sefik Eskimez, Liyang Lu, Hong Qu, Michael Zeng

Many downstream tasks and human readers rely on the output of the ASR system; therefore, errors introduced by the speaker and ASR system alike will be propagated to the next task in the pipeline.

Data Augmentation Speech Recognition

Improving Zero-shot Neural Machine Translation on Language-specific Encoders-Decoders

no code implementations12 Feb 2021 Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

However, the performance of using multiple encoders and decoders on zero-shot translation still lags behind universal NMT.

Denoising Machine Translation +1

Speech-language Pre-training for End-to-end Spoken Language Understanding

no code implementations11 Feb 2021 Yao Qian, Ximo Bian, Yu Shi, Naoyuki Kanda, Leo Shen, Zhen Xiao, Michael Zeng

End-to-end (E2E) spoken language understanding (SLU) can infer semantics directly from the speech signal without cascading an automatic speech recognizer (ASR) with a natural language understanding (NLU) module.

Language Modelling Natural Language Understanding +1

UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data

2 code implementations19 Jan 2021 Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang

In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner.
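The multi-task combination amounts to mixing the two objectives; the sketch below assumes a simple interpolation weight, which is a placeholder rather than the published setting:

```python
def unispeech_loss(ctc_loss, contrastive_loss, alpha=0.5):
    """Sketch of the multi-task objective described in the abstract:
    supervised phonetic CTC learning combined with phonetically-aware
    contrastive self-supervised learning. alpha is an assumed mixing
    weight, not the value used in the paper."""
    return alpha * ctc_loss + (1.0 - alpha) * contrastive_loss
```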

Multi-Task Learning Representation Learning +2

Fusing Context Into Knowledge Graph for Commonsense Question Answering

1 code implementation Findings (ACL) 2021 Yichong Xu, Chenguang Zhu, Ruochen Xu, Yang Liu, Michael Zeng, Xuedong Huang

However, although a KG contains rich structural information, it lacks the context to provide a more precise understanding of the concepts.

Knowledge Graphs Language Modelling +2

LSTM-LM with Long-Term History for First-Pass Decoding in Conversational Speech Recognition

no code implementations21 Oct 2020 Xie Chen, Sarangarajan Parthasarathy, William Gale, Shuangyu Chang, Michael Zeng

The context information is captured by the hidden states of LSTM-LMs across utterances and can be used to guide the first-pass search effectively.

Speech Recognition

SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding

no code implementations NAACL 2021 Yu-An Chung, Chenguang Zhu, Michael Zeng

Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text.
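A minimal sketch of the alignment step on paired data, assuming pooled utterance-level representations and a simple L2 objective (the paper's exact alignment loss may differ):

```python
import torch.nn.functional as F

def splat_alignment_loss(speech_repr, text_repr):
    """Sketch: pull the pooled representation of an utterance's speech and
    its transcript toward each other in the shared latent space. Both inputs
    are (batch, d) tensors from the two modules; the L2 form is an assumption."""
    return F.mse_loss(speech_repr, text_repr)
```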

Language Modelling Spoken Language Understanding

Mind The Facts: Knowledge-Boosted Coherent Abstractive Text Summarization

no code implementations27 Jun 2020 Beliz Gunel, Chenguang Zhu, Michael Zeng, Xuedong Huang

In this work, we propose a novel architecture that extends Transformer encoder-decoder architecture in order to improve on these shortcomings.

Abstractive Text Summarization Language Modelling

Meta Dialogue Policy Learning

no code implementations3 Jun 2020 Yumo Xu, Chenguang Zhu, Baolin Peng, Michael Zeng

Dialog policy determines the next-step actions for agents and hence is central to a dialogue system.

Meta-Learning Transfer Learning

Improving Readability for Automatic Speech Recognition Transcription

no code implementations9 Apr 2020 Junwei Liao, Sefik Emre Eskimez, Liyang Lu, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

In this work, we propose a novel NLP task called ASR post-processing for readability (APR) that aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker.

Grammatical Error Correction Speech Recognition

Few-shot Natural Language Generation for Task-Oriented Dialog

2 code implementations Findings of the Association for Computational Linguistics 2020 Baolin Peng, Chenguang Zhu, Chunyuan Li, Xiujun Li, Jinchao Li, Michael Zeng, Jianfeng Gao

It is pre-trained on a large set of annotated NLG corpus to acquire the controllable generation ability, and fine-tuned with only a few domain-specific labels to adapt to new domains.

Data-to-Text Generation Few-Shot Learning

Leveraging Lead Bias for Zero-shot Abstractive News Summarization

no code implementations25 Dec 2019 Chenguang Zhu, Ziyi Yang, Robert Gmyr, Michael Zeng, Xuedong Huang

A typical journalistic convention in news articles is to deliver the most salient information in the beginning, also known as the lead bias.
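This convention yields training pairs for free: the lead sentences serve as a pseudo-summary and the remainder of the article as the source, with no human labels. A minimal sketch, where the number of lead sentences is a placeholder:

```python
def lead_bias_pair(article_sentences, lead_k=3):
    """Sketch of lead-bias pretraining data construction: the first k
    sentences become the target pseudo-summary and the rest become the
    source document. k is an assumed value, not the paper's setting."""
    target = " ".join(article_sentences[:lead_k])
    source = " ".join(article_sentences[lead_k:])
    return source, target
```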

Domain Adaptation

SIM: A Slot-Independent Neural Model for Dialogue State Tracking

no code implementations WS 2019 Chenguang Zhu, Michael Zeng, Xuedong Huang

In this paper, we put forward a slot-independent neural model (SIM) to track dialogue states while keeping the model complexity invariant to the number of dialogue slots.
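One way to keep model complexity invariant to the number of slots is to share a single scoring network across all (context, slot, value) triples; the sketch below is an illustrative assumption, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SlotIndependentScorer(nn.Module):
    """Sketch: one shared network scores any (dialogue context, slot name,
    candidate value) triple, so parameters do not grow with the slot count.
    Encoders producing the input vectors and the dimension d are assumed."""
    def __init__(self, d=256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, context_vec, slot_vec, value_vec):
        # each input: (batch, d); output: (batch, 1) relevance score
        return self.score(torch.cat([context_vec, slot_vec, value_vec], dim=-1))
```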

Dialogue State Tracking Task-Oriented Dialogue Systems

Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization

no code implementations25 Sep 2019 Chenguang Zhu, ZiYi Yang, Robert Gmyr, Michael Zeng, Xuedong Huang

For example, the pretrained model without finetuning outperforms the pointer-generator network on the CNN/DailyMail dataset.
