Search Results for author: Mingxuan Wang

Found 67 papers, 29 papers with code

DINOISER: Diffused Conditional Sequence Learning by Manipulating Noises

no code implementations 20 Feb 2023 Jiasheng Ye, Zaixiang Zheng, Yu Bao, Lihua Qian, Mingxuan Wang

In this paper, we introduce DINOISER to facilitate diffusion models for sequence generation by manipulating noises.

Diff-Glat: Diffusion Glancing Transformer for Parallel Sequence to Sequence Learning

no code implementations 20 Dec 2022 Lihua Qian, Mingxuan Wang, Yang Liu, Hao Zhou

Autoregressive models can achieve high generation quality, but the sequential decoding scheme causes slow decoding speed.

Knowledge Distillation

SEScore2: Retrieval Augmented Pretraining for Text Generation Evaluation

1 code implementation 19 Dec 2022 Wenda Xu, Xian Qian, Mingxuan Wang, Lei LI, William Yang Wang

Existing learned metrics diverge from human judgements, are model-dependent, or are limited to the domains or tasks where human ratings are available.

Dialogue Generation · Machine Translation +2

Controlling Styles in Neural Machine Translation with Activation Prompt

1 code implementation 17 Dec 2022 Yifan Wang, Zewei Sun, Shanbo Cheng, Weiguo Zheng, Mingxuan Wang

First, we revisit this task and propose a multiway stylized machine translation (MSMT) benchmark, which includes multiple categories of styles in four language directions to push the boundary of this task.

Machine Translation · NMT +1

Leveraging per Image-Token Consistency for Vision-Language Pre-training

no code implementations 20 Nov 2022 Yunhao Gou, Tom Ko, Hansi Yang, James Kwok, Yu Zhang, Mingxuan Wang

(2) Under-utilization of the unmasked tokens: CMLM primarily focuses on the masked tokens but cannot simultaneously leverage the other tokens to learn vision-language associations.

Language Modelling · Masked Language Modeling

The VolcTrans System for WMT22 Multilingual Machine Translation Task

no code implementations 20 Oct 2022 Xian Qian, Kai Hu, Jiaqiang Wang, Yifeng Liu, Xingyuan Pan, Jun Cao, Mingxuan Wang

This report describes our VolcTrans system for the WMT22 shared task on large-scale multilingual machine translation.

Machine Translation · Translation

PARAGEN: A Parallel Generation Toolkit

1 code implementation 7 Oct 2022 Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei LI, Hao Zhou

PARAGEN is a PyTorch-based NLP toolkit for further development on parallel generation.

Model Selection

Cross-modal Contrastive Learning for Speech Translation

1 code implementation NAACL 2022 Rong Ye, Mingxuan Wang, Lei LI

Learning similar representations for semantically similar speech and text is important for speech translation.

Contrastive Learning · Retrieval +3
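To make the idea above concrete, here is a minimal sketch of a cross-modal InfoNCE-style objective that pulls paired speech and text embeddings together, assuming (batch, dim) embedding tensors; the function name and shapes are illustrative, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(speech_emb, text_emb, temperature=0.07):
    """InfoNCE-style loss pulling paired speech/text embeddings together.

    speech_emb, text_emb: (batch, dim) tensors; row i of each is a pair.
    """
    speech_emb = F.normalize(speech_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise cosine similarities; matched pairs sit on the diagonal,
    # all other rows/columns act as in-batch negatives.
    logits = speech_emb @ text_emb.T / temperature
    targets = torch.arange(speech_emb.size(0), device=speech_emb.device)
    return F.cross_entropy(logits, targets)
```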

GigaST: A 10,000-hour Pseudo Speech Translation Corpus

no code implementations 8 Apr 2022 Rong Ye, Chengqi Zhao, Tom Ko, Chutong Meng, Tao Wang, Mingxuan Wang, Jun Cao

The training set is translated by a strong machine translation system, and the test set is translated by humans.

Machine Translation · Translation

Unified Multimodal Punctuation Restoration Framework for Mixed-Modality Corpus

1 code implementation 24 Jan 2022 Yaoming Zhu, Liwei Wu, Shanbo Cheng, Mingxuan Wang

The punctuation restoration task aims to correctly punctuate the output transcriptions of automatic speech recognition systems.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +2
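One common way to realize this task is per-token classification over a small punctuation label set. The sketch below shows only the label-reattachment step, under an assumed four-label inventory; it is not the paper's multimodal framework.

```python
# Punctuation restoration as per-token classification: for each word in the
# unpunctuated ASR output, a model predicts the mark (if any) that follows it.
# This label inventory is an illustrative assumption.
LABELS = ["", ",", ".", "?"]

def apply_punctuation(words, label_ids):
    """Re-attach predicted punctuation labels to a stream of words."""
    return " ".join(word + LABELS[i] for word, i in zip(words, label_ids))

# apply_punctuation(["hi", "there", "how", "are", "you"], [1, 2, 0, 0, 3])
# -> "hi, there. how are you?"
```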

LightSeq2: Accelerated Training for Transformer-based Models on GPUs

1 code implementation 12 Oct 2021 Xiaohui Wang, Yang Wei, Ying Xiong, Guyue Huang, Xian Qian, Yufei Ding, Mingxuan Wang, Lei LI

In this paper, we present LightSeq2, a system to accelerate training for a general family of Transformer models on GPUs.

Machine Translation · Speech Recognition +1

Learning When to Translate for Streaming Speech

1 code implementation ACL 2022 Qianqian Dong, Yaoming Zhu, Mingxuan Wang, Lei LI

Given a usually long speech sequence, we develop an efficient monotonic segmentation module inside an encoder-decoder model to accumulate acoustic information incrementally and detect proper speech unit boundaries for the input in the speech translation task.

Speech-to-Text Translation · Translation
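As a toy illustration of "accumulate acoustic information, then fire a boundary once enough has been gathered" (an integrate-and-fire style trigger), assuming per-frame information scores are already produced by the encoder; this does not mirror the paper's actual module.

```python
def segment_boundaries(frame_scores, threshold=1.0):
    """Toy monotonic segmentation over a stream of per-frame scores.

    Accumulates 'information' frame by frame and emits a boundary whenever
    the running total crosses the threshold.
    """
    boundaries, acc = [], 0.0
    for t, score in enumerate(frame_scores):
        acc += score
        if acc >= threshold:
            boundaries.append(t)   # fire: end the current speech unit here
            acc -= threshold       # carry the remainder into the next unit
    return boundaries
```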

Secoco: Self-Correcting Encoding for Neural Machine Translation

no code implementations Findings (EMNLP) 2021 Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei LI, Hang Li, Deyi Xiong

This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors.

Machine Translation · NMT +1

Pre-training Methods for Neural Machine Translation

no code implementations ACL 2021 Mingxuan Wang, Lei LI

This tutorial provides a comprehensive guide to make the most of pre-training for neural machine translation.

Machine Translation · NMT +1

Contrastive Learning for Many-to-many Multilingual Neural Machine Translation

3 code implementations ACL 2021 Xiao Pan, Mingxuan Wang, Liwei Wu, Lei LI

Existing multilingual machine translation approaches mainly focus on English-centric directions, while the non-English directions still lag behind.

Contrastive Learning · Data Augmentation +2
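Alongside the contrastive objective, this line of work pairs it with aligned augmentation: source words are swapped for dictionary translations so that code-switched variants map to similar representations. A hedged sketch, with the dictionary and replacement probability as illustrative assumptions rather than the paper's recipe:

```python
import random

def aligned_augment(tokens, bilingual_dict, replace_prob=0.3):
    """Swap some source tokens for dictionary translations in another
    language (a rough take on aligned code-switching augmentation).

    bilingual_dict maps a token to a list of candidate translations;
    tokens without an entry are kept unchanged.
    """
    out = []
    for tok in tokens:
        candidates = bilingual_dict.get(tok)
        if candidates and random.random() < replace_prob:
            out.append(random.choice(candidates))
        else:
            out.append(tok)
    return out

# e.g. aligned_augment(["i", "love", "music"], {"love": ["aime", "liebe"]})
```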

The Volctrans Neural Speech Translation System for IWSLT 2021

1 code implementation ACL (IWSLT) 2021 Chengqi Zhao, Zhicheng Liu, Jian Tong, Tao Wang, Mingxuan Wang, Rong Ye, Qianqian Dong, Jun Cao, Lei LI

For offline speech translation, our best end-to-end model achieves 8.1 BLEU improvements over the benchmark on the MuST-C test set and is even approaching the results of a strong cascade solution.

Translation

Learning Shared Semantic Space for Speech-to-Text Translation

2 code implementations Findings (ACL) 2021 Chi Han, Mingxuan Wang, Heng Ji, Lei LI

By projecting audio and text features to a common semantic representation, Chimera unifies MT and ST tasks and boosts the performance on ST benchmarks, MuST-C and Augmented Librispeech, to a new state-of-the-art.

Machine Translation · Speech-to-Text Translation +1

End-to-end Speech Translation via Cross-modal Progressive Training

1 code implementation 21 Apr 2021 Rong Ye, Mingxuan Wang, Lei LI

XSTNet takes both speech and text as input and outputs both transcription and translation text.

Machine Translation · Speech-to-Text Translation +1

Non-iterative Parallel Text Generation via Glancing Transformer

no code implementations 1 Jan 2021 Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, Lei LI

Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-ups, they still fall behind their autoregressive counterparts in prediction accuracy.

Language Modelling · Text Generation

Reciprocal Supervised Learning Improves Neural Machine Translation

1 code implementation 5 Dec 2020 Minkai Xu, Mingxuan Wang, Zhouhan Lin, Hao Zhou, Weinan Zhang, Lei LI

Despite the recent success on image classification, self-training has only achieved limited gains on structured prediction tasks such as neural machine translation (NMT).

Image Classification · Knowledge Distillation +4

Volctrans Parallel Corpus Filtering System for WMT 2020

no code implementations WMT (EMNLP) 2020 Runxin Xu, Zhuo Zhi, Jun Cao, Mingxuan Wang, Lei LI

In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions.

Word Alignment

Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information

1 code implementation EMNLP 2020 Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei LI

We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs?

Ranked #3 on Machine Translation on WMT2014 English-French (using extra training data)

Machine Translation · Translation

Consecutive Decoding for Speech-to-text Translation

1 code implementation 21 Sep 2020 Qianqian Dong, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, Lei LI

The key idea is to generate source transcript and target translation text with a single decoder.

Machine Translation · Speech Recognition +3
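The single-decoder trick can be pictured as training on a concatenated target: transcript first, then a separator, then the translation, so transcription conditions the translation. A minimal sketch; sep_id and eos_id are assumed special tokens, not the paper's actual interface:

```python
def build_consecutive_target(transcript_ids, translation_ids, sep_id, eos_id):
    """One decoder target: source transcript first, then the translation.

    The decoder learns to emit the transcript, a separator token, and
    finally the translation, so the translation is generated with the
    already-decoded transcript in its context.
    """
    return transcript_ids + [sep_id] + translation_ids + [eos_id]
```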

Xiaomingbot: A Multilingual Robot News Reporter

no code implementations ACL 2020 Runxin Xu, Jun Cao, Mingxuan Wang, Jiaze Chen, Hao Zhou, Ying Zeng, Yu-Ping Wang, Li Chen, Xiang Yin, Xijin Zhang, Songcheng Jiang, Yuxuan Wang, Lei LI

This paper proposes the building of Xiaomingbot, an intelligent, multilingual and multimodal software robot equipped with four integral capabilities: news generation, news translation, news reading and avatar animation.

News Generation · Translation +1

Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation

no code implementations 12 Jul 2020 Yuxuan Song, Ning Miao, Hao Zhou, Lantao Yu, Mingxuan Wang, Lei LI

Auto-regressive sequence generative models trained by Maximum Likelihood Estimation suffer the exposure bias problem in practical finite sample scenarios.

Density Ratio Estimation · Text Generation

Towards Making the Most of BERT in Neural Machine Translation

2 code implementations 15 Aug 2019 Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Wei-Nan Zhang, Lei LI

Our experiments in machine translation show CTNMT gains of up to 3 BLEU on the WMT14 English-German language pair, which even surpasses the previous state-of-the-art pre-training-aided NMT by 1.4 BLEU.

Machine Translation · NMT +1

Towards Linear Time Neural Machine Translation with Capsule Networks

no code implementations IJCNLP 2019 Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, Lei LI

In this study, we first investigate a novel capsule network with dynamic routing for linear time Neural Machine Translation (NMT), referred to as CapsNMT.

Machine Translation · NMT +1

Deep Semantic Role Labeling with Self-Attention

1 code implementation 5 Dec 2017 Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, Xiaodong Shi

Semantic Role Labeling (SRL) is believed to be a crucial step towards natural language understanding and has been widely studied.

Natural Language Understanding · Semantic Role Labeling

Incorporating Word Reordering Knowledge into Attention-based Neural Machine Translation

no code implementations ACL 2017 Jinchao Zhang, Mingxuan Wang, Qun Liu, Jie zhou

This paper proposes three distortion models to explicitly incorporate the word reordering knowledge into attention-based Neural Machine Translation (NMT) for further improving translation performance.

Machine Translation · NMT +2

Deep Neural Machine Translation with Linear Associative Unit

no code implementations ACL 2017 Mingxuan Wang, Zhengdong Lu, Jie zhou, Qun Liu

Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures.

Machine Translation · NMT +1

Memory-enhanced Decoder for Neural Machine Translation

no code implementations EMNLP 2016 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN.

Machine Translation · NMT +1

genCNN: A Convolutional Architecture for Word Sequence Prediction

no code implementations 17 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu

Different from previous work on neural network-based language modeling and generation (e.g., RNN or LSTM), we choose not to greedily summarize the history of words as a fixed-length vector.

Language Modelling · Machine Translation +3

Syntax-based Deep Matching of Short Texts

no code implementations 9 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts.

Machine Translation · Question Answering +1

Encoding Source Language with Convolutional Neural Network for Machine Translation

no code implementations IJCNLP 2015 Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, Qun Liu

The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT.

Language Modelling · Machine Translation +1
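For reference, the NNJM conditioning described above can be written as an n-gram target probability augmented with a source window around the aligned position; the notation below is a standard rendering, not copied from the paper.

```latex
% Target word e_i conditioned on the n-1 previous target words plus a
% source window of width 2w+1 centred on the aligned source position a_i.
P(e \mid s) \approx \prod_{i=1}^{|e|}
  P\bigl(e_i \,\big|\, e_{i-n+1}^{\,i-1},\; s_{a_i - w}^{\,a_i + w}\bigr)
```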
