Search Results for author: Mingxuan Wang

Found 78 papers, 36 papers with code

LightSeq2: Accelerated Training for Transformer-based Models on GPUs

1 code implementation 12 Oct 2021 Xiaohui Wang, Yang Wei, Ying Xiong, Guyue Huang, Xian Qian, Yufei Ding, Mingxuan Wang, Lei LI

In this paper, we present LightSeq2, a system to accelerate training for a general family of Transformer models on GPUs.

Machine Translation Speech Recognition +1

Deep Semantic Role Labeling with Self-Attention

1 code implementation 5 Dec 2017 Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, Xiaodong Shi

Semantic Role Labeling (SRL) is believed to be a crucial step towards natural language understanding and has been widely studied.

Natural Language Understanding Semantic Role Labeling

Towards Making the Most of BERT in Neural Machine Translation

2 code implementations 15 Aug 2019 Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Wei-Nan Zhang, Lei LI

Our experiments in machine translation show CTNMT gains of up to 3 BLEU score on the WMT14 English-German language pair which even surpasses the previous state-of-the-art pre-training aided NMT by 1.4 BLEU score.

Machine Translation NMT +2

The Volctrans Neural Speech Translation System for IWSLT 2021

1 code implementation ACL (IWSLT) 2021 Chengqi Zhao, Zhicheng Liu, Jian Tong, Tao Wang, Mingxuan Wang, Rong Ye, Qianqian Dong, Jun Cao, Lei LI

For offline speech translation, our best end-to-end model achieves 8.1 BLEU improvements over the benchmark on the MuST-C test set and is even approaching the results of a strong cascade solution.

Translation

PARAGEN: A Parallel Generation Toolkit

1 code implementation 7 Oct 2022 Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei LI, Hao Zhou

PARAGEN is a PyTorch-based NLP toolkit for further development on parallel generation.

Model Selection

Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information

1 code implementation EMNLP 2020 Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei LI

We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs?

Ranked #3 on Machine Translation on WMT2014 English-French (using extra training data)

Machine Translation Translation

Contrastive Learning for Many-to-many Multilingual Neural Machine Translation

3 code implementations ACL 2021 Xiao Pan, Mingxuan Wang, Liwei Wu, Lei LI

Existing multilingual machine translation approaches mainly focus on English-centric directions, while the non-English directions still lag behind.

Contrastive Learning Data Augmentation +2

Cross-modal Contrastive Learning for Speech Translation

1 code implementation NAACL 2022 Rong Ye, Mingxuan Wang, Lei LI

Learning similar representations for semantically similar speech and text is important for speech translation.

Contrastive Learning Retrieval +3
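The idea summarized above, pulling paired speech and text representations together, is at heart a cross-modal contrastive (InfoNCE-style) loss. Below is a minimal PyTorch sketch under that reading; the mean-pooled inputs, embedding size, and temperature are illustrative assumptions, not ConST's actual configuration.

```python
import torch
import torch.nn.functional as F

def cross_modal_infonce(speech_emb, text_emb, temperature=0.05):
    """Contrastive loss over paired (speech, text) embeddings.

    speech_emb, text_emb: (batch, dim) pooled utterance/sentence vectors;
    row i of each tensor comes from the same training pair.
    """
    speech_emb = F.normalize(speech_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = speech_emb @ text_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0))             # positives sit on the diagonal
    # Symmetric: speech-to-text and text-to-speech retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with stand-ins for pooled encoder outputs.
speech = torch.randn(8, 256)   # e.g. mean-pooled acoustic encoder states
text = torch.randn(8, 256)     # e.g. mean-pooled text encoder states
print(cross_modal_infonce(speech, text).item())
```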

Learning Shared Semantic Space for Speech-to-Text Translation

2 code implementations Findings (ACL) 2021 Chi Han, Mingxuan Wang, Heng Ji, Lei LI

By projecting audio and text features to a common semantic representation, Chimera unifies MT and ST tasks and boosts the performance on ST benchmarks, MuST-C and Augmented Librispeech, to a new state-of-the-art.

Machine Translation Speech-to-Text Translation +1
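The one-line summary above, projecting audio and text into a common semantic representation, can be sketched with a small set of learnable shared vectors that attend over either modality's features. The module below (its name, sizes, and single attention layer) is an assumption for illustration, not Chimera's published architecture.

```python
import torch
import torch.nn as nn

class SharedSemanticProjection(nn.Module):
    def __init__(self, dim=512, n_memory=64):
        super().__init__()
        # Learnable queries shared by speech and text inputs.
        self.memory = nn.Parameter(torch.randn(n_memory, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, features):
        # features: (batch, seq_len, dim) from either the speech or text encoder.
        queries = self.memory.unsqueeze(0).expand(features.size(0), -1, -1)
        # Each memory slot summarizes the input, so speech and text both land
        # in the same fixed-length (n_memory, dim) semantic space.
        out, _ = self.attn(queries, features, features)
        return out

proj = SharedSemanticProjection()
speech_feats = torch.randn(2, 300, 512)  # long acoustic sequence
text_feats = torch.randn(2, 20, 512)     # short token sequence
print(proj(speech_feats).shape, proj(text_feats).shape)  # both (2, 64, 512)
```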

Consecutive Decoding for Speech-to-text Translation

1 code implementation 21 Sep 2020 Qianqian Dong, Mingxuan Wang, Hao Zhou, Shuang Xu, Bo Xu, Lei LI

The key idea is to generate source transcript and target translation text with a single decoder.

Machine Translation speech-recognition +3
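Since the stated key idea is a single decoder that emits the source transcript followed by the target translation, a greedy-decoding sketch makes the control flow concrete. The decoder interface, special token ids, and separator symbol below are hypothetical.

```python
import torch

SEP_ID, EOS_ID, MAX_LEN = 1, 2, 128

def consecutive_decode(decoder, encoder_out, bos_id=0):
    """Greedy decode: transcript tokens, then SEP, then translation tokens.

    decoder(prefix_ids, encoder_out) -> (1, prefix_len, vocab) logits.
    """
    prefix = [bos_id]
    transcript, translation, seen_sep = [], [], False
    for _ in range(MAX_LEN):
        logits = decoder(torch.tensor([prefix]), encoder_out)
        next_id = int(logits[0, -1].argmax())
        if next_id == EOS_ID:
            break
        if next_id == SEP_ID:
            seen_sep = True            # switch from transcription to translation
        elif not seen_sep:
            transcript.append(next_id)
        else:
            translation.append(next_id)
        prefix.append(next_id)
    return transcript, translation

# Toy usage: a stand-in "decoder" emitting random logits over 100 symbols.
dummy = lambda prefix, enc: torch.randn(1, prefix.size(1), 100)
print(consecutive_decode(dummy, encoder_out=None))
```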

MobileNMT: Enabling Translation in 15MB and 30ms

1 code implementation 7 Jun 2023 Ye Lin, Xiaohui Wang, Zhexi Zhang, Mingxuan Wang, Tong Xiao, Jingbo Zhu

With the co-design of model and engine, compared with the existing system, we speed up 47.0x and save 99.5% of memory with only 11.6% loss of BLEU.

Model Compression NMT +2

Learning When to Translate for Streaming Speech

1 code implementation ACL 2022 Qianqian Dong, Yaoming Zhu, Mingxuan Wang, Lei LI

Given a usually long speech sequence, we develop an efficient monotonic segmentation module inside an encoder-decoder model to accumulate acoustic information incrementally and detect proper speech unit boundaries for the input in the speech translation task.

Sentence Speech-to-Text Translation +1
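The segmentation behavior described above, accumulating acoustic information and firing at detected unit boundaries, can be sketched as a streaming loop. The detector and threshold below are illustrative assumptions, not the paper's actual module.

```python
import torch
import torch.nn as nn

class BoundaryDetector(nn.Module):
    """Scores how likely the accumulated audio ends a speech unit."""
    def __init__(self, dim=80):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, frames):                        # frames: (seq_len, dim)
        return torch.sigmoid(self.score(frames[-1]))  # judge at the newest frame

def stream_segments(frame_stream, detector, threshold=0.5):
    buffer = []
    for frame in frame_stream:                 # frame: (dim,) arrives over time
        buffer.append(frame)
        if detector(torch.stack(buffer)) > threshold:
            yield torch.stack(buffer)          # hand a complete unit to the translator
            buffer = []                        # start accumulating the next unit
    if buffer:
        yield torch.stack(buffer)              # flush the tail at end of stream

detector = BoundaryDetector()
frames = (torch.randn(80) for _ in range(50))  # fake 50-frame utterance
for seg in stream_segments(frames, detector):
    print("segment of", seg.size(0), "frames")
```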

DUB: Discrete Unit Back-translation for Speech Translation

1 code implementation 19 May 2023 Dong Zhang, Rong Ye, Tom Ko, Mingxuan Wang, Yaqian Zhou

The key point is to bridge the modality gap between speech and text so that useful MT techniques can be applied to ST.

Machine Translation Speech-to-Text Translation +1
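Read together with the title, the idea is back-translation over discrete speech units: a reverse text-to-unit model turns monolingual target text into pseudo "speech", yielding extra training pairs for the forward unit-to-text model. Both callables below are hypothetical stand-ins.

```python
def back_translate_units(target_sentences, text_to_unit, unit_to_text_trainer):
    """Build pseudo (units, text) pairs from monolingual target text."""
    pseudo_pairs = []
    for sent in target_sentences:
        # Speech is assumed pre-discretized (e.g. k-means codes over SSL
        # features), so the reverse model outputs unit ids, not waveforms.
        units = text_to_unit(sent)             # list[int]: pseudo "speech"
        pseudo_pairs.append((units, sent))
    unit_to_text_trainer(pseudo_pairs)         # ordinary MT-style training step
    return pseudo_pairs

# Toy usage with stand-in callables.
fake_t2u = lambda s: [hash(w) % 500 for w in s.split()]   # 500 fake unit codes
pairs = back_translate_units(["hello world"], fake_t2u, lambda p: None)
print(pairs)
```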

DINOISER: Diffused Conditional Sequence Learning by Manipulating Noises

1 code implementation 20 Feb 2023 Jiasheng Ye, Zaixiang Zheng, Yu Bao, Lihua Qian, Mingxuan Wang

In this paper, we introduce DINOISER to facilitate diffusion models for sequence generation by manipulating noises.

Unified Multimodal Punctuation Restoration Framework for Mixed-Modality Corpus

1 code implementation 24 Jan 2022 Yaoming Zhu, Liwei Wu, Shanbo Cheng, Mingxuan Wang

The punctuation restoration task aims to correctly punctuate the output transcriptions of automatic speech recognition systems.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

End-to-end Speech Translation via Cross-modal Progressive Training

1 code implementation 21 Apr 2021 Rong Ye, Mingxuan Wang, Lei LI

XSTNet takes both speech and text as input and outputs both transcription and translation text.

Machine Translation Speech-to-Text Translation +1

Controlling Styles in Neural Machine Translation with Activation Prompt

1 code implementation 17 Dec 2022 Yifan Wang, Zewei Sun, Shanbo Cheng, Weiguo Zheng, Mingxuan Wang

Controlling styles in neural machine translation (NMT) has attracted wide attention, as it is crucial for enhancing user experience.

Machine Translation NMT +1

SESCORE2: Learning Text Generation Evaluation via Synthesizing Realistic Mistakes

1 code implementation 19 Dec 2022 Wenda Xu, Xian Qian, Mingxuan Wang, Lei LI, William Yang Wang

In this paper, we propose SESCORE2, a self-supervised approach for training a model-based metric for text generation evaluation.

Dialogue Generation Machine Translation +2
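One way to picture the mistake-synthesis step named in the title: apply controlled edits to a clean reference and use the edit count as a negated pseudo-quality label for metric training. The edit types and scoring below are illustrative guesses, not SESCORE2's actual procedure.

```python
import random

def synthesize_mistakes(reference, n_edits=2, rng=random.Random(0)):
    """Corrupt a reference with n_edits random edits; return (text, pseudo score)."""
    tokens = reference.split()
    for _ in range(n_edits):
        op = rng.choice(["drop", "swap", "repeat"])
        i = rng.randrange(len(tokens))
        if op == "drop" and len(tokens) > 1:
            tokens.pop(i)
        elif op == "swap" and i + 1 < len(tokens):
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
        else:
            tokens.insert(i, tokens[i])        # repeat a token in place
    return " ".join(tokens), -float(n_edits)   # more edits -> lower pseudo quality

print(synthesize_mistakes("the quick brown fox jumps over the lazy dog"))
```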

BigVideo: A Large-scale Video Subtitle Translation Dataset for Multimodal Machine Translation

1 code implementation 23 May 2023 Liyan Kang, Luyang Huang, Ningxin Peng, Peihao Zhu, Zewei Sun, Shanbo Cheng, Mingxuan Wang, Degen Huang, Jinsong Su

We also introduce two deliberately designed test sets to verify the necessity of visual information: Ambiguous with the presence of ambiguous words, and Unambiguous in which the text context is self-contained for translation.

Contrastive Learning Multimodal Machine Translation +3

CTC-based Non-autoregressive Speech Translation

1 code implementation 27 May 2023 Chen Xu, Xiaoqian Liu, Xiaowen Liu, Qingxuan Sun, Yuhao Zhang, Murun Yang, Qianqian Dong, Tom Ko, Mingxuan Wang, Tong Xiao, Anxiang Ma, Jingbo Zhu

Combining end-to-end speech translation (ST) and non-autoregressive (NAR) generation is promising in language and speech processing for their advantages of less error propagation and low latency.

Translation
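The CTC objective named in the title is the standard one, available directly in PyTorch: every encoder position predicts a target token or blank in parallel, and the loss marginalizes over alignments. The sketch below, including greedy collapse decoding, uses illustrative shapes rather than the paper's setup.

```python
import torch
import torch.nn as nn

VOCAB, BLANK = 100, 0
ctc = nn.CTCLoss(blank=BLANK, zero_infinity=True)

# Stand-in for per-frame log-probs from a speech encoder + linear head,
# shaped (input_len, batch, vocab) as nn.CTCLoss expects.
log_probs = torch.randn(50, 2, VOCAB).log_softmax(-1)
targets = torch.randint(1, VOCAB, (2, 12))      # target token ids, no blanks
input_lens = torch.full((2,), 50)
target_lens = torch.full((2,), 12)
print(ctc(log_probs, targets, input_lens, target_lens).item())

# Greedy non-autoregressive decoding: argmax per frame, then collapse
# repeated symbols and drop blanks.
def ctc_collapse(frame_ids):
    out, prev = [], BLANK
    for i in frame_ids.tolist():
        if i != prev and i != BLANK:
            out.append(i)
        prev = i
    return out

print(ctc_collapse(log_probs.argmax(-1)[:, 0]))  # hypothesis for batch item 0
```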

GigaST: A 10,000-hour Pseudo Speech Translation Corpus

1 code implementation 8 Apr 2022 Rong Ye, Chengqi Zhao, Tom Ko, Chutong Meng, Tao Wang, Mingxuan Wang, Jun Cao

The training set is translated by a strong machine translation system and the test set is translated by humans.

Machine Translation Translation

Deep Neural Machine Translation with Linear Associative Unit

no code implementations ACL 2017 Mingxuan Wang, Zhengdong Lu, Jie Zhou, Qun Liu

Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures.

Machine Translation NMT +1

Memory-enhanced Decoder for Neural Machine Translation

no code implementations EMNLP 2016 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN.

Machine Translation NMT +2
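The external-memory extension summarized above can be sketched as a decoder step that reads from a slot memory by attention, updates its recurrent state, and writes back. The single read/write head and sizes below are simplified assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryDecoderCell(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.rnn = nn.GRUCell(dim * 2, dim)    # input: [token_emb; memory_read]
        self.read_q = nn.Linear(dim, dim)
        self.write = nn.Linear(dim, dim)

    def forward(self, token_emb, state, memory):
        # READ: attention of the decoder state over the memory slots.
        scores = memory @ self.read_q(state).unsqueeze(-1)     # (slots, 1)
        weights = F.softmax(scores, dim=0)
        read = (weights * memory).sum(0)                       # (dim,)
        # UPDATE the recurrent state with the token and what was read.
        rnn_in = torch.cat([token_emb, read]).unsqueeze(0)
        state = self.rnn(rnn_in, state.unsqueeze(0))[0]
        # WRITE: blend the new state into the slots it attended to.
        memory = memory + weights * torch.tanh(self.write(state))
        return state, memory

cell = MemoryDecoderCell()
state, memory = torch.zeros(256), torch.randn(8, 256)          # 8 memory slots
state, memory = cell(torch.randn(256), state, memory)
print(state.shape, memory.shape)
```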

Syntax-based Deep Matching of Short Texts

no code implementations 9 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts.

Machine Translation Question Answering +1

Encoding Source Language with Convolutional Neural Network for Machine Translation

no code implementations IJCNLP 2015 Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, Qun Liu

The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT.

Language Modelling Machine Translation +2

$gen$CNN: A Convolutional Architecture for Word Sequence Prediction

no code implementations 17 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu

Different from previous work on neural network-based language modeling and generation (e.g., RNN or LSTM), we choose not to greedily summarize the history of words as a fixed length vector.

Language Modelling Machine Translation +3

Towards Linear Time Neural Machine Translation with Capsule Networks

no code implementations IJCNLP 2019 Mingxuan Wang, Jun Xie, Zhixing Tan, Jinsong Su, Deyi Xiong, Lei LI

In this study, we first investigate a novel capsule network with dynamic routing for linear time Neural Machine Translation (NMT), referred to as CapsNMT.

Machine Translation NMT +2
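Dynamic routing is a concrete, published algorithm (routing-by-agreement, Sabour et al., 2017); below is a minimal sketch of that iteration. How CapsNMT wires it into an NMT encoder is not reproduced here, and the shapes and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def squash(v, dim=-1):
    """Capsule nonlinearity: shrink short vectors, preserve direction."""
    norm2 = (v * v).sum(dim, keepdim=True)
    return norm2 / (1 + norm2) * v / (norm2.sqrt() + 1e-8)

def dynamic_routing(u_hat, iters=3):
    """u_hat: (in_caps, out_caps, dim) prediction vectors from lower capsules."""
    b = torch.zeros(u_hat.shape[:2])            # routing logits
    for _ in range(iters):
        c = F.softmax(b, dim=1)                 # each input capsule splits its vote
        s = (c.unsqueeze(-1) * u_hat).sum(0)    # weighted sum -> (out_caps, dim)
        v = squash(s)                           # output capsules
        b = b + (u_hat * v.unsqueeze(0)).sum(-1)  # agreement raises the logits
    return v

caps = dynamic_routing(torch.randn(16, 4, 32))  # 16 inputs routed into 4 capsules
print(caps.shape)                               # torch.Size([4, 32])
```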

Incorporating Word Reordering Knowledge into Attention-based Neural Machine Translation

no code implementations ACL 2017 Jinchao Zhang, Mingxuan Wang, Qun Liu, Jie Zhou

This paper proposes three distortion models to explicitly incorporate the word reordering knowledge into attention-based Neural Machine Translation (NMT) for further improving translation performance.

Machine Translation NMT +2

Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation

no code implementations 12 Jul 2020 Yuxuan Song, Ning Miao, Hao Zhou, Lantao Yu, Mingxuan Wang, Lei LI

Auto-regressive sequence generative models trained by Maximum Likelihood Estimation suffer the exposure bias problem in practical finite sample scenarios.

Density Ratio Estimation Text Generation

Xiaomingbot: A Multilingual Robot News Reporter

no code implementations ACL 2020 Runxin Xu, Jun Cao, Mingxuan Wang, Jiaze Chen, Hao Zhou, Ying Zeng, Yu-Ping Wang, Li Chen, Xiang Yin, Xijin Zhang, Songcheng Jiang, Yuxuan Wang, Lei LI

This paper proposes the building of Xiaomingbot, an intelligent, multilingual and multimodal software robot equipped with four integral capabilities: news generation, news translation, news reading and avatar animation.

News Generation Translation +1

Non-iterative Parallel Text Generation via Glancing Transformer

no code implementations 1 Jan 2021 Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, Lei LI

Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-ups, they still fall behind their autoregressive counterparts in prediction accuracy.

Language Modelling Text Generation

Volctrans Parallel Corpus Filtering System for WMT 2020

no code implementations WMT (EMNLP) 2020 Runxin Xu, Zhuo Zhi, Jun Cao, Mingxuan Wang, Lei LI

In this paper, we describe our submissions to the WMT20 shared task on parallel corpus filtering and alignment for low-resource conditions.

Sentence Word Alignment

Reciprocal Supervised Learning Improves Neural Machine Translation

1 code implementation 5 Dec 2020 Minkai Xu, Mingxuan Wang, Zhouhan Lin, Hao Zhou, Weinan Zhang, Lei LI

Despite the recent success on image classification, self-training has only achieved limited gains on structured prediction tasks such as neural machine translation (NMT).

Image Classification Knowledge Distillation +4

Secoco: Self-Correcting Encoding for Neural Machine Translation

no code implementations Findings (EMNLP) 2021 Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei LI, Hang Li, Deyi Xiong

This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with input noise for robust neural machine translation by introducing self-correcting predictors.

Machine Translation NMT +1

Pre-training Methods for Neural Machine Translation

no code implementations ACL 2021 Mingxuan Wang, Lei LI

This tutorial provides a comprehensive guide to make the most of pre-training for neural machine translation.

Machine Translation NMT +1

The VolcTrans System for WMT22 Multilingual Machine Translation Task

no code implementations 20 Oct 2022 Xian Qian, Kai Hu, Jiaqiang Wang, Yifeng Liu, Xingyuan Pan, Jun Cao, Mingxuan Wang

This report describes our VolcTrans system for the WMT22 shared task on large-scale multilingual machine translation.

Machine Translation Translation

Leveraging per Image-Token Consistency for Vision-Language Pre-training

no code implementations CVPR 2023 Yunhao Gou, Tom Ko, Hansi Yang, James Kwok, Yu Zhang, Mingxuan Wang

(2) Under-utilization of the unmasked tokens: CMLM primarily focuses on the masked token but it cannot simultaneously leverage other tokens to learn vision-language associations.

Language Modelling Masked Language Modeling +1

Diffusion Glancing Transformer for Parallel Sequence to Sequence Learning

no code implementations 20 Dec 2022 Lihua Qian, Mingxuan Wang, Yang Liu, Hao Zhou

Previously, non-autoregressive models were widely perceived as being superior in generation efficiency but inferior in generation quality due to the difficulties of modeling multiple target modalities.

Knowledge Distillation Machine Translation +1

Understanding Parameter Sharing in Transformers

no code implementations 15 Jun 2023 Ye Lin, Mingxuan Wang, Zhexi Zhang, Xiaohui Wang, Tong Xiao, Jingbo Zhu

Inspired by this, we tune the training hyperparameters related to model convergence in a targeted manner.

Machine Translation

Recent Advances in Direct Speech-to-text Translation

no code implementations 20 Jun 2023 Chen Xu, Rong Ye, Qianqian Dong, Chengqi Zhao, Tom Ko, Mingxuan Wang, Tong Xiao, Jingbo Zhu

Recently, speech-to-text translation has attracted more and more attention and many studies have emerged rapidly.

Data Augmentation Knowledge Distillation +2

MOSPC: MOS Prediction Based on Pairwise Comparison

no code implementations 18 Jun 2023 Kexin Wang, Yunlong Zhao, Qianqian Dong, Tom Ko, Mingxuan Wang

Our framework also surpasses the strong baseline in ranking accuracy on each fine-grained segment.

BLEURT Has Universal Translations: An Analysis of Automatic Metrics by Minimum Risk Training

no code implementations 6 Jul 2023 Yiming Yan, Tao Wang, Chengqi Zhao, ShuJian Huang, Jiajun Chen, Mingxuan Wang

In this study, we systematically analyze and compare various mainstream and cutting-edge automatic metrics from the perspective of their guidance for training machine translation systems.

Machine Translation Sentence +1

Only 5% Attention Is All You Need: Efficient Long-range Document-level Neural Machine Translation

no code implementations 25 Sep 2023 Zihan Liu, Zewei Sun, Shanbo Cheng, ShuJian Huang, Mingxuan Wang

Document-level Neural Machine Translation (DocNMT) has been proven crucial for handling discourse phenomena by introducing document-level context information.

Dimensionality Reduction Machine Translation +1

Speech Translation with Large Language Models: An Industrial Practice

no code implementations 21 Dec 2023 Zhichao Huang, Rong Ye, Tom Ko, Qianqian Dong, Shanbo Cheng, Mingxuan Wang, Hang Li

Given the great success of large language models (LLMs) across various tasks, in this paper, we introduce LLM-ST, a novel and effective speech translation model constructed upon a pre-trained LLM.

Language Modelling Large Language Model +1
