Search Results for author: Qun Liu

Found 291 papers, 79 papers with code

Encoding Source Language with Convolutional Neural Network for Machine Translation

no code implementations IJCNLP 2015 Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, Qun Liu

The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT.

Language Modelling Machine Translation +2

Syntax-based Deep Matching of Short Texts

no code implementations 9 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

Many tasks in natural language processing, ranging from machine translation to question answering, can be reduced to the problem of matching two sentences or more generally two short texts.

Machine Translation Question Answering +1

genCNN: A Convolutional Architecture for Word Sequence Prediction

no code implementations 17 Mar 2015 Mingxuan Wang, Zhengdong Lu, Hang Li, Wenbin Jiang, Qun Liu

Different from previous work on neural network-based language modeling and generation (e.g., RNN or LSTM), we choose not to greedily summarize the history of words as a fixed-length vector.

Language Modelling Machine Translation +3

A Deep Memory-based Architecture for Sequence-to-Sequence Learning

no code implementations 22 Jun 2015 Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, Qun Liu

We propose DEEPMEMORY, a novel deep architecture for sequence-to-sequence learning, which performs the task through a series of nonlinear transformations from the representation of the input sequence (e.g., a Chinese sentence) to the final output sequence (e.g., translation to English).

Machine Translation Sentence +1

An Automatic Machine Translation Evaluation Metric Based on Dependency Parsing Model

no code implementations 9 Aug 2015 Hui Yu, Xiaofeng Wu, Wenbin Jiang, Qun Liu, ShouXun Lin

To avoid these problems, we propose a novel automatic evaluation metric based on a dependency parsing model, with no need for humans to define sub-structures.

Dependency Parsing Machine Translation +2

Variational Neural Discourse Relation Recognizer

1 code implementation EMNLP 2016 Biao Zhang, Deyi Xiong, Jinsong Su, Qun Liu, Rongrong Ji, Hong Duan, Min Zhang

In order to perform efficient inference and learning, we introduce neural discourse relation models to approximate the prior and posterior distributions of the latent variable, and employ these approximated distributions to optimize a reparameterized variational lower bound.

Relation
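The entry above describes approximating the prior and posterior of a latent variable and optimizing a reparameterized variational lower bound. A minimal sketch of that generic recipe (not the authors' code; the network shapes and the standard-normal prior are illustrative assumptions) might look like:

```python
import torch
import torch.nn as nn

class VariationalRelationClassifier(nn.Module):
    """Toy VAE-style classifier: an approximate posterior q(z|x) is sampled
    with the reparameterization trick and regularized toward a prior p(z)."""

    def __init__(self, input_dim=512, latent_dim=64, num_relations=4):
        super().__init__()
        self.posterior = nn.Linear(input_dim, 2 * latent_dim)   # predicts mean and log-variance
        self.classifier = nn.Linear(latent_dim, num_relations)

    def forward(self, x, y):
        mu, logvar = self.posterior(x).chunk(2, dim=-1)
        # Reparameterization: z = mu + sigma * eps keeps the sample differentiable.
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
        nll = nn.functional.cross_entropy(self.classifier(z), y)
        # KL term between q(z|x) and a standard-normal prior (assumed here).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return nll + kl  # negative variational lower bound
```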

A Novel Approach to Dropped Pronoun Translation

no code implementations NAACL 2016 Long-Yue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, Qun Liu

Finally, we integrate the above outputs into our translation system to recall missing pronouns by both extracting rules from the DP-labelled training data and translating the DP-generated input sentences.

Machine Translation Translation

ProphetMT: A Tree-based SMT-driven Controlled Language Authoring/Post-Editing Tool

no code implementations LREC 2016 Xiaofeng Wu, Jinhua Du, Qun Liu, Andy Way

This paper presents ProphetMT, a tree-based SMT-driven Controlled Language (CL) authoring and post-editing tool.

Translation

Automatic Construction of Discourse Corpora for Dialogue Translation

no code implementations LREC 2016 Long-Yue Wang, Xiaojun Zhang, Zhaopeng Tu, Andy Way, Qun Liu

Then tags such as speaker and discourse boundary from the script data are projected to its subtitle data via an information retrieval approach in order to map monolingual discourse to bilingual texts.

Information Retrieval Language Modelling +3

Memory-enhanced Decoder for Neural Machine Translation

no code implementations EMNLP 2016 Mingxuan Wang, Zhengdong Lu, Hang Li, Qun Liu

We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN.

Machine Translation NMT +2

Interactive Attention for Neural Machine Translation

no code implementations COLING 2016 Fandong Meng, Zhengdong Lu, Hang Li, Qun Liu

Conventional attention-based Neural Machine Translation (NMT) conducts dynamic alignment in generating the target sentence.

Machine Translation NMT +2

Enriching Phrase Tables for Statistical Machine Translation Using Mixed Embeddings

no code implementations COLING 2016 Peyman Passban, Qun Liu, Andy Way

PBSMT engines by default provide four probability scores in phrase tables which are considered as the main set of bilingual features.

Document Classification Machine Translation +3

Fast Gated Neural Domain Adaptation: Language Model as a Case Study

no code implementations COLING 2016 Jian Zhang, Xiaofeng Wu, Andy Way, Qun Liu

We show that the neural LM perplexity can be reduced by 7.395 and 12.011 using the proposed domain adaptation mechanism on the Penn Treebank and News data, respectively.

Domain Adaptation Language Modelling +2

Topic-Informed Neural Machine Translation

no code implementations COLING 2016 Jian Zhang, Liangyou Li, Andy Way, Qun Liu

In recent years, neural machine translation (NMT) has demonstrated state-of-the-art machine translation (MT) performance.

Machine Translation NMT +2

A subtree-based factorization of dependency parsing

no code implementations COLING 2016 Qiuye Zhao, Qun Liu

For Chinese, the most notable increase is as high as 3.63 (UAS) when the proposed framework is applied to first-order parsing models.

Dependency Parsing

Incorporating Global Visual Features into Attention-Based Neural Machine Translation

no code implementations 23 Jan 2017 Iacer Calixto, Qun Liu, Nick Campbell

We introduce multi-modal, attention-based neural machine translation (NMT) models which incorporate visual features into different parts of both the encoder and the decoder.

Multimodal Machine Translation NMT +2

Multilingual Multi-modal Embeddings for Natural Language Processing

no code implementations 3 Feb 2017 Iacer Calixto, Qun Liu, Nick Campbell

We propose a novel discriminative model that learns embeddings from multilingual and multi-modal data, meaning that our model can take advantage of images and descriptions in multiple languages to improve embedding quality.

Machine Translation NMT +5

Doubly-Attentive Decoder for Multi-modal Neural Machine Translation

no code implementations ACL 2017 Iacer Calixto, Qun Liu, Nick Campbell

We introduce a Multi-modal Neural Machine Translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks, bridging the gap between image description and translation.

Multimodal Machine Translation Translation

Improving Evaluation of Document-level Machine Translation Quality Estimation

no code implementations EACL 2017 Yvette Graham, Qingsong Ma, Timothy Baldwin, Qun Liu, Carla Parra, Carolina Scarton

Meaningful conclusions about the relative performance of NLP systems are only possible if the gold standard employed in a given evaluation is both valid and reliable.

Document Level Machine Translation Machine Translation +2

Neural Automatic Post-Editing Using Prior Alignment and Reranking

no code implementations EACL 2017 Santanu Pal, Sudip Kumar Naskar, Mihaela Vela, Qun Liu, Josef van Genabith

APE translations produced by our system show statistically significant improvements over the first-stage MT, phrase-based APE and the best reported score on the WMT 2016 APE dataset by a previous neural APE system.

Automatic Post-Editing NMT +2

Context-Aware Graph Segmentation for Graph-Based Translation

no code implementations EACL 2017 Liangyou Li, Andy Way, Qun Liu

In this paper, we present an improved graph-based translation model which segments an input graph into node-induced subgraphs by taking source context into consideration.

Segmentation Translation

Deep Neural Machine Translation with Linear Associative Unit

no code implementations ACL 2017 Mingxuan Wang, Zhengdong Lu, Jie Zhou, Qun Liu

Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art Neural Machine Translation (NMT) with their capability in modeling complex functions and capturing complex linguistic structures.

Machine Translation NMT +1

Incorporating Word Reordering Knowledge into Attention-based Neural Machine Translation

no code implementations ACL 2017 Jinchao Zhang, Mingxuan Wang, Qun Liu, Jie Zhou

This paper proposes three distortion models to explicitly incorporate the word reordering knowledge into attention-based Neural Machine Translation (NMT) for further improving translation performance.

Machine Translation NMT +2

Sentence-Level Multilingual Multi-modal Embedding for Natural Language Processing

no code implementations RANLP 2017 Iacer Calixto, Qun Liu

We propose a novel discriminative ranking model that learns embeddings from multilingual and multi-modal data, meaning that our model can take advantage of images and descriptions in multiple languages to improve embedding quality.

Machine Translation NMT +5

Further Investigation into Reference Bias in Monolingual Evaluation of Machine Translation

1 code implementation EMNLP 2017 Qingsong Ma, Yvette Graham, Timothy Baldwin, Qun Liu

Monolingual evaluation of Machine Translation (MT) aims to simplify human assessment by requiring assessors to compare the meaning of the MT output with a reference translation, opening up the task to a much larger pool of genuinely qualified evaluators.

Machine Translation Translation

Incorporating Global Visual Features into Attention-based Neural Machine Translation.

no code implementations EMNLP 2017 Iacer Calixto, Qun Liu

We introduce multi-modal, attention-based neural machine translation (NMT) models which incorporate visual features into different parts of both the encoder and the decoder.

Machine Translation NMT +4

Information-Propogation-Enhanced Neural Machine Translation by Relation Model

no code implementations 6 Sep 2017 Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Even though sequence-to-sequence neural machine translation (NMT) models have achieved state-of-the-art performance in recent years, it is a widespread concern that recurrent neural network (RNN) units struggle to capture long-distance state information, which means an RNN can hardly find features with long-term dependencies as the sequence becomes longer.

Machine Translation NMT +4

Refining Source Representations with Relation Networks for Neural Machine Translation

no code implementations 12 Sep 2017 Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Although neural machine translation (NMT) with the encoder-decoder framework has achieved great success in recent times, it still suffers from some drawbacks: RNNs tend to forget old information which is often useful and the encoder only operates through words without considering word relationship.

Machine Translation NMT +2

CASICT Tibetan Word Segmentation System for MLWS2017

1 code implementation 17 Oct 2017 Jiawei Hu, Qun Liu

We participated in the MLWS 2017 Tibetan word segmentation task; our system is trained in an unrestricted way, by introducing a baseline system and 760,000 (76w) Tibetan segmented sentences of our own.

Segmentation

Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking

no code implementations IJCNLP 2017 Long-Yue Wang, Jinhua Du, Liangyou Li, Zhaopeng Tu, Andy Way, Qun Liu

We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management.

Dialogue Understanding Machine Translation +3

Translating Pro-Drop Languages with Reconstruction Models

1 code implementation 10 Jan 2018 Long-Yue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, Qun Liu

Next, the annotated source sentence is reconstructed from hidden representations in the NMT model.

Machine Translation NMT +2

Unsupervised Learning using Pretrained CNN and Associative Memory Bank

no code implementations 2 May 2018 Qun Liu, Supratik Mukhopadhyay

In this paper, we present a new architecture and an approach for unsupervised object recognition that addresses the above-mentioned fine-tuning problem associated with pretrained CNN-based supervised deep learning approaches, while allowing automated feature extraction.

Few-Shot Image Classification Fine-Grained Image Classification +2

SafeRNet: Safe Transportation Routing in the era of Internet of Vehicles and Mobile Crowd Sensing

no code implementations 3 May 2018 Qun Liu, Suman Kumar, Vijay Mago

This paper proposes SafeRNet, a safe route computation framework which takes advantage of these technologies to analyze streaming traffic data and historical data to effectively infer safe routes and deliver them back to users in real time.

Cloud Computing

Refining Source Representations with Relation Networks for Neural Machine Translation

no code implementations COLING 2018 Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Although neural machine translation with the encoder-decoder framework has achieved great success recently, it still suffers from the drawbacks of forgetting distant information, an inherent disadvantage of the recurrent neural network structure, and of disregarding relationships between source words during the encoding step.

Machine Translation Memorization +2

Understanding Meanings in Multilingual Customer Feedback

no code implementations 5 Jun 2018 Chao-Hong Liu, Declan Groves, Akira Hayakawa, Alberto Poncelas, Qun Liu

Understanding and being able to react to customer feedback is the most fundamental task in providing good customer service.

General Classification

Knowledge Diffusion for Neural Dialogue Generation

1 code implementation ACL 2018 Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, Dawei Yin

Our empirical study on a real-world dataset proves that our model is capable of generating meaningful, diverse and natural responses for both factoid questions and knowledge-grounded chit-chats.

Dialogue Generation Question Answering +1

Multimodal Neural Machine Translation for Low-resource Language Pairs using Synthetic Data

no code implementations WS 2018 Koel Dutta Chowdhury, Mohammed Hasanuzzaman, Qun Liu

In this paper, we investigate the effectiveness of training a multimodal neural machine translation (MNMT) system with image features for a low-resource language pair, Hindi and English, using synthetic data.

Machine Translation Question Answering +3

Tailoring Neural Architectures for Translating from Morphologically Rich Languages

no code implementations COLING 2018 Peyman Passban, Andy Way, Qun Liu

A morphologically complex word (MCW) is a hierarchical constituent with meaning-preserving subunits, so word-based models which rely on surface forms might not be powerful enough to translate such structures.

Machine Translation NMT +2

Learning to Jointly Translate and Predict Dropped Pronouns with a Shared Reconstruction Mechanism

no code implementations EMNLP 2018 Long-Yue Wang, Zhaopeng Tu, Andy Way, Qun Liu

Pronouns are frequently omitted in pro-drop languages, such as Chinese, generally leading to significant challenges with respect to the production of complete translations.

Machine Translation Translation

Improving the Robustness of Speech Translation

no code implementations 2 Nov 2018 Xiang Li, Haiyang Xue, Wei Chen, Yang Liu, Yang Feng, Qun Liu

Although neural machine translation (NMT) has achieved impressive progress recently, it is usually trained on the clean parallel data set and hence cannot work well when the input sentence is the production of the automatic speech recognition (ASR) system due to the enormous errors in the source.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +5

Improving Domain Adaptation Translation with Domain Invariant and Specific Information

no code implementations NAACL 2019 Shuhao Gu, Yang Feng, Qun Liu

Besides, we add a discriminator to the shared encoder and employ adversarial training for the whole model to reinforce the performance of information separation and machine translation simultaneously.

Domain Adaptation Machine Translation +1

Bilingual-GAN: A Step Towards Parallel Text Generation

no code implementations WS 2019 Ahmad Rashid, Alan Do-Omri, Md. Akmal Haidar, Qun Liu, Mehdi Rezagholizadeh

Latent space based GAN methods and attention based sequence to sequence models have achieved impressive results in text generation and unsupervised machine translation respectively.

Denoising Text Generation +2

ERNIE: Enhanced Language Representation with Informative Entities

2 code implementations ACL 2019 Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu

Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks.

Entity Linking Entity Typing +6

Bridging the Gap between Training and Inference for Neural Machine Translation

no code implementations ACL 2019 Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu

Neural Machine Translation (NMT) generates target words sequentially in the way of predicting the next word conditioned on the context words.

Machine Translation NMT +2

Decomposable Neural Paraphrase Generation

no code implementations ACL 2019 Zichao Li, Xin Jiang, Lifeng Shang, Qun Liu

Paraphrasing exists at different granularity levels, such as lexical level, phrasal level and sentential level.

Paraphrase Generation Sentence +1

GPT-based Generation for Classical Chinese Poetry

2 code implementations 29 Jun 2019 Yi Liao, Yasheng Wang, Qun Liu, Xin Jiang

We present a simple yet effective method for generating high quality classical Chinese poetry with Generative Pre-trained Language Model (GPT).

Language Modelling

Modeling Semantic Compositionality with Sememe Knowledge

1 code implementation ACL 2019 Fanchao Qi, Jun-Jie Huang, Chenghao Yang, Zhiyuan Liu, Xiao Chen, Qun Liu, Maosong Sun

In this paper, we verify the effectiveness of sememes, the minimum semantic units of human languages, in modeling SC by a confirmatory experiment.

multi-word expression embedding multi-word expression sememe prediction

Huawei's NMT Systems for the WMT 2019 Biomedical Translation Task

no code implementations WS 2019 Wei Peng, Jianfeng Liu, Liangyou Li, Qun Liu

This paper describes Huawei{'}s neural machine translation systems for the WMT 2019 biomedical translation shared task.

Domain Adaptation Machine Translation +3

PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters

no code implementations 11 Aug 2019 Qun Liu, Edward Collier, Supratik Mukhopadhyay

We show that by learning the features at each resolution independently a trained model is able to accurately classify characters even in the presence of noise.

Classification Denoising +3

Dialog State Tracking with Reinforced Data Augmentation

no code implementations 21 Aug 2019 Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

Neural dialog state trackers are generally limited due to the lack of quantity and diversity of annotated training data.

Data Augmentation dialog state tracking +1

NEZHA: Neural Contextualized Representation for Chinese Language Understanding

10 code implementations 31 Aug 2019 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, Qun Liu

Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.

named-entity-recognition Named Entity Recognition +6

TinyBERT: Distilling BERT for Natural Language Understanding

7 code implementations Findings of the Association for Computational Linguistics 2020 Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models.

Knowledge Distillation Language Modelling +6
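As a rough illustration of the Transformer distillation idea summarized in the entry above, a layer-to-layer loss between mapped student and teacher layers could be sketched as follows (hypothetical shapes and mapping; not the paper's exact objective, which also includes embedding and prediction-layer terms):

```python
import torch.nn.functional as F

def layerwise_distillation_loss(student_hiddens, teacher_hiddens,
                                student_attentions, teacher_attentions,
                                layer_map):
    """MSE between mapped student/teacher hidden states and attention maps.

    layer_map: list of (student_layer_idx, teacher_layer_idx) pairs.
    Hidden states: (batch, seq_len, dim); attentions: (batch, heads, seq, seq).
    Assumes matching hidden sizes; TinyBERT-style setups otherwise insert a
    learned linear projection on the student side.
    """
    loss = 0.0
    for s_idx, t_idx in layer_map:
        loss = loss + F.mse_loss(student_hiddens[s_idx], teacher_hiddens[t_idx])
        loss = loss + F.mse_loss(student_attentions[s_idx], teacher_attentions[t_idx])
    return loss
```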

Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes

1 code implementation 20 Oct 2019 Yujia Qin, Fanchao Qi, Sicong Ouyang, Zhiyuan Liu, Cheng Yang, Yasheng Wang, Qun Liu, Maosong Sun

Sememes, the minimum semantic units of human languages, have been successfully utilized in various natural language processing applications.

Adversarial Attack Language Modelling +2

Word-level Textual Adversarial Attacking as Combinatorial Optimization

1 code implementation ACL 2020 Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun

Also, further experiments show our model has higher transferability and can bring more robustness enhancement to victim models by adversarial training.

Adversarial Attack Combinatorial Optimization +3

A General Framework for Adaptation of Neural Machine Translation to Simultaneous Translation

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 Yun Chen, Liangyou Li, Xin Jiang, Xiao Chen, Qun Liu

Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to the syntactic structure difference and simultaneity requirements.

Machine Translation NMT +2

Pretrained Language Models for Document-Level Neural Machine Translation

no code implementations 8 Nov 2019 Liangyou Li, Xin Jiang, Qun Liu

Previous work on document-level NMT usually focuses on limited contexts because of degraded performance on larger contexts.

Machine Translation NMT +2

Zero-Shot Paraphrase Generation with Multilingual Language Models

no code implementations 9 Nov 2019 Yinpeng Guo, Yi Liao, Xin Jiang, Qing Zhang, Yibo Zhang, Qun Liu

Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention, as the size of high-quality paraphrase corpora is limited.

Denoising Machine Translation +3

Deep-seismic-prior-based reconstruction of seismic data using convolutional neural networks

no code implementations 20 Nov 2019 Qun Liu, Lihua Fu, Meng Zhang

Synthetic and field data were tested to assess the performance of the proposed algorithm (DSPRecon algorithm); the advantages of using our method were evaluated by comparing it with the singular spectrum analysis (SSA) method for irregular data reconstruction and the de-aliased Cadzow method for regular data reconstruction.

Integrating Graph Contextualized Knowledge into Pre-trained Language Models

no code implementations 30 Nov 2019 Bin He, Di Zhou, Jinghui Xiao, Xin Jiang, Qun Liu, Nicholas Jing Yuan, Tong Xu

Complex node interactions are common in knowledge graphs, and these interactions also contain rich knowledge information.

Knowledge Graphs Representation Learning

Learning to Predict Explainable Plots for Neural Story Generation

no code implementations 5 Dec 2019 Gang Chen, Yang Liu, Huanbo Luan, Meng Zhang, Qun Liu, Maosong Sun

While the use of neural networks has proven effective in improving story generation, how to learn to generate an explainable high-level plot still remains a major challenge.

Sentence Story Generation

Multi-channel Reverse Dictionary Model

1 code implementation 18 Dec 2019 Lei Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

A reverse dictionary takes the description of a target word as input and outputs the target word together with other words that match the description.

Reverse Dictionary Sentence

Context-Aware Design of Cyber-Physical Human Systems (CPHS)

no code implementations 7 Jan 2020 Supratik Mukhopadhyay, Qun Liu, Edward Collier, Yimin Zhu, Ravindra Gudishala, Chanachok Chokwitthaya, Robert DiBiano, Alimire Nabijiang, Sanaz Saeidi, Subhajit Sidhanta, Arnab Ganguly

The impacts of the context factors driving human-system interaction are difficult to capture and replicate in existing design models.

Decision Making

Dictionary-based Data Augmentation for Cross-Domain Neural Machine Translation

no code implementations 6 Apr 2020 Wei Peng, Chongxuan Huang, Tian-Hao Li, Yun Chen, Qun Liu

Existing data augmentation approaches for neural machine translation (NMT) have predominantly relied on back-translating in-domain (IND) monolingual corpora.

Data Augmentation Machine Translation +2

DynaBERT: Dynamic BERT with Adaptive Width and Depth

3 code implementations NeurIPS 2020 Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

The pre-trained language models like BERT, though powerful in many natural language processing tasks, are both computation and memory expensive.

Language Modelling

Accurate Word Alignment Induction from Neural Machine Translation

1 code implementation EMNLP 2020 Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, Qun Liu

Shift-Att is an interpretation method that induces alignments from the attention weights of Transformer and does not require parameter update or architecture change.

Machine Translation Multi-Task Learning +2
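The alignment-induction idea described in the entry above can be illustrated with a tiny, hypothetical helper that reads a decoder cross-attention matrix and aligns each target position to its highest-attention source position (the paper's shift-by-one reading of the attention and its layer selection criterion are only noted, not reproduced):

```python
import numpy as np

def alignment_from_attention(cross_attention):
    """cross_attention: array of shape (target_len, source_len) taken from one
    decoder cross-attention layer.  Each target token is aligned to the source
    token with the largest attention weight.  Shift-Att additionally reads the
    attention at the step where the target word is the decoder *input*."""
    return {(t, int(np.argmax(row))) for t, row in enumerate(np.asarray(cross_attention))}
```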

Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT

1 code implementation ACL 2020 Zhiyong Wu, Yun Chen, Ben Kao, Qun Liu

However, this approach of evaluating a language model is undermined by the uncertainty of the amount of knowledge that is learned by the probe itself.

Dependency Parsing Language Modelling +2

Learning to Detect Unacceptable Machine Translations for Downstream Tasks

no code implementations 8 May 2020 Meng Zhang, Xin Jiang, Yang Liu, Qun Liu

In this work, we put machine translation in a cross-lingual pipeline and introduce downstream tasks to define task-specific acceptability of machine translations.

Machine Translation Translation

TensorCoder: Dimension-Wise Attention via Tensor Representation for Natural Language Modeling

no code implementations 28 Jul 2020 Shuai Zhang, Peng Zhang, Xindian Ma, Junqiu Wei, Ningning Wang, Qun Liu

Transformer has been widely used in many Natural Language Processing (NLP) tasks, and the scaled dot-product attention between tokens is a core module of Transformer.

Language Modelling Machine Translation +2
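For reference, the standard token-wise scaled dot-product attention mentioned in the entry above is sketched below (TensorCoder's dimension-wise replacement is not shown):

```python
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """softmax(Q K^T / sqrt(d)) V with q, k, v of shape (batch, heads, seq, d)."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```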

TernaryBERT: Distillation-aware Ultra-low Bit BERT

5 code implementations EMNLP 2020 Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, Qun Liu

Transformer-based pre-training models like BERT have achieved remarkable performance in many natural language processing tasks. However, these models are both computation and memory expensive, hindering their deployment to resource-constrained devices.

Knowledge Distillation Quantization

SparTerm: Learning Term-based Sparse Representation for Fast Text Retrieval

no code implementations 2 Oct 2020 Yang Bai, Xiaoguang Li, Gang Wang, Chaoliang Zhang, Lifeng Shang, Jun Xu, Zhaowei Wang, Fangshan Wang, Qun Liu

Term-based sparse representations dominate first-stage text retrieval in industrial applications, due to their advantages in efficiency, interpretability, and exact term matching.

Language Modelling Retrieval +1

The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation

1 code implementation Findings of the Association for Computational Linguistics 2020 Jie He, Tao Wang, Deyi Xiong, Qun Liu

Our experiments and analyses demonstrate that neural machine translation performs poorly on commonsense reasoning of the three ambiguity types in terms of both reasoning accuracy (≤60.1%) and reasoning consistency (≤31%).

Common Sense Reasoning Machine Translation +2

Know What You Don't Need: Single-Shot Meta-Pruning for Attention Heads

no code implementations 7 Nov 2020 Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Qun Liu, Maosong Sun

To measure the informativeness of attention heads, we train our Single-Shot Meta-Pruner (SMP) with a meta-learning paradigm aiming to maintain the distribution of text representations after pruning.

Informativeness Meta-Learning +1

From Unsupervised Machine Translation To Adversarial Text Generation

no code implementations 10 Nov 2020 Ahmad Rashid, Alan Do-Omri, Md. Akmal Haidar, Qun Liu, Mehdi Rezagholizadeh

B-GAN is able to generate a distributed latent space representation which can be paired with an attention based decoder to generate fluent sentences.

Adversarial Text Text Generation +2

PPKE: Knowledge Representation Learning by Path-based Pre-training

no code implementations 7 Dec 2020 Bin He, Di Zhou, Jing Xie, Jinghui Xiao, Xin Jiang, Qun Liu

Entities may have complex interactions in a knowledge graph (KG), such as multi-step relationships, which can be viewed as graph contextual information of the entities.

Link Prediction Representation Learning

Document Graph for Neural Machine Translation

no code implementations EMNLP 2021 Mingzhou Xu, Liangyou Li, Derek F. Wong, Qun Liu, Lidia S. Chao

Previous works have shown that contextual information can improve the performance of neural machine translation (NMT).

Machine Translation NMT +1

KgPLM: Knowledge-guided Language Model Pre-training via Generative and Discriminative Learning

no code implementations 7 Dec 2020 Bin He, Xin Jiang, Jinghui Xiao, Qun Liu

Recent studies on pre-trained language models have demonstrated their ability to capture factual knowledge and applications in knowledge-aware downstream tasks.

Language Modelling Machine Reading Comprehension +2

Improving Task-Agnostic BERT Distillation with Layer Mapping Search

no code implementations 11 Dec 2020 Xiaoqi Jiao, Huating Chang, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

Comprehensive experiments on the evaluation benchmarks demonstrate that 1) layer mapping strategy has a significant effect on task-agnostic BERT distillation and different layer mappings can result in quite different performances; 2) the optimal layer mapping strategy from the proposed search process consistently outperforms the other heuristic ones; 3) with the optimal layer mapping, our student model achieves state-of-the-art performance on the GLUE tasks.

Knowledge Distillation

ALP-KD: Attention-Based Layer Projection for Knowledge Distillation

no code implementations 27 Dec 2020 Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, Qun Liu

Knowledge distillation is considered a training and compression strategy in which two neural networks, namely a teacher and a student, are coupled together during training.

Knowledge Distillation

Revisiting Robust Neural Machine Translation: A Transformer Case Study

no code implementations Findings (EMNLP) 2021 Peyman Passban, Puneeth S. M. Saladi, Qun Liu

There is a large body of work in the NMT literature on analyzing the behavior of conventional models for the problem of noise but Transformers are relatively understudied in this context.

Denoising Machine Translation +2

Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning

1 code implementation 31 Dec 2020 Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

In this work, we propose a simple and effective method to cover a much larger proportion of the attack search space, called Adversarial and Mixup Data Augmentation (AMDA).

Adversarial Robustness Text Augmentation +2

On Position Embeddings in BERT

no code implementations ICLR 2021 Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Grue Simonsen

Various Position Embeddings (PEs) have been proposed in Transformer-based architectures (e.g., BERT) to model word order.

General Classification Position +1
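As a reference point for the PEs discussed in the entry above, the original fixed sinusoidal position embeddings (one of the variants such work typically compares; BERT itself uses learned PEs) can be written as:

```python
import numpy as np

def sinusoidal_position_embeddings(max_len, dim):
    """PE[pos, 2i] = sin(pos / 10000^(2i/dim)), PE[pos, 2i+1] = cos(...)."""
    assert dim % 2 == 0, "even embedding dimension assumed for simplicity"
    pos = np.arange(max_len)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / np.power(10000.0, 2 * i / dim)
    pe = np.zeros((max_len, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```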

Training Multilingual Pre-trained Language Model with Byte-level Subwords

1 code implementation 23 Jan 2021 Junqiu Wei, Qun Liu, Yinpeng Guo, Xin Jiang

Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.

Language Modelling Natural Language Understanding

LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation

no code implementations 11 Mar 2021 Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

The multilingual pre-trained language models (e.g., mBERT, XLM and XLM-R) have shown impressive performance on cross-lingual natural language understanding tasks.

Natural Language Understanding XLM-R

Reweighting Augmented Samples by Minimizing the Maximal Expected Loss

1 code implementation ICLR 2021 Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma

Inspired by adversarial training, we minimize this maximal expected loss (MMEL) and obtain a simple and interpretable closed-form solution: more attention should be paid to augmented samples with large loss values (i.e., harder examples).

Image Augmentation Image Classification +1
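The closed-form reweighting described in the entry above (more weight on harder augmented samples) is commonly realized as a softmax over per-sample losses; a minimal sketch under that assumption (the temperature and the detaching of weights are illustrative choices, not the paper's exact formulation):

```python
import torch

def mmel_weighted_loss(per_sample_losses, temperature=1.0):
    """per_sample_losses: 1-D tensor of losses for augmented copies of one example.
    Softmax weighting gives larger weight to augmented samples with larger loss."""
    weights = torch.softmax(per_sample_losses.detach() / temperature, dim=0)
    return (weights * per_sample_losses).sum()
```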

Dependency Graph-to-String Statistical Machine Translation

no code implementations 20 Mar 2021 Liangyou Li, Andy Way, Qun Liu

We present graph-based translation models which translate source graphs into target strings.

Machine Translation Translation

An Approach to Improve Robustness of NLP Systems against ASR Errors

no code implementations 25 Mar 2021 Tong Cui, Jinghui Xiao, Liangyou Li, Xin Jiang, Qun Liu

Speech-enabled systems typically first convert audio to text through an automatic speech recognition (ASR) model and then feed the text to downstream natural language processing (NLP) modules.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +5

From Fully Trained to Fully Random Embeddings: Improving Neural Machine Translation with Compact Word Embedding Tables

no code implementations 18 Apr 2021 Krtin Kumar, Peyman Passban, Mehdi Rezagholizadeh, Yiu Sing Lau, Qun Liu

Embedding matrices are key components in neural natural language processing (NLP) models that are responsible for providing numerical representations of input tokens. (In this paper, words and subwords are referred to as "tokens", and the term "embedding" only refers to embeddings of inputs.)

Machine Translation NMT +2

Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation

no code implementations 24 Apr 2021 Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu

Task-agnostic knowledge distillation, a teacher-student framework, has proven effective for BERT compression.

Knowledge Distillation

Dynamic Multi-Branch Layers for On-Device Neural Machine Translation

1 code implementation 14 May 2021 Zhixing Tan, Zeyuan Yang, Meng Zhang, Qun Liu, Maosong Sun, Yang Liu

With the rapid development of artificial intelligence (AI), there is a trend in moving AI applications, such as neural machine translation (NMT), from cloud to mobile devices.

Machine Translation NMT +1

Improved OOD Generalization via Adversarial Training and Pre-training

no code implementations 24 May 2021 Mingyang Yi, Lu Hou, Jiacheng Sun, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma

In this paper, after defining OOD generalization via Wasserstein distance, we theoretically show that a model robust to input perturbation generalizes well on OOD data.

Image Classification Natural Language Understanding

Multilingual Speech Translation with Unified Transformer: Huawei Noah's Ark Lab at IWSLT 2021

no code implementations 1 Jun 2021 Xingshan Zeng, Liangyou Li, Qun Liu

We use a unified transformer architecture for our MultiST model, so that the data from different modalities (i.e., speech and text) and different tasks (i.e., Speech Recognition, Machine Translation, and Speech Translation) can be exploited to enhance the model's ability.

Data Augmentation Machine Translation +4

Sub-Character Tokenization for Chinese Pretrained Language Models

2 code implementations 1 Jun 2021 Chenglei Si, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

2) Pronunciation-based SubChar tokenizers can encode Chinese homophones into the same transliteration sequences and produce the same tokenization output, hence being robust to homophone typos.

Chinese Word Segmentation Computational Efficiency +2
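A tiny, hypothetical illustration of the pronunciation-based idea in the entry above, using the third-party pypinyin package to collapse homophones into the same transliteration before a subword tokenizer (e.g., BPE) is trained; the paper's actual tokenizer construction is not reproduced here:

```python
from pypinyin import lazy_pinyin  # third-party: Chinese characters -> pinyin

def to_pronunciation_sequence(text):
    """Map each Chinese character to its tone-less pinyin, so homophones such as
    他 and 她 (both 'ta') yield identical symbols for the downstream tokenizer."""
    return " ".join(lazy_pinyin(text))

# e.g. to_pronunciation_sequence("机器翻译") -> "ji qi fan yi"
```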

Learning Multilingual Representation for Natural Language Understanding with Enhanced Cross-Lingual Supervision

no code implementations 9 Jun 2021 Yinpeng Guo, Liangyou Li, Xin Jiang, Qun Liu

Recently, pre-training multilingual language models has shown great potential in learning multilingual representation, a crucial topic of natural language processing.

Natural Language Understanding

RealTranS: End-to-End Simultaneous Speech Translation with Convolutional Weighted-Shrinking Transformer

no code implementations Findings (ACL) 2021 Xingshan Zeng, Liangyou Li, Qun Liu

To bridge the modality gap between speech and text, RealTranS gradually downsamples the input speech with interleaved convolution and unidirectional Transformer layers for acoustic modeling, and then maps speech features into text space with a weighted-shrinking operation and a semantic encoder.

Translation

A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering

1 code implementation ACL 2021 Zhihong Shao, Lifeng Shang, Qun Liu, Minlie Huang

This setting gives rise to the spurious solution problem: there may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance (e.g., producing wrong solutions or answers).

Question Answering

AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models

1 code implementation ACL 2021 Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

Specifically, we carefully design the techniques of one-shot learning and the search space to provide an adaptive and efficient development way of tiny PLMs for various latency constraints.

Neural Architecture Search One-Shot Learning

TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models

no code implementations ACL 2021 Jie He, Bo Peng, Yi Liao, Qun Liu, Deyi Xiong

Each error is hence manually labeled with comprehensive annotations, including the span of the error, the associated span, minimal correction to the error, the type of the error, and rationale behind the error.

Common Sense Reasoning Text Generation

GhostBERT: Generate More Features with Cheap Operations for BERT

no code implementations ACL 2021 Zhiqi Huang, Lu Hou, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

Transformer-based pre-trained language models like BERT, though powerful in many tasks, are expensive in both memory and computation, due to their large number of parameters.

Uncertainty-Aware Balancing for Multilingual and Multi-Domain Neural Machine Translation Training

no code implementations EMNLP 2021 Minghao Wu, Yitong Li, Meng Zhang, Liangyou Li, Gholamreza Haffari, Qun Liu

In this work, we propose an approach, MultiUAT, that dynamically adjusts the training data usage based on the model's uncertainty on a small set of trusted clean data for multi-corpus machine translation.

Machine Translation Translation

NumGPT: Improving Numeracy Ability of Generative Pre-trained Models

no code implementations 7 Sep 2021 Zhihua Jin, Xin Jiang, Xingbo Wang, Qun Liu, Yong Wang, Xiaozhe Ren, Huamin Qu

However, those models do not consider the numerical properties of numbers and cannot perform robustly on numerical reasoning tasks (e.g., math word problems and measurement estimation).

Math

CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems

no code implementations 10 Sep 2021 Fei Mi, Yitong Li, Yasheng Wang, Xin Jiang, Qun Liu

As labeling cost for different modules in task-oriented dialog (ToD) systems is high, a major challenge in practice is to learn different tasks with the least amount of labeled data.

dialog state tracking Few-Shot Learning +3

UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation

no code implementations 13 Sep 2021 Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang

Specifically, we adopt knowledge distillation from a vision-language pretrained model to improve image selection, which avoids any requirement on the existence and quality of image captions.

Abstractive Text Summarization Image Captioning +2

Improving Unsupervised Question Answering via Summarization-Informed Question Generation

no code implementations EMNLP 2021 Chenyang Lyu, Lifeng Shang, Yvette Graham, Jennifer Foster, Xin Jiang, Qun Liu

Template-based QG uses linguistically-informed heuristics to transform declarative sentences into interrogatives, whereas supervised QG uses existing Question Answering (QA) datasets to train a system to generate a question given a passage and an answer.

Dependency Parsing named-entity-recognition +8

Multi-Semantic Image Recognition Model and Evaluating Index for explaining the deep learning models

no code implementations 28 Sep 2021 Qianmengke Zhao, Ye Wang, Qun Liu

Although deep learning models are powerful across various applications, most deep learning models remain black boxes, lacking verifiability and interpretability, which means their decision-making process cannot be understood by human beings.

Decision Making Image Classification

Speech-MLP: a simple MLP architecture for speech processing

no code implementations 29 Sep 2021 Chao Xing, Dong Wang, LiRong Dai, Qun Liu, Anderson Avila

Overparameterized transformer-based architectures have shown remarkable performance in recent years, achieving state-of-the-art results in speech processing tasks such as speech recognition, speech synthesis, keyword spotting, and speech enhancement.

Keyword Spotting Speech Enhancement +3

bert2BERT: Towards Reusable Pretrained Language Models

no code implementations ACL 2022 Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, Qun Liu

However, large language model pre-training costs intensive computational resources and most of the models are trained from scratch without reusing the existing pre-trained models, which is wasteful.

Language Modelling Large Language Model

JABER and SABER: Junior and Senior Arabic BERt

1 code implementation 8 Dec 2021 Abbas Ghaddar, Yimeng Wu, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais

Language-specific pre-trained models have proven to be more accurate than multilingual ones in a monolingual evaluation setting, and Arabic is no exception.

Language Modelling NER
