Search Results for author: Qun Liu

Found 250 papers, 48 papers with code

MTRec: Multi-Task Learning over BERT for News Recommendation

no code implementations Findings (ACL) 2022 Qiwei Bi, Jian Li, Lifeng Shang, Xin Jiang, Qun Liu, Hanfang Yang

With the adoption of large pre-trained models like BERT in news recommendation, the above way to incorporate multi-field information may encounter challenges: the shallow feature encoding to compress the category and entity information is not compatible with the deep BERT encoding.

Multi-Task Learning News Recommendation

Controlled Text Generation Using Dictionary Prior in Variational Autoencoders

no code implementations Findings (ACL) 2022 Xianghong Fang, Jian Li, Lifeng Shang, Xin Jiang, Qun Liu, Dit-yan Yeung

While variational autoencoders (VAEs) have been widely applied in text generation tasks, they are troubled by two challenges: insufficient representation capacity and poor controllability.

Contrastive Learning Language Modelling +2

End-to-End Simultaneous Speech Translation with Pretraining and Distillation: Huawei Noah’s System for AutoSimTranS 2022

no code implementations NAACL (AutoSimTrans) 2022 Xingshan Zeng, Pengfei Li, Liangyou Li, Qun Liu

This paper describes the system submitted to AutoSimTrans 2022 from Huawei Noah’s Ark Lab, which won first place in the audio input track of the Chinese-English translation task.

Knowledge Distillation NMT +1

ClusterFormer: Neural Clustering Attention for Efficient and Effective Transformer

no code implementations ACL 2022 Ningning Wang, Guobing Gan, Peng Zhang, Shuai Zhang, Junqiu Wei, Qun Liu, Xin Jiang

Other sparse methods use clustering patterns to select words, but the clustering process is separate from the training process of the target task, which causes a decrease in effectiveness.

Machine Translation Natural Language Inference +3

Huawei AARC’s Submissions to the WMT21 Biomedical Translation Task: Domain Adaption from a Practical Perspective

no code implementations WMT (EMNLP) 2021 Weixuan Wang, Wei Peng, Xupeng Meng, Qun Liu

This paper describes Huawei Artificial Intelligence Application Research Center’s neural machine translation systems and submissions to the WMT21 biomedical translation shared task.

Domain Adaptation Machine Translation +1

Multilingual Speech Translation with Unified Transformer: Huawei Noah’s Ark Lab at IWSLT 2021

no code implementations ACL (IWSLT) 2021 Xingshan Zeng, Liangyou Li, Qun Liu

We use a unified transformer architecture for our MultiST model, so that the data from different modalities (i.e., speech and text) and different tasks (i.e., Speech Recognition, Machine Translation, and Speech Translation) can be exploited to enhance the model’s ability.

Data Augmentation Machine Translation +4

Chinese WPLC: A Chinese Dataset for Evaluating Pretrained Language Models on Word Prediction Given Long-Range Context

no code implementations EMNLP 2021 Huibin Ge, Chenxi Sun, Deyi Xiong, Qun Liu

Experiment results show that the Chinese pretrained language model PanGu-α is 45 points behind humans in terms of top-1 word prediction accuracy, indicating that Chinese WPLC is a challenging dataset.

Language Modelling Pretrained Language Models

Self-Supervised Quality Estimation for Machine Translation

no code implementations EMNLP 2021 Yuanhang Zheng, Zhixing Tan, Meng Zhang, Mieradilijiang Maimaiti, Huanbo Luan, Maosong Sun, Qun Liu, Yang Liu

Quality estimation (QE) of machine translation (MT) aims to evaluate the quality of machine-translated sentences without references and is important in practical applications of MT.

Machine Translation Translation

Neural Machine Translation with Heterogeneous Topic Knowledge Embeddings

1 code implementation EMNLP 2021 Weixuan Wang, Wei Peng, Meng Zhang, Qun Liu

Neural Machine Translation (NMT) has shown a strong ability to utilize local context to disambiguate the meaning of words.

Machine Translation NMT +2

WL-Align: Weisfeiler-Lehman Relabeling for Aligning Users across Networks via Regularized Representation Learning

1 code implementation29 Dec 2022 Li Liu, Penggang Chen, Xin Li, William K. Cheung, Youmin Zhang, Qun Liu, Guoyin Wang

Aligning users across networks using graph representation learning has been found effective where the alignment is accomplished in a low-dimensional embedding space.

Graph Representation Learning

Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding

no code implementations19 Dec 2022 Haoli Bai, Zhiguang Liu, Xiaojun Meng, Wentao Li, Shuang Liu, Nian Xie, Rongfu Zheng, Liangwei Wang, Lu Hou, Jiansheng Wei, Xin Jiang, Qun Liu

While various vision-language pre-training objectives are studied in existing solutions, the document textline, as an intrinsic granularity in VDU, has seldom been explored so far.

Contrastive Learning Optical Character Recognition +1

AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation

no code implementations17 Dec 2022 Xingshan Zeng, Liangyou Li, Qun Liu

To alleviate the data scarcity problem in end-to-end speech translation (ST), pre-training on data for speech recognition and machine translation is considered an important technique.

Machine Translation speech-recognition +2

Retrieval-based Disentanglement with Distant Supervision

no code implementations15 Dec 2022 Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Lei Chen

Disentangled representation learning remains challenging as ground truth factors of variation do not naturally exist.

Cross-Modal Retrieval Disentanglement +2

G-MAP: General Memory-Augmented Pre-trained Language Model for Domain Tasks

no code implementations7 Dec 2022 Zhongwei Wan, Yichun Yin, Wei zhang, Jiaxin Shi, Lifeng Shang, Guangyong Chen, Xin Jiang, Qun Liu

Recently, domain-specific PLMs have been proposed to boost the task performance of specific domains (e.g., biomedical and computer science) by continuing to pre-train general PLMs with domain-specific corpora.

General Knowledge Language Modelling +3

SongRewriter: A Chinese Song Rewriting System with Controllable Content and Rhyme Scheme

no code implementations28 Nov 2022 Yusen Sun, Liangyou Li, Qun Liu, Dit-yan Yeung

Although lyrics generation has achieved significant progress in recent years, it has limited practical applications because the generated lyrics cannot be performed without composing compatible melodies.

Lexicon-injected Semantic Parsing for Task-Oriented Dialog

no code implementations26 Nov 2022 Xiaojun Meng, Wenlin Dai, Yasheng Wang, Baojun Wang, Zhiyong Wu, Xin Jiang, Qun Liu

Then we present a novel lexicon-injected semantic parser, which collects slot labels of the tree representation as a lexicon and injects lexical features into the span representation of the parser.

Semantic Parsing

FPT: Improving Prompt Tuning Efficiency via Progressive Training

1 code implementation13 Nov 2022 Yufei Huang, Yujia Qin, Huadong Wang, Yichun Yin, Maosong Sun, Zhiyuan Liu, Qun Liu

Inspired by these observations, we propose Fast Prompt Tuning (FPT), which starts by conducting PT using a small-scale partial PLM, and then progressively expands its depth and width until it reaches the full model size.

Pre-training Language Models with Deterministic Factual Knowledge

no code implementations20 Oct 2022 Shaobo Li, Xiaoguang Li, Lifeng Shang, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu

Further experiments on question-answering datasets show that trying to learn a deterministic relationship with the proposed methods can also help other knowledge-intensive tasks.

Knowledge Probing Question Answering

Making a MIRACL: Multilingual Information Retrieval Across a Continuum of Languages

1 code implementation18 Oct 2022 Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, Jimmy Lin

MIRACL (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual dataset we have built for the WSDM 2023 Cup challenge that focuses on ad hoc retrieval across 18 different languages, which collectively encompass over three billion native speakers around the world.

Information Retrieval Retrieval

ShortcutLens: A Visual Analytics Approach for Exploring Shortcuts in Natural Language Understanding Dataset

no code implementations17 Aug 2022 Zhihua Jin, Xingbo Wang, Furui Cheng, Chunhui Sun, Qun Liu, Huamin Qu

Since shortcuts vary in coverage, productivity, and semantic meaning, it is challenging for NLU experts to systematically understand and avoid them when creating benchmark datasets.

Natural Language Understanding

PanGu-Coder: Program Synthesis with Function-Level Language Modeling

no code implementations22 Jul 2022 Fenia Christopoulou, Gerasimos Lampouras, Milan Gritta, Guchun Zhang, Yinpeng Guo, Zhongqi Li, Qi Zhang, Meng Xiao, Bo Shen, Lin Li, Hao Yu, Li Yan, Pingyi Zhou, Xin Wang, Yuchi Ma, Ignacio Iacobacci, Yasheng Wang, Guangtai Liang, Jiansheng Wei, Xin Jiang, Qianxiang Wang, Qun Liu

We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming language solutions given a natural language problem description.

Code Generation Language Modelling +2

FreeTransfer-X: Safe and Label-Free Cross-Lingual Transfer from Off-the-Shelf Models

no code implementations Findings (NAACL) 2022 Yinpeng Guo, Liangyou Li, Xin Jiang, Qun Liu

However, labeled cross-lingual corpora are expensive or even inaccessible, especially in fields where labels are private, such as diagnostic results of symptoms in medicine and user profiles in business.

Cross-Lingual Transfer Knowledge Distillation +3

PERT: A New Solution to Pinyin to Character Conversion Task

1 code implementation24 May 2022 Jinghui Xiao, Qun Liu, Xin Jiang, Yuanfeng Xiong, Haiteng Wu, Zhe Zhang

The Pinyin-to-Character conversion (P2C) task is the key task of the Input Method Engine (IME) in commercial input software for Asian languages such as Chinese, Japanese, and Thai.

Language Modelling

Exploring Extreme Parameter Compression for Pre-trained Language Models

1 code implementation ICLR 2022 Yuxin Ren, Benyou Wang, Lifeng Shang, Xin Jiang, Qun Liu

A tiny version achieves $96.7\%$ of BERT-base performance with ${1}/{48}$ of the encoder parameters (i.e., fewer than 2M parameters excluding the embedding layer) and is $2.7\times$ faster on inference.

Knowledge Distillation Tensor Decomposition

UTC: A Unified Transformer with Inter-Task Contrastive Learning for Visual Dialog

no code implementations CVPR 2022 Cheng Chen, Yudong Zhu, Zhenshan Tan, Qingrong Cheng, Xin Jiang, Qun Liu, Xiaodong Gu

In this paper, we propose a contrastive learning-based framework UTC to unify and facilitate both discriminative and generative tasks in visual dialog with a single model.

Contrastive Learning Representation Learning +1

How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis

no code implementations Findings (ACL) 2022 Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu

We check the words that have three typical associations with the missing words: knowledge-dependent, positionally close, and highly co-occurring.

Compression of Generative Pre-trained Language Models via Quantization

no code implementations ACL 2022 Chaofan Tao, Lu Hou, Wei zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong

We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity, and the varied distribution of weights.

Model Compression Quantization +1

Triangular Transfer: Freezing the Pivot for Triangular Machine Translation

no code implementations ACL 2022 Meng Zhang, Liangyou Li, Qun Liu

Triangular machine translation is a special case of low-resource machine translation where the language pair of interest has limited parallel data, but both languages have abundant parallel data with a pivot language.

Language Modelling Machine Translation +2

Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering

1 code implementation ACL 2022 Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Lan Luo, Ke Zhan, Enrui Hu, Xinyu Zhang, Hao Jiang, Zhao Cao, Fan Yu, Xin Jiang, Qun Liu, Lei Chen

To alleviate the data scarcity problem in training question answering systems, recent works propose additional intermediate pre-training for dense passage retrieval (DPR).

Open-Domain Question Answering Passage Retrieval +1

Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation

no code implementations Findings (ACL) 2022 Wenliang Dai, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Pascale Fung

Furthermore, the original textual language understanding and generation ability of the PLM is maintained after VLKD, which makes our model versatile for both multimodal and unimodal tasks.

Image Captioning Knowledge Distillation +4

Achieving Reliable Human Assessment of Open-Domain Dialogue Systems

1 code implementation ACL 2022 Tianbo Ji, Yvette Graham, Gareth J. F. Jones, Chenyang Lyu, Qun Liu

Answering the distress call of competitions that have emphasized the urgent need for better evaluation techniques in dialogue, we present the successful development of human evaluation that is highly reliable while still remaining feasible and low cost.

Dialogue Evaluation

Compilable Neural Code Generation with Compiler Feedback

no code implementations Findings (ACL) 2022 Xin Wang, Yasheng Wang, Yao Wan, Fei Mi, Yitong Li, Pingyi Zhou, Jin Liu, Hao Wu, Xin Jiang, Qun Liu

Automatically generating compilable programs with (or without) natural language descriptions has always been a touchstone problem for computational linguistics and automated software engineering.

Code Completion Code Generation +3

HyperPELT: Unified Parameter-Efficient Language Model Tuning for Both Language and Vision-and-Language Tasks

no code implementations8 Mar 2022 Zhengkun Zhang, Wenya Guo, Xiaojun Meng, Yasheng Wang, Yadao Wang, Xin Jiang, Qun Liu, Zhenglu Yang

In this paper, we design a novel unified parameter-efficient transfer learning framework that works effectively on both pure language and V&L tasks.

Language Modelling Multi-Task Learning

Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks

1 code implementation16 Feb 2022 Jingyan Zhou, Jiawen Deng, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, Helen Meng

The research of open-domain dialog systems has been greatly prospered by neural models trained on large-scale corpora; however, such corpora often introduce various safety problems (e.g., offensive language, biases, and toxic behaviors) that significantly hinder the deployment of dialog systems in practice.

Bias Detection Open-Domain Dialog

Pan More Gold from the Sand: Refining Open-domain Dialogue Training with Noisy Self-Retrieval Generation

no code implementations COLING 2022 Yihe Wang, Yitong Li, Yasheng Wang, Fei Mi, Pingyi Zhou, Xin Wang, Jin Liu, Xin Jiang, Qun Liu

Experiments over publicly available datasets demonstrate that our method can help models generate better responses, even when such training data would usually be regarded as low quality.

Dialogue Generation Retrieval

JABER and SABER: Junior and Senior Arabic BERt

1 code implementation8 Dec 2021 Abbas Ghaddar, Yimeng Wu, Ahmad Rashid, Khalil Bibi, Mehdi Rezagholizadeh, Chao Xing, Yasheng Wang, Duan Xinyu, Zhefeng Wang, Baoxing Huai, Xin Jiang, Qun Liu, Philippe Langlais

Language-specific pre-trained models have proven to be more accurate than multilingual ones in a monolingual evaluation setting; Arabic is no exception.

Language Modelling NER

bert2BERT: Towards Reusable Pretrained Language Models

no code implementations ACL 2022 Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, Qun Liu

However, large language model pre-training requires intensive computational resources, and most models are trained from scratch without reusing the existing pre-trained models, which is wasteful.

Language Modelling Pretrained Language Models

Speech-MLP: a simple MLP architecture for speech processing

no code implementations29 Sep 2021 Chao Xing, Dong Wang, LiRong Dai, Qun Liu, Anderson Avila

Overparameterized transformer-based architectures have shown remarkable performance in recent years, achieving state-of-the-art results in speech processing tasks such as speech recognition, speech synthesis, keyword spotting, and speech enhancement.

Keyword Spotting Speech Enhancement +3

Multi-Semantic Image Recognition Model and Evaluating Index for explaining the deep learning models

no code implementations28 Sep 2021 Qianmengke Zhao, Ye Wang, Qun Liu

Although deep learning models are powerful in various applications, most deep learning models remain black boxes, lacking verifiability and interpretability, meaning their decision-making processes cannot be understood by human beings.

Decision Making Image Classification

Improving Unsupervised Question Answering via Summarization-Informed Question Generation

no code implementations EMNLP 2021 Chenyang Lyu, Lifeng Shang, Yvette Graham, Jennifer Foster, Xin Jiang, Qun Liu

Template-based QG uses linguistically-informed heuristics to transform declarative sentences into interrogatives, whereas supervised QG uses existing Question Answering (QA) datasets to train a system to generate a question given a passage and an answer.

Dependency Parsing named-entity-recognition +7

UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation

no code implementations13 Sep 2021 Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang

Specifically, we adopt knowledge distillation from a vision-language pretrained model to improve image selection, which avoids any requirement on the existence and quality of image captions.

Abstractive Text Summarization Image Captioning +2

CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems

no code implementations10 Sep 2021 Fei Mi, Yitong Li, Yasheng Wang, Xin Jiang, Qun Liu

As labeling cost for different modules in task-oriented dialog (ToD) systems is high, a major challenge in practice is to learn different tasks with the least amount of labeled data.

dialog state tracking Few-Shot Learning +3

NumGPT: Improving Numeracy Ability of Generative Pre-trained Models

no code implementations7 Sep 2021 Zhihua Jin, Xin Jiang, Xingbo Wang, Qun Liu, Yong Wang, Xiaozhe Ren, Huamin Qu

However, those models do not consider the numerical properties of numbers and cannot perform robustly on numerical reasoning tasks (e.g., math word problems and measurement estimation).

Uncertainty-Aware Balancing for Multilingual and Multi-Domain Neural Machine Translation Training

no code implementations EMNLP 2021 Minghao Wu, Yitong Li, Meng Zhang, Liangyou Li, Gholamreza Haffari, Qun Liu

In this work, we propose an approach, MultiUAT, that dynamically adjusts the training data usage based on the model's uncertainty on a small set of trusted clean data for multi-corpus machine translation.

Machine Translation Translation
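
The excerpt above describes adjusting training-data usage by the model's uncertainty on trusted clean data. A minimal sketch of the uncertainty-proportional sampling idea (the softmax form, temperature, and function names are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def corpus_sampling_weights(uncertainties, temperature=1.0):
    """Turn per-corpus uncertainty scores into sampling probabilities.

    Corpora on which the model is more uncertain (e.g., higher average
    token entropy measured on a small trusted clean subset) are sampled
    more often during multi-corpus training.
    """
    u = np.asarray(uncertainties, dtype=float) / temperature
    w = np.exp(u - u.max())     # numerically stable softmax
    return w / w.sum()

# Three corpora: the model is most uncertain on the second one,
# so it receives the largest share of the training-data budget.
probs = corpus_sampling_weights([0.2, 1.5, 0.7])
```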

TGEA: An Error-Annotated Dataset and Benchmark Tasks for TextGeneration from Pretrained Language Models

no code implementations ACL 2021 Jie He, Bo Peng, Yi Liao, Qun Liu, Deyi Xiong

Each error is hence manually labeled with comprehensive annotations, including the span of the error, the associated span, minimal correction to the error, the type of the error, and rationale behind the error.

Common Sense Reasoning Pretrained Language Models +1

GhostBERT: Generate More Features with Cheap Operations for BERT

no code implementations ACL 2021 Zhiqi Huang, Lu Hou, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

Transformer-based pre-trained language models like BERT, though powerful in many tasks, are expensive in both memory and computation, due to their large number of parameters.

AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models

1 code implementation ACL 2021 Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

Specifically, we carefully design the techniques of one-shot learning and the search space to provide an adaptive and efficient way of developing tiny PLMs for various latency constraints.

Neural Architecture Search One-Shot Learning

A Mutual Information Maximization Approach for the Spurious Solution Problem in Weakly Supervised Question Answering

1 code implementation ACL 2021 Zhihong Shao, Lifeng Shang, Qun Liu, Minlie Huang

This setting gives rise to the spurious solution problem: there may exist many spurious solutions that coincidentally derive the correct answer, but training on such solutions can hurt model performance (e.g., producing wrong solutions or answers).

Question Answering

Learning Multilingual Representation for Natural Language Understanding with Enhanced Cross-Lingual Supervision

no code implementations9 Jun 2021 Yinpeng Guo, Liangyou Li, Xin Jiang, Qun Liu

Recently, pre-training multilingual language models has shown great potential in learning multilingual representation, a crucial topic of natural language processing.

Natural Language Understanding

RealTranS: End-to-End Simultaneous Speech Translation with Convolutional Weighted-Shrinking Transformer

no code implementations Findings (ACL) 2021 Xingshan Zeng, Liangyou Li, Qun Liu

To bridge the modality gap between speech and text, RealTranS gradually downsamples the input speech with interleaved convolution and unidirectional Transformer layers for acoustic modeling, and then maps speech features into text space with a weighted-shrinking operation and a semantic encoder.

Translation

Sub-Character Tokenization for Chinese Pretrained Language Models

no code implementations1 Jun 2021 Chenglei Si, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

2) Pronunciation-based SubChar tokenizers can encode Chinese homophones into the same transliteration sequences and produce the same tokenization output, hence being robust to all homophone typos.

Chinese Word Segmentation Language Modelling +2
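
The homophone-robustness property above can be shown with a toy pronunciation-based tokenizer: characters are first transliterated to pinyin, so homophones map to identical token sequences. The tiny lookup table and function name are illustrative assumptions (a real SubChar tokenizer operates over learned subword units of the transliteration):

```python
# 他 / 她 / 它 are all pronounced "ta", so a pronunciation-based
# tokenizer produces the same output for any of them -- which is
# exactly why homophone typos do not change the tokenization.
PINYIN = {"他": "ta", "她": "ta", "它": "ta", "们": "men"}

def pinyin_tokenize(text):
    """Map each character to its pinyin transliteration."""
    return [PINYIN[ch] for ch in text]

a = pinyin_tokenize("他们")   # "they" (masculine)
b = pinyin_tokenize("她们")   # "they" (feminine) -- a homophone variant
```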

Multilingual Speech Translation with Unified Transformer: Huawei Noah's Ark Lab at IWSLT 2021

no code implementations1 Jun 2021 Xingshan Zeng, Liangyou Li, Qun Liu

We use a unified transformer architecture for our MultiST model, so that the data from different modalities (i. e., speech and text) and different tasks (i. e., Speech Recognition, Machine Translation, and Speech Translation) can be exploited to enhance the model's ability.

Data Augmentation Machine Translation +4

Improved OOD Generalization via Adversarial Training and Pre-training

no code implementations24 May 2021 Mingyang Yi, Lu Hou, Jiacheng Sun, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma

In this paper, after defining OOD generalization via Wasserstein distance, we theoretically show that a model robust to input perturbation generalizes well on OOD data.

Image Classification Natural Language Understanding

Dynamic Multi-Branch Layers for On-Device Neural Machine Translation

1 code implementation14 May 2021 Zhixing Tan, Zeyuan Yang, Meng Zhang, Qun Liu, Maosong Sun, Yang Liu

With the rapid development of artificial intelligence (AI), there is a trend in moving AI applications, such as neural machine translation (NMT), from cloud to mobile devices.

Machine Translation NMT +1

Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation

no code implementations24 Apr 2021 Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu

Task-agnostic knowledge distillation, a teacher-student framework, has proven effective for BERT compression.

Knowledge Distillation

From Fully Trained to Fully Random Embeddings: Improving Neural Machine Translation with Compact Word Embedding Tables

no code implementations18 Apr 2021 Krtin Kumar, Peyman Passban, Mehdi Rezagholizadeh, Yiu Sing Lau, Qun Liu

Embedding matrices are key components in neural natural language processing (NLP) models, responsible for providing numerical representations of input tokens. (In this paper, words and subwords are referred to as tokens, and the term embedding refers only to embeddings of inputs.)

Machine Translation NMT +2

An Approach to Improve Robustness of NLP Systems against ASR Errors

no code implementations25 Mar 2021 Tong Cui, Jinghui Xiao, Liangyou Li, Xin Jiang, Qun Liu

Speech-enabled systems typically first convert audio to text through an automatic speech recognition (ASR) model and then feed the text to downstream natural language processing (NLP) modules.

Automatic Speech Recognition Data Augmentation +4

Dependency Graph-to-String Statistical Machine Translation

no code implementations20 Mar 2021 Liangyou Li, Andy Way, Qun Liu

We present graph-based translation models which translate source graphs into target strings.

Machine Translation Translation

Reweighting Augmented Samples by Minimizing the Maximal Expected Loss

no code implementations ICLR 2021 Mingyang Yi, Lu Hou, Lifeng Shang, Xin Jiang, Qun Liu, Zhi-Ming Ma

Inspired by adversarial training, we minimize this maximal expected loss (MMEL) and obtain a simple and interpretable closed-form solution: more attention should be paid to augmented samples with large loss values (i.e., harder examples).

Image Augmentation Image Classification +1
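
The closed-form reweighting described above can be sketched as a softmax over per-sample losses, so that harder augmented samples receive larger weights. A minimal sketch; the temperature parameter and function names are assumptions for illustration:

```python
import numpy as np

def mmel_weights(losses, temperature=1.0):
    """Softmax over per-sample losses: augmented samples with larger
    loss (harder examples) get more attention in the weighted update."""
    l = np.asarray(losses, dtype=float) / temperature
    w = np.exp(l - l.max())     # numerically stable softmax
    return w / w.sum()

def weighted_loss(losses, temperature=1.0):
    """Loss-weighted average, which upweights hard augmented views."""
    w = mmel_weights(losses, temperature)
    return float(np.dot(w, losses))

losses = [0.1, 0.5, 2.0]        # three augmented views of one example
w = mmel_weights(losses)
```

Because the weighting is monotone in the loss, the weighted objective always sits above the plain average whenever the per-view losses differ.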

LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation

no code implementations11 Mar 2021 Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

The multilingual pre-trained language models (e.g., mBERT, XLM, and XLM-R) have shown impressive performance on cross-lingual natural language understanding tasks.

Natural Language Understanding XLM-R

Training Multilingual Pre-trained Language Model with Byte-level Subwords

1 code implementation23 Jan 2021 Junqiu Wei, Qun Liu, Yinpeng Guo, Xin Jiang

The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.

Language Modelling Natural Language Understanding

On Position Embeddings in BERT

no code implementations ICLR 2021 Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Grue Simonsen

Various Position Embeddings (PEs) have been proposed in Transformer-based architectures (e.g., BERT) to model word order.

General Classification Translation

Revisiting Robust Neural Machine Translation: A Transformer Case Study

no code implementations Findings (EMNLP) 2021 Peyman Passban, Puneeth S. M. Saladi, Qun Liu

There is a large body of work in the NMT literature analyzing the behavior of conventional models for the problem of noise, but Transformers are relatively understudied in this context.

Denoising Machine Translation +2

Better Robustness by More Coverage: Adversarial Training with Mixup Augmentation for Robust Fine-tuning

1 code implementation31 Dec 2020 Chenglei Si, Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

In this work, we propose a simple and effective method to cover a much larger proportion of the attack search space, called Adversarial and Mixup Data Augmentation (AMDA).

Adversarial Robustness Pretrained Language Models +3
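
The mixup half of the augmentation above interpolates pairs of training examples in representation space. A minimal sketch of that step (the Beta-distribution parameter and the application to one-hot labels follow the standard mixup recipe; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.4):
    """Interpolate a pair of (embedding, one-hot label) examples.
    In AMDA-style training this is applied alongside adversarial
    examples to cover more of the attack search space."""
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

x_a, y_a = np.ones(4), np.array([1.0, 0.0])
x_b, y_b = np.zeros(4), np.array([0.0, 1.0])
x_mix, y_mix, lam = mixup(x_a, y_a, x_b, y_b)
```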

ALP-KD: Attention-Based Layer Projection for Knowledge Distillation

no code implementations27 Dec 2020 Peyman Passban, Yimeng Wu, Mehdi Rezagholizadeh, Qun Liu

Knowledge distillation is considered a training and compression strategy in which two neural networks, namely a teacher and a student, are coupled together during training.

Knowledge Distillation
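
The teacher-student coupling above rests on the standard soft-label distillation loss; ALP-KD's actual contribution is an attention-based projection across layers, so this is only a sketch of the generic loss it builds on (temperature value and names are assumptions):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Soft-label distillation: KL(teacher || student) on
    temperature-softened distributions, scaled by T^2 as in the
    standard Hinton-style formulation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

loss_same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
loss_diff = distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
```

The loss vanishes when the student reproduces the teacher's distribution exactly and grows as the two diverge.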

Improving Task-Agnostic BERT Distillation with Layer Mapping Search

no code implementations11 Dec 2020 Xiaoqi Jiao, Huating Chang, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

Comprehensive experiments on the evaluation benchmarks demonstrate that 1) layer mapping strategy has a significant effect on task-agnostic BERT distillation and different layer mappings can result in quite different performances; 2) the optimal layer mapping strategy from the proposed search process consistently outperforms the other heuristic ones; 3) with the optimal layer mapping, our student model achieves state-of-the-art performance on the GLUE tasks.

Knowledge Distillation

PPKE: Knowledge Representation Learning by Path-based Pre-training

no code implementations7 Dec 2020 Bin He, Di Zhou, Jing Xie, Jinghui Xiao, Xin Jiang, Qun Liu

Entities may have complex interactions in a knowledge graph (KG), such as multi-step relationships, which can be viewed as graph contextual information of the entities.

Link Prediction Representation Learning

KgPLM: Knowledge-guided Language Model Pre-training via Generative and Discriminative Learning

no code implementations7 Dec 2020 Bin He, Xin Jiang, Jinghui Xiao, Qun Liu

Recent studies on pre-trained language models have demonstrated their ability to capture factual knowledge and applications in knowledge-aware downstream tasks.

Language Modelling Machine Reading Comprehension +2

Document Graph for Neural Machine Translation

no code implementations EMNLP 2021 Mingzhou Xu, Liangyou Li, Derek F. Wong, Qun Liu, Lidia S. Chao

Previous works have shown that contextual information can improve the performance of neural machine translation (NMT).

Machine Translation NMT +1

From Unsupervised Machine Translation To Adversarial Text Generation

no code implementations10 Nov 2020 Ahmad Rashid, Alan Do-Omri, Md. Akmal Haidar, Qun Liu, Mehdi Rezagholizadeh

B-GAN is able to generate a distributed latent space representation which can be paired with an attention based decoder to generate fluent sentences.

Adversarial Text Text Generation +2

Know What You Don't Need: Single-Shot Meta-Pruning for Attention Heads

no code implementations7 Nov 2020 Zhengyan Zhang, Fanchao Qi, Zhiyuan Liu, Qun Liu, Maosong Sun

To measure the informativeness of attention heads, we train our Single-Shot Meta-Pruner (SMP) with a meta-learning paradigm aiming to maintain the distribution of text representations after pruning.

Informativeness Meta-Learning +1
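
The pruning step described above can be sketched as ranking heads by an informativeness score and keeping the top fraction. A minimal sketch; in the paper the scores come from the meta-learned pruner itself, whereas here they are simply given numbers:

```python
import numpy as np

def prune_heads(head_scores, keep_ratio=0.5):
    """Return a binary keep-mask over attention heads: the highest-
    scoring heads survive, the rest are pruned in a single shot."""
    scores = np.asarray(head_scores, dtype=float)
    k = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[::-1][:k]   # indices of the top-k heads
    mask = np.zeros(len(scores), dtype=int)
    mask[keep] = 1
    return mask

# Four heads with assumed informativeness scores; keep the top half.
mask = prune_heads([0.9, 0.1, 0.6, 0.3], keep_ratio=0.5)
```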

The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation

1 code implementation Findings of the Association for Computational Linguistics 2020 Jie He, Tao Wang, Deyi Xiong, Qun Liu

Our experiments and analyses demonstrate that neural machine translation performs poorly on commonsense reasoning of the three ambiguity types, in terms of both reasoning accuracy (≤ 60.1%) and reasoning consistency (≤ 31%).

Common Sense Reasoning Machine Translation +1

SparTerm: Learning Term-based Sparse Representation for Fast Text Retrieval

no code implementations2 Oct 2020 Yang Bai, Xiaoguang Li, Gang Wang, Chaoliang Zhang, Lifeng Shang, Jun Xu, Zhaowei Wang, Fangshan Wang, Qun Liu

Term-based sparse representations dominate first-stage text retrieval in industrial applications, due to their advantages in efficiency, interpretability, and exact term matching.

Language Modelling Retrieval +1

TernaryBERT: Distillation-aware Ultra-low Bit BERT

2 code implementations EMNLP 2020 Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, Qun Liu

Transformer-based pre-training models like BERT have achieved remarkable performance in many natural language processing tasks. However, these models are expensive in both computation and memory, hindering their deployment on resource-constrained devices.

Knowledge Distillation Quantization
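Ternarizing a weight tensor, as the title suggests, can be sketched with the common threshold heuristic (delta = 0.7 · mean|W|). This is a generic ternary-weight sketch under that assumption, not TernaryBERT's distillation-aware training procedure.

```python
import numpy as np

def ternarize(w):
    """TWN-style ternarization: w is approximated as alpha * t, t in {-1, 0, +1}."""
    delta = 0.7 * np.abs(w).mean()  # common threshold heuristic
    t = np.where(np.abs(w) > delta, np.sign(w), 0.0)
    mask = t != 0
    # scale alpha = mean magnitude of the surviving weights
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha, t

w = np.array([0.9, -0.8, 0.05, -0.02, 0.6, -0.55])
alpha, t = ternarize(w)
print(sorted(set(t.tolist())))  # → [-1.0, 0.0, 1.0]
```

A ternary weight can be stored in 2 bits, which is where the memory savings over full-precision BERT come from; TernaryBERT additionally distills from a full-precision teacher to recover accuracy.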

TensorCoder: Dimension-Wise Attention via Tensor Representation for Natural Language Modeling

no code implementations28 Jul 2020 Shuai Zhang, Peng Zhang, Xindian Ma, Junqiu Wei, Ningning Wang, Qun Liu

Transformer has been widely used in many Natural Language Processing (NLP) tasks, and the scaled dot-product attention between tokens is a core module of Transformer.

Language Modelling Machine Translation +2
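The token-wise scaled dot-product attention that TensorCoder starts from can be rendered in a few lines of NumPy. This is the standard baseline the paper modifies, not the proposed dimension-wise attention.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Standard token-wise attention: softmax(Q K^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(1)
q, k, v = (rng.normal(size=(5, 16)) for _ in range(3))
out, w = scaled_dot_product_attention(q, k, v)
print(out.shape)  # → (5, 16)
```

Its cost is quadratic in sequence length, which is the motivation for reshaping the computation along the feature dimension instead.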

Learning to Detect Unacceptable Machine Translations for Downstream Tasks

no code implementations8 May 2020 Meng Zhang, Xin Jiang, Yang Liu, Qun Liu

In this work, we put machine translation in a cross-lingual pipeline and introduce downstream tasks to define task-specific acceptability of machine translations.

Machine Translation Translation

Accurate Word Alignment Induction from Neural Machine Translation

no code implementations EMNLP 2020 Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, Qun Liu

Shift-Att is an interpretation method that induces alignments from the attention weights of Transformer and does not require parameter update or architecture change.

Machine Translation Multi-Task Learning +2
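The "shift" in Shift-Att can be sketched as reading the attention row one step later: a target token is aligned using the decoder step at which it is the input rather than the output. A toy illustration under that assumption (the matrix and indexing convention are invented for the example):

```python
import numpy as np

def shift_att_alignment(attn):
    """attn[i, j]: attention to source j when *generating* target i.
    Shift-Att reads the row where target i is the decoder *input*,
    i.e. row i + 1, so the final target token has no shifted row."""
    T = attn.shape[0]
    return {i: int(attn[i + 1].argmax()) for i in range(T - 1)}

attn = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.1, 0.7],
])
print(shift_att_alignment(attn))  # → {0: 1, 1: 2}
```

Because the alignment is read off a trained Transformer's attention weights, no parameter update or architecture change is needed, matching the snippet above.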

Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT

1 code implementation ACL 2020 Zhiyong Wu, Yun Chen, Ben Kao, Qun Liu

However, this approach of evaluating a language model is undermined by the uncertainty of the amount of knowledge that is learned by the probe itself.

Dependency Parsing Language Modelling +1
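Perturbed Masking's impact matrix can be sketched as: mask each token in turn and measure how much every other token's representation moves. The stand-in encoder below replaces BERT purely to keep the sketch runnable and self-contained; only the masking loop reflects the method (the paper's full procedure uses a second masking stage on the measured token as well).

```python
import numpy as np

def impact_matrix(tokens, encode, mask="[MASK]"):
    """F[i, j] = how much masking token j changes token i's representation."""
    n = len(tokens)
    base = encode(tokens)
    F = np.zeros((n, n))
    for j in range(n):
        perturbed = tokens[:j] + [mask] + tokens[j + 1:]
        F[:, j] = np.linalg.norm(encode(perturbed) - base, axis=-1)
    return F

def toy_encode(tokens):
    """Toy contextual encoder (not BERT): mixes each token with its neighbors."""
    vecs = np.array([[sum(map(ord, t)) % 97] for t in tokens], dtype=float)
    kernel = np.array([0.25, 0.5, 0.25])
    padded = np.vstack([vecs[:1], vecs, vecs[-1:]])
    return np.array([kernel @ padded[i:i + 3] for i in range(len(tokens))])

F = impact_matrix(["the", "cat", "sat"], toy_encode)
print(F.shape)  # → (3, 3)
```

Syntax trees are then decoded from the impact matrix (e.g. with a spanning-tree algorithm), which is what makes the probe parameter-free.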

DynaBERT: Dynamic BERT with Adaptive Width and Depth

3 code implementations NeurIPS 2020 Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

The pre-trained language models like BERT, though powerful in many natural language processing tasks, are both computation and memory expensive.

Language Modelling

Dictionary-based Data Augmentation for Cross-Domain Neural Machine Translation

no code implementations6 Apr 2020 Wei Peng, Chongxuan Huang, Tian-Hao Li, Yun Chen, Qun Liu

Existing data augmentation approaches for neural machine translation (NMT) have predominantly relied on back-translating in-domain (IND) monolingual corpora.

Data Augmentation Machine Translation +2

Context-Aware Design of Cyber-Physical Human Systems (CPHS)

no code implementations7 Jan 2020 Supratik Mukhopadhyay, Qun Liu, Edward Collier, Yimin Zhu, Ravindra Gudishala, Chanachok Chokwitthaya, Robert DiBiano, Alimire Nabijiang, Sanaz Saeidi, Subhajit Sidhanta, Arnab Ganguly

The impacts of context factors driving human system interaction are challenging and are difficult to capture and replicate in existing design models.

Decision Making

Multi-channel Reverse Dictionary Model

1 code implementation18 Dec 2019 Lei Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun

A reverse dictionary takes the description of a target word as input and outputs the target word together with other words that match the description.

Reverse Dictionary

Learning to Predict Explainable Plots for Neural Story Generation

no code implementations5 Dec 2019 Gang Chen, Yang Liu, Huanbo Luan, Meng Zhang, Qun Liu, Maosong Sun

While the use of neural networks has proven effective in improving story generation, how to learn to generate an explainable high-level plot still remains a major challenge.

Story Generation

Integrating Graph Contextualized Knowledge into Pre-trained Language Models

no code implementations30 Nov 2019 Bin He, Di Zhou, Jinghui Xiao, Xin Jiang, Qun Liu, Nicholas Jing Yuan, Tong Xu

Complex node interactions are common in knowledge graphs, and these interactions also contain rich knowledge information.

Knowledge Graphs Representation Learning

Deep-seismic-prior-based reconstruction of seismic data using convolutional neural networks

no code implementations20 Nov 2019 Qun Liu, Lihua Fu, Meng Zhang

Synthetic and field data were tested to assess the performance of the proposed algorithm (DSPRecon algorithm); the advantages of using our method were evaluated by comparing it with the singular spectrum analysis (SSA) method for irregular data reconstruction and de-aliased Cadzow method for regular data reconstruction.

Zero-Shot Paraphrase Generation with Multilingual Language Models

no code implementations9 Nov 2019 Yinpeng Guo, Yi Liao, Xin Jiang, Qing Zhang, Yibo Zhang, Qun Liu

Leveraging multilingual parallel texts to automatically generate paraphrases has drawn much attention, as the size of high-quality paraphrase corpora is limited.

Denoising Machine Translation +2

A General Framework for Adaptation of Neural Machine Translation to Simultaneous Translation

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 Yun Chen, Liangyou Li, Xin Jiang, Xiao Chen, Qun Liu

Despite the success of neural machine translation (NMT), simultaneous neural machine translation (SNMT), the task of translating in real time before a full sentence has been observed, remains challenging due to the syntactic structure difference and simultaneity requirements.

Machine Translation NMT +1

Pretrained Language Models for Document-Level Neural Machine Translation

no code implementations8 Nov 2019 Liangyou Li, Xin Jiang, Qun Liu

Previous work on document-level NMT usually focuses on limited contexts because of degraded performance on larger contexts.

Machine Translation NMT +2

Word-level Textual Adversarial Attacking as Combinatorial Optimization

1 code implementation ACL 2020 Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun

Also, further experiments show our model has higher transferability and can bring more robustness enhancement to victim models by adversarial training.

Adversarial Attack Combinatorial Optimization +3

Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes

1 code implementation20 Oct 2019 Yujia Qin, Fanchao Qi, Sicong Ouyang, Zhiyuan Liu, Cheng Yang, Yasheng Wang, Qun Liu, Maosong Sun

Sememes, the minimum semantic units of human languages, have been successfully utilized in various natural language processing applications.

Adversarial Attack Language Modelling +2

TinyBERT: Distilling BERT for Natural Language Understanding

6 code implementations Findings of the Association for Computational Linguistics 2020 Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel Transformer distillation method that is specially designed for knowledge distillation (KD) of the Transformer-based models.

Knowledge Distillation Language Modelling +6
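The Transformer-layer distillation mentioned above can be sketched as an MSE between teacher and (linearly projected) student hidden states plus an MSE between attention maps. Dimensions, the projection, and equal weighting here are illustrative assumptions, not TinyBERT's exact objective.

```python
import numpy as np

def layer_distill_loss(student_h, teacher_h, student_attn, teacher_attn, W):
    """Per-layer distillation: project the narrower student hidden states up to
    the teacher's width, then penalize hidden-state and attention mismatches."""
    hidden = np.mean((student_h @ W - teacher_h) ** 2)
    attn = np.mean((student_attn - teacher_attn) ** 2)
    return hidden + attn

rng = np.random.default_rng(2)
s_h = rng.normal(size=(8, 32))   # student hidden states (d' = 32)
t_h = rng.normal(size=(8, 64))   # teacher hidden states (d = 64)
W = rng.normal(size=(32, 64))    # learnable up-projection
s_a = rng.normal(size=(4, 8, 8)) # student attention (4 heads)
t_a = rng.normal(size=(4, 8, 8)) # teacher attention
print(layer_distill_loss(s_h, t_h, s_a, t_a, W) > 0)  # → True
```

TinyBERT applies such losses in a two-stage framework (general-domain pre-training distillation, then task-specific distillation), which is how it keeps accuracy while shrinking the model.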

NEZHA: Neural Contextualized Representation for Chinese Language Understanding

6 code implementations31 Aug 2019 Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, Qun Liu

Pre-trained language models have achieved great success in various natural language understanding (NLU) tasks due to their capacity to capture deep contextualized information in text by pre-training on large-scale corpora.

named-entity-recognition Natural Language Inference +3

Dialog State Tracking with Reinforced Data Augmentation

no code implementations21 Aug 2019 Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

Neural dialog state trackers are generally limited due to the lack of quantity and diversity of annotated training data.

Data Augmentation dialog state tracking

Huawei's NMT Systems for the WMT 2019 Biomedical Translation Task

no code implementations WS 2019 Wei Peng, Jianfeng Liu, Liangyou Li, Qun Liu

This paper describes Huawei's neural machine translation systems for the WMT 2019 biomedical translation shared task.

Domain Adaptation Machine Translation +2

Modeling Semantic Compositionality with Sememe Knowledge

1 code implementation ACL 2019 Fanchao Qi, Jun-Jie Huang, Chenghao Yang, Zhiyuan Liu, Xiao Chen, Qun Liu, Maosong Sun

In this paper, we verify the effectiveness of sememes, the minimum semantic units of human languages, in modeling SC by a confirmatory experiment.

multi-word expression embedding multi-word expression sememe prediction

GPT-based Generation for Classical Chinese Poetry

1 code implementation29 Jun 2019 Yi Liao, Yasheng Wang, Qun Liu, Xin Jiang

We present a simple yet effective method for generating high quality classical Chinese poetry with Generative Pre-trained Language Model (GPT).

Language Modelling

Decomposable Neural Paraphrase Generation

no code implementations ACL 2019 Zichao Li, Xin Jiang, Lifeng Shang, Qun Liu

Paraphrasing exists at different granularity levels, such as lexical level, phrasal level and sentential level.

Paraphrase Generation Unsupervised Domain Adaptation

Bridging the Gap between Training and Inference for Neural Machine Translation

no code implementations ACL 2019 Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu

Neural Machine Translation (NMT) generates target words sequentially in the way of predicting the next word conditioned on the context words.

Machine Translation NMT +1

ERNIE: Enhanced Language Representation with Informative Entities

1 code implementation ACL 2019 Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu

Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks.

Entity Linking Entity Typing +5

Bilingual-GAN: A Step Towards Parallel Text Generation

no code implementations WS 2019 Ahmad Rashid, Alan Do-Omri, Md. Akmal Haidar, Qun Liu, Mehdi Rezagholizadeh

Latent space based GAN methods and attention based sequence to sequence models have achieved impressive results in text generation and unsupervised machine translation respectively.

Denoising Text Generation +2

Improving Domain Adaptation Translation with Domain Invariant and Specific Information

no code implementations NAACL 2019 Shuhao Gu, Yang Feng, Qun Liu

Besides, we add a discriminator to the shared encoder and employ adversarial training for the whole model to reinforce the performance of information separation and machine translation simultaneously.

Domain Adaptation Machine Translation +1

Improving the Robustness of Speech Translation

no code implementations2 Nov 2018 Xiang Li, Haiyang Xue, Wei Chen, Yang Liu, Yang Feng, Qun Liu

Although neural machine translation (NMT) has achieved impressive progress recently, it is usually trained on the clean parallel data set and hence cannot work well when the input sentence is the production of the automatic speech recognition (ASR) system due to the enormous errors in the source.

Automatic Speech Recognition Machine Translation +3

Learning to Jointly Translate and Predict Dropped Pronouns with a Shared Reconstruction Mechanism

no code implementations EMNLP 2018 Long-Yue Wang, Zhaopeng Tu, Andy Way, Qun Liu

Pronouns are frequently omitted in pro-drop languages, such as Chinese, generally leading to significant challenges with respect to the production of complete translations.

Machine Translation Translation

Tailoring Neural Architectures for Translating from Morphologically Rich Languages

no code implementations COLING 2018 Peyman Passban, Andy Way, Qun Liu

A morphologically complex word (MCW) is a hierarchical constituent with meaning-preserving subunits, so word-based models which rely on surface forms might not be powerful enough to translate such structures.

Machine Translation NMT +1

Knowledge Diffusion for Neural Dialogue Generation

1 code implementation ACL 2018 Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, Dawei Yin

Our empirical study on a real-world dataset proves that our model is capable of generating meaningful, diverse and natural responses for both factoid questions and knowledge-grounded chit-chats.

Dialogue Generation Question Answering +1

Multimodal Neural Machine Translation for Low-resource Language Pairs using Synthetic Data

no code implementations WS 2018 Koel Dutta Chowdhury, Mohammed Hasanuzzaman, Qun Liu

In this paper, we investigate the effectiveness of training a multimodal neural machine translation (MNMT) system with image features for a low-resource language pair, Hindi and English, using synthetic data.

Machine Translation Question Answering +3

Understanding Meanings in Multilingual Customer Feedback

no code implementations5 Jun 2018 Chao-Hong Liu, Declan Groves, Akira Hayakawa, Alberto Poncelas, Qun Liu

Understanding and being able to react to customer feedback is the most fundamental task in providing good customer service.

General Classification

Refining Source Representations with Relation Networks for Neural Machine Translation

no code implementations COLING 2018 Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Although neural machine translation with the encoder-decoder framework has achieved great success recently, it still suffers from two drawbacks: forgetting distant information, an inherent disadvantage of the recurrent neural network structure, and disregarding the relationships between source words during the encoding step.

Machine Translation Memorization +1

SafeRNet: Safe Transportation Routing in the era of Internet of Vehicles and Mobile Crowd Sensing

no code implementations3 May 2018 Qun Liu, Suman Kumar, Vijay Mago

This paper proposes SafeRNet, a safe route computation framework which takes advantage of these technologies to analyze streaming traffic data and historical data to effectively infer safe routes and deliver them back to users in real time.

Unsupervised Learning using Pretrained CNN and Associative Memory Bank

no code implementations2 May 2018 Qun Liu, Supratik Mukhopadhyay

In this paper, we present a new architecture and approach for unsupervised object recognition that addresses the fine-tuning problem associated with pretrained CNN-based supervised deep learning approaches, while allowing automated feature extraction.

Few-Shot Image Classification Fine-Grained Image Classification +2

Translating Pro-Drop Languages with Reconstruction Models

1 code implementation10 Jan 2018 Long-Yue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, Qun Liu

Next, the annotated source sentence is reconstructed from hidden representations in the NMT model.

Machine Translation NMT +1

Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking

no code implementations IJCNLP 2017 Long-Yue Wang, Jinhua Du, Liangyou Li, Zhaopeng Tu, Andy Way, Qun Liu

We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management.

Dialogue Understanding Machine Translation +3

CASICT Tibetan Word Segmentation System for MLWS2017

1 code implementation17 Oct 2017 Jiawei Hu, Qun Liu

We participated in the MLWS 2017 Tibetan word segmentation task; our system is trained in an unrestricted way, by introducing a baseline system and 760,000 (76w) segmented Tibetan sentences of our own.

Refining Source Representations with Relation Networks for Neural Machine Translation

no code implementations12 Sep 2017 Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Although neural machine translation (NMT) with the encoder-decoder framework has achieved great success in recent times, it still suffers from some drawbacks: RNNs tend to forget old information which is often useful, and the encoder operates only on words without considering the relationships between them.

Machine Translation NMT +1

Information-Propogation-Enhanced Neural Machine Translation by Relation Model

no code implementations6 Sep 2017 Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu

Even though sequence-to-sequence neural machine translation (NMT) models have achieved state-of-the-art performance in recent years, it is a widespread concern that recurrent neural network (RNN) units struggle to capture long-distance state information, meaning an RNN can hardly extract features with long-term dependencies as the sequence becomes longer.

Machine Translation NMT +1