Search Results for author: Zhaopeng Tu

Found 120 papers, 51 papers with code

Tencent AI Lab Machine Translation Systems for the WMT20 Biomedical Translation Task

1 code implementation WMT (EMNLP) 2020 Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi

This paper describes the Tencent AI Lab submission to the WMT2020 shared task on biomedical translation in four language directions: German→English, English→German, Chinese→English and English→Chinese.

Machine Translation Translation

Tencent AI Lab Machine Translation Systems for the WMT21 Biomedical Translation Task

no code implementations WMT (EMNLP) 2021 Xing Wang, Zhaopeng Tu, Shuming Shi

This paper describes the Tencent AI Lab submission to the WMT2021 shared task on biomedical translation in eight language directions: English↔German, English↔French, English↔Spanish and English↔Russian.

Machine Translation Translation

Tencent Translation System for the WMT21 News Translation Task

no code implementations WMT (EMNLP) 2021 Longyue Wang, Mu Li, Fangxu Liu, Shuming Shi, Zhaopeng Tu, Xing Wang, Shuangzhi Wu, Jiali Zeng, Wen Zhang

Building on our success in the last WMT, we continued to employ advanced techniques such as large-batch training, data selection and data filtering.

Data Augmentation Translation

RAST: Domain-Robust Dialogue Rewriting as Sequence Tagging

no code implementations EMNLP 2021 Jie Hao, Linfeng Song, LiWei Wang, Kun Xu, Zhaopeng Tu, Dong Yu

The task of dialogue rewriting aims to reconstruct the latest dialogue utterance by copying the missing content from the dialogue context.

Dialogue Rewriting Text Generation

Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation

1 code implementation ACL 2022 Liang Ding, Longyue Wang, Shuming Shi, DaCheng Tao, Zhaopeng Tu

In this work, we provide an appealing alternative for NAT: monolingual KD, which trains the NAT student on external monolingual data with an AT teacher trained on the original bilingual data.

Knowledge Distillation Translation +1

How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments

no code implementations 18 Mar 2024 Jen-tse Huang, Eric John Li, Man Ho Lam, Tian Liang, Wenxuan Wang, Youliang Yuan, Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Michael R. Lyu

Additionally, we conduct evaluations across various LLMs and find that GPT-4 outperforms other models on GAMA-Bench, achieving a score of 72.5.

Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models

no code implementations 21 Feb 2024 Zhiwei He, Binglin Zhou, Hongkun Hao, Aiwei Liu, Xing Wang, Zhaopeng Tu, Zhuosheng Zhang, Rui Wang

Furthermore, we analyze two key factors that contribute to the cross-lingual consistency in text watermarking and propose a defense method that increases the AUC from 0.67 to 0.88 under CWRA.

TAG

VN Network: Embedding Newly Emerging Entities with Virtual Neighbors

no code implementations 21 Feb 2024 Yongquan He, Zihan Wang, Peng Zhang, Zhaopeng Tu, Zhaochun Ren

To address this issue, recent works apply graph neural networks to the existing neighbors of the unseen entities.

Knowledge Graph Completion Network Embedding

Unsupervised Sign Language Translation and Generation

no code implementations 12 Feb 2024 Zhengsheng Guo, Zhiwei He, Wenxiang Jiao, Xing Wang, Rui Wang, Kehai Chen, Zhaopeng Tu, Yong Xu, Min Zhang

Motivated by the success of unsupervised neural machine translation (UNMT), we introduce an unsupervised sign language translation and generation network (USLNet), which learns from abundant single-modality (text and video) data without parallel sign language data.

Machine Translation Sign Language Translation +1

Revisiting the Markov Property for Machine Translation

no code implementations 3 Feb 2024 Cunxiao Du, Hao Zhou, Zhaopeng Tu, Jing Jiang

In this paper, we re-examine the Markov property in the context of neural machine translation.

Machine Translation Translation

GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding

no code implementations 3 Feb 2024 Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You

Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs.

Benchmarking LLMs via Uncertainty Quantification

1 code implementation 23 Jan 2024 Fanghua Ye, Mingming Yang, Jianhui Pang, Longyue Wang, Derek F. Wong, Emine Yilmaz, Shuming Shi, Zhaopeng Tu

The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods.

Benchmarking Uncertainty Quantification

Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model

1 code implementation 23 Jan 2024 Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, Zhaopeng Tu

In this work, we investigate the potential of employing the QE model as the reward model (the QE-based reward model) to predict human preferences for feedback training.

Machine Translation Translation
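
From the description above, one can sketch how a QE model might slot in as a reward signal for feedback training. The following is a minimal policy-gradient sketch, not the authors' released code; `model.sample` and `qe_score` are hypothetical interfaces standing in for an NMT policy and a quality-estimation scorer.

```python
def qe_feedback_step(model, optimizer, src_batch, qe_score):
    """One hypothetical feedback-training step: sample translations,
    score them with the QE model as a reward, and reinforce
    high-reward samples (plain REINFORCE, no baseline or PPO)."""
    hyps, log_probs = model.sample(src_batch)  # sampled translations + sentence log-probs
    rewards = [qe_score(src, hyp) for src, hyp in zip(src_batch, hyps)]
    loss = -sum(r * lp for r, lp in zip(rewards, log_probs))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```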

Salute the Classic: Revisiting Challenges of Machine Translation in the Age of Large Language Models

1 code implementation 16 Jan 2024 Jianhui Pang, Fanghua Ye, Longyue Wang, Dian Yu, Derek F. Wong, Shuming Shi, Zhaopeng Tu

This study revisits these challenges, offering insights into their ongoing relevance in the context of advanced Large Language Models (LLMs): domain mismatch, amount of parallel data, rare word prediction, translation of long sentences, attention model as word alignment, and sub-optimal beam search.

Machine Translation NMT +2

The Earth is Flat? Unveiling Factual Errors in Large Language Models

no code implementations 1 Jan 2024 Wenxuan Wang, Juluan Shi, Zhaopeng Tu, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, Michael R. Lyu

Current methods for evaluating LLMs' veracity are limited by test data leakage or the need for extensive human labor, hindering efficient and accurate error detection.

In-Context Learning Multiple-choice

GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation

no code implementations 25 Nov 2023 Zhanyu Wang, Longyue Wang, Zhen Zhao, Minghao Wu, Chenyang Lyu, Huayang Li, Deng Cai, Luping Zhou, Shuming Shi, Zhaopeng Tu

While the recent advances in Multimodal Large Language Models (MLLMs) constitute a significant leap forward in the field, these models are predominantly confined to the realm of input-side multimodal comprehension, lacking the capacity for multimodal content generation.

Instruction Following Language Modelling +7

Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models

1 code implementation 31 Oct 2023 Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, Xing Wang

Ideally, an advanced agent should possess the ability to accurately describe a given word using an aggressive description while concurrently maximizing confusion in the conservative description, enhancing its participation in the game.

A Systematic Evaluation of GPT-4V's Multimodal Capability for Medical Image Analysis

no code implementations 31 Oct 2023 Yingshu Li, Yunyi Liu, Zhanyu Wang, Xinyu Liang, Lei Wang, Lingqiao Liu, Leyang Cui, Zhaopeng Tu, Longyue Wang, Luping Zhou

This work conducts an evaluation of GPT-4V's multimodal capability for medical image analysis, with a focus on three representative tasks of radiology report generation, medical visual question answering, and medical visual grounding.

Descriptive Medical Visual Question Answering +3

Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models

no code implementations 19 Oct 2023 Wenxuan Wang, Wenxiang Jiao, Jingyuan Huang, Ruyi Dai, Jen-tse Huang, Zhaopeng Tu, Michael R. Lyu

This paper identifies a cultural dominance issue within large language models (LLMs) due to the predominant use of English data in model training (e.g., ChatGPT).

All Languages Matter: On the Multilingual Safety of Large Language Models

1 code implementation 2 Oct 2023 Wenxuan Wang, Zhaopeng Tu, Chang Chen, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, Michael R. Lyu

In this work, we build the first multilingual safety benchmark for LLMs, XSafety, in response to the global deployment of LLMs in practice.

Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench

1 code implementation 2 Oct 2023 Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu

Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education.

Benchmarking

GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher

1 code implementation 12 Aug 2023 Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Pinjia He, Shuming Shi, Zhaopeng Tu

We propose a novel framework CipherChat to systematically examine the generalizability of safety alignment to non-natural languages -- ciphers.

Ethics

Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench

1 code implementation 7 Aug 2023 Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu

Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse.

Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling

no code implementations 16 Jul 2023 Longyue Wang, Zefeng Du, Donghuai Liu, Deng Cai, Dian Yu, Haiyun Jiang, Yan Wang, Leyang Cui, Shuming Shi, Zhaopeng Tu

Modeling discourse, the linguistic phenomena that go beyond individual sentences, is a fundamental yet challenging aspect of natural language processing (NLP).

Language Modelling Sentence

On the Cultural Gap in Text-to-Image Generation

no code implementations 6 Jul 2023 Bingshuai Liu, Longyue Wang, Chenyang Lyu, Yong Zhang, Jinsong Su, Shuming Shi, Zhaopeng Tu

Accordingly, we propose a novel multi-modal metric that considers object-text alignment to filter the fine-tuning data in the target culture, which is used to fine-tune a T2I model to improve cross-cultural generation.

Text-to-Image Generation

Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration

1 code implementation 15 Jun 2023 Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu

Although instruction-tuned large language models (LLMs) have exhibited remarkable capabilities across various NLP tasks, their effectiveness on other data modalities beyond text has not been fully studied.

Language Modelling

Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate

1 code implementation 30 May 2023 Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi

To address the DoT problem, we propose a Multi-Agent Debate (MAD) framework, in which multiple agents express their arguments in the state of "tit for tat" and a judge manages the debate process to obtain a final solution.

Arithmetic Reasoning Machine Translation
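
The abstract's "tit for tat" setup lends itself to a short sketch. Below is a minimal debate loop under assumed interfaces: `llm` is a hypothetical prompt-to-text callable, and the prompts, roles, and round count are illustrative rather than the paper's exact protocol.

```python
def multi_agent_debate(question, llm, rounds=3):
    """Minimal MAD-style loop: two agents argue in a tit-for-tat
    fashion, then a judge reads the transcript and decides."""
    transcript = [("affirmative", llm(f"Answer concisely: {question}"))]
    for _ in range(rounds):
        rebuttal = llm(f"Question: {question}\n"
                       f"Opponent said: {transcript[-1][1]}\n"
                       "Disagree and give your strongest counter-argument.")
        transcript.append(("negative", rebuttal))
        defense = llm(f"Question: {question}\n"
                      f"Opponent said: {rebuttal}\n"
                      "Defend or revise your answer.")
        transcript.append(("affirmative", defense))
    history = "\n".join(f"{role}: {text}" for role, text in transcript)
    return llm("You are the judge. Read the debate below and state the "
               f"final solution.\n{history}")
```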

Revisiting Non-Autoregressive Translation at Scale

1 code implementation 25 May 2023 Zhihao Wang, Longyue Wang, Jinsong Su, Junfeng Yao, Zhaopeng Tu

Experimental results on the large-scale WMT20 En-De show that the asymmetric architecture (e.g., a bigger encoder and a smaller decoder) can achieve performance comparable with the scaling model, while maintaining the decoding-speed advantage of standard NAT models.

Translation

A Survey on Zero Pronoun Translation

no code implementations 17 May 2023 Longyue Wang, Siyou Liu, Mingzhou Xu, Linfeng Song, Shuming Shi, Zhaopeng Tu

Zero pronouns (ZPs) are frequently omitted in pro-drop languages (e.g., Chinese, Hungarian, and Hindi), but should be recalled in non-pro-drop languages (e.g., English).

Language Modelling Large Language Model +2

Exploring Human-Like Translation Strategy with Large Language Models

2 code implementations 6 May 2023 Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, Xing Wang

Compared to typical machine translation that focuses solely on source-to-target mapping, LLM-based translation can potentially mimic the human translation process which might take preparatory steps to ensure high-quality translation.

Hallucination Machine Translation +2

Document-Level Machine Translation with Large Language Models

1 code implementation 5 Apr 2023 Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, Zhaopeng Tu

Large language models (LLMs) such as ChatGPT can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks.

Document Level Machine Translation Machine Translation +1

ParroT: Translating during Chat using Large Language Models tuned with Human Translation and Feedback

1 code implementation 5 Apr 2023 Wenxiang Jiao, Jen-tse Huang, Wenxuan Wang, Zhiwei He, Tian Liang, Xing Wang, Shuming Shi, Zhaopeng Tu

Therefore, we propose ParroT, a framework to enhance and regulate the translation abilities during chat based on open-source LLMs (e.g., LLaMA), human-written translation and feedback data.

Instruction Following Machine Translation +1

Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production

1 code implementation 16 Feb 2023 Ante Wang, Linfeng Song, Qi Liu, Haitao Mi, Longyue Wang, Zhaopeng Tu, Jinsong Su, Dong Yu

We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.

Chatbot Response Generation

Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine

1 code implementation 20 Jan 2023 Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, Shuming Shi, Zhaopeng Tu

By evaluating on a number of benchmark test sets, we find that ChatGPT performs competitively with commercial translation products (e.g., Google Translate) on high-resource European languages but lags behind significantly on low-resource or distant languages.

Machine Translation Sentence +1

Adapters for Enhanced Modeling of Multilingual Knowledge and Text

1 code implementation 24 Oct 2022 Yifan Hou, Wenxiang Jiao, Meizhen Liu, Carl Allen, Zhaopeng Tu, Mrinmaya Sachan

Specifically, we introduce a lightweight adapter set to enhance MLLMs with cross-lingual entity alignment and facts from MLKGs for many languages.

Entity Alignment

Tencent's Multilingual Machine Translation System for WMT22 Large-Scale African Languages

1 code implementation 18 Oct 2022 Wenxiang Jiao, Zhaopeng Tu, Jiarui Li, Wenxuan Wang, Jen-tse Huang, Shuming Shi

This paper describes Tencent's multilingual machine translation systems for the WMT22 shared task on Large-Scale Machine Translation Evaluation for African Languages.

Data Augmentation Machine Translation +1

Scaling Back-Translation with Domain Text Generation for Sign Language Gloss Translation

1 code implementation 13 Oct 2022 Jinhui Ye, Wenxiang Jiao, Xing Wang, Zhaopeng Tu

In this paper, to overcome this limitation, we propose a Prompt-based domain text Generation (PGEN) approach to produce large-scale in-domain spoken language text data.

Language Modelling Text Generation +1

ngram-OAXE: Phrase-Based Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation

no code implementations COLING 2022 Cunxiao Du, Zhaopeng Tu, Longyue Wang, Jing Jiang

Recently, a new training objective, the OaXE loss, has proven effective in ameliorating the effect of multimodality for non-autoregressive translation (NAT) by removing the penalty on word order errors in the standard cross-entropy loss.

Machine Translation Sentence +1

A Template-based Method for Constrained Neural Machine Translation

1 code implementation 23 May 2022 Shuo Wang, Peng Li, Zhixing Tan, Zhaopeng Tu, Maosong Sun, Yang Liu

In this work, we propose a template-based method that can yield results with high translation quality and match accuracy, and whose inference speed is comparable with unconstrained NMT models.

Machine Translation NMT +1

Understanding and Mitigating the Uncertainty in Zero-Shot Translation

no code implementations 20 May 2022 Wenxuan Wang, Wenxiang Jiao, Shuo Wang, Zhaopeng Tu, Michael R. Lyu

Zero-shot translation is a promising direction for building a comprehensive multilingual neural machine translation (MNMT) system.

Machine Translation Translation

Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation

no code implementations ACL 2022 Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, Michael Lyu

In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT).

Machine Translation NMT +1

Bridging the Data Gap between Training and Inference for Unsupervised Neural Machine Translation

1 code implementation ACL 2022 Zhiwei He, Xing Wang, Rui Wang, Shuming Shi, Zhaopeng Tu

By carefully designing experiments, we identify two representative characteristics of the data gap in source: (1) style gap (i.e., translated vs. natural text style) that leads to poor generalization capability; (2) content gap that induces the model to produce hallucination content biased towards the target language.

Hallucination Machine Translation +1

On the Complementarity between Pre-Training and Back-Translation for Neural Machine Translation

1 code implementation Findings (EMNLP) 2021 Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, Zhaopeng Tu

Pre-training (PT) and back-translation (BT) are two simple and powerful methods to utilize monolingual data for improving the model performance of neural machine translation (NMT).

Machine Translation NMT +2

Language Models are Good Translators

no code implementations 25 Jun 2021 Shuo Wang, Zhaopeng Tu, Zhixing Tan, Wenxuan Wang, Maosong Sun, Yang Liu

Inspired by the recent progress of large-scale pre-trained language models on machine translation in a limited scenario, we first demonstrate that a single language model (LM4MT) can achieve comparable performance with strong encoder-decoder NMT models on standard machine translation benchmarks, using the same training data and a similar amount of model parameters.

Language Modelling Machine Translation +2

Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation

1 code implementation 9 Jun 2021 Cunxiao Du, Zhaopeng Tu, Jing Jiang

We propose a new training objective named order-agnostic cross entropy (OaXE) for fully non-autoregressive translation (NAT) models.

Machine Translation Translation
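
One way to realize an order-agnostic loss is to score the reference under its best possible alignment to the model's positional predictions, which is a bipartite matching problem. A minimal single-sentence sketch (assuming no padding, and choosing SciPy's Hungarian solver for the matching):

```python
import torch
from scipy.optimize import linear_sum_assignment

def oaxe_loss(log_probs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Order-agnostic cross entropy for one sentence (sketch).

    log_probs: (T, V) per-position token log-probabilities from the NAT model.
    target:    (T,)   reference token ids, whose order is not penalized.
    """
    # cost[i, j] = -log P(reference token j | output position i)
    cost = -log_probs[:, target].detach().cpu().numpy()  # (T, T)
    _, cols = linear_sum_assignment(cost)                # best position-to-token matching
    perm = torch.as_tensor(cols)
    positions = torch.arange(target.size(0))
    return -log_probs[positions, target[perm]].sum()     # NLL of the best ordering
```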

On the Language Coverage Bias for Neural Machine Translation

no code implementations Findings (ACL) 2021 Shuo Wang, Zhaopeng Tu, Zhixing Tan, Shuming Shi, Maosong Sun, Yang Liu

Language coverage bias, which indicates the content-dependent differences between sentence pairs originating from the source and target languages, is important for neural machine translation (NMT) because the target-original training data is not well exploited in current practice.

Data Augmentation Machine Translation +3

Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation

1 code implementation ACL 2021 Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Shuming Shi, Michael R. Lyu, Irwin King

In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data.

Machine Translation NMT +1
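
As a concrete but deliberately simplified illustration of uncertainty-based selection, the sketch below ranks monolingual sentences by the average translation entropy of their words, one plausible informativeness proxy; `trans_table`, the entropy measure, and `budget` are assumptions, not the paper's exact recipe.

```python
import math

def word_translation_entropy(sentence, trans_table):
    """Average entropy of each word's translation distribution, where
    trans_table maps a source word to {target word: probability}
    (hypothetically extracted from the parallel data)."""
    entropies = []
    for word in sentence.split():
        probs = trans_table.get(word, {word: 1.0}).values()
        entropies.append(-sum(p * math.log(p) for p in probs))
    return sum(entropies) / max(len(entropies), 1)

def sample_for_self_training(mono_sents, trans_table, budget):
    # Prefer the most informative (highest-uncertainty) monolingual sentences.
    ranked = sorted(mono_sents,
                    key=lambda s: word_translation_entropy(s, trans_table),
                    reverse=True)
    return ranked[:budget]
```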

Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation

1 code implementation ACL 2021 Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, DaCheng Tao, Zhaopeng Tu

Results demonstrate that the proposed approach can significantly and universally improve translation quality by reducing translation errors on low-frequency words.

Knowledge Distillation Translation

Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning

1 code implementation ICLR 2021 Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Zhaopeng Tu

Encoder layer fusion (EncoderFusion) is a technique to fuse all the encoder layers (instead of the uppermost layer) for sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks.

Grammatical Error Correction Machine Translation +3

Understanding and Improving Lexical Choice in Non-Autoregressive Translation

no code implementations ICLR 2021 Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, DaCheng Tao, Zhaopeng Tu

To this end, we introduce an extra Kullback-Leibler divergence term derived by comparing the lexical choice of NAT model and that embedded in the raw data.

Knowledge Distillation Translation

Robust Dialogue Utterance Rewriting as Sequence Tagging

1 code implementation 29 Dec 2020 Jie Hao, Linfeng Song, LiWei Wang, Kun Xu, Zhaopeng Tu, Dong Yu

The task of dialogue rewriting aims to reconstruct the latest dialogue utterance by copying the missing content from the dialogue context.

Dialogue Rewriting Text Generation

EmpDG: Multi-resolution Interactive Empathetic Dialogue Generation

1 code implementation COLING 2020 Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu, Zhumin Chen

In response to this problem, we propose a multi-resolution adversarial model, EmpDG, to generate more empathetic responses.

Dialogue Generation

Rethinking the Value of Transformer Components

no code implementations COLING 2020 Wenxuan Wang, Zhaopeng Tu

Transformer has become the state-of-the-art translation model, yet it is not well studied how each intermediate component contributes to model performance, which poses significant challenges for designing optimal architectures.

Translation

Context-Aware Cross-Attention for Non-Autoregressive Translation

1 code implementation COLING 2020 Liang Ding, Longyue Wang, Di Wu, DaCheng Tao, Zhaopeng Tu

Non-autoregressive translation (NAT) significantly accelerates the inference process by predicting the entire target sequence.

Translation

Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation

1 code implementation NAACL 2021 Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, Xing Wang

In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.

Knowledge Distillation Machine Translation +2

Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation

1 code implementation EMNLP 2020 Wenxiang Jiao, Xing Wang, Shilin He, Irwin King, Michael R. Lyu, Zhaopeng Tu

First, we train an identification model on the original training data, and use it to distinguish inactive examples and active examples by their sentence-level output probabilities.

Machine Translation NMT +2
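
The identification step described above can be sketched in a few lines: score every training pair with the identification model's sentence-level probability and peel off the lowest-scoring fraction as inactive examples. `sent_logprob` and the 10% cutoff are hypothetical stand-ins, not values from the paper.

```python
def split_by_activity(pairs, sent_logprob, inactive_frac=0.1):
    """Rank (src, tgt) training pairs by the identification model's
    sentence-level output probability; the lowest-scoring fraction is
    treated as inactive and becomes the candidate set for rejuvenation."""
    ranked = sorted(pairs, key=lambda p: sent_logprob(p[0], p[1]))
    cutoff = int(len(ranked) * inactive_frac)
    inactive, active = ranked[:cutoff], ranked[cutoff:]
    return inactive, active
```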

On the Sub-Layer Functionalities of Transformer Decoder

no code implementations Findings of the Association for Computational Linguistics 2020 Yilin Yang, Longyue Wang, Shuming Shi, Prasad Tadepalli, Stefan Lee, Zhaopeng Tu

There have been significant efforts to interpret the encoder of Transformer-based encoder-decoder architectures for neural machine translation (NMT); meanwhile, the decoder remains largely unexamined despite its critical role.

Machine Translation NMT +1

On the Sparsity of Neural Machine Translation Models

no code implementations EMNLP 2020 Yong Wang, Longyue Wang, Victor O. K. Li, Zhaopeng Tu

Modern neural machine translation (NMT) models employ a large number of parameters, which leads to serious over-parameterization and typically causes the underutilization of computational resources.

Machine Translation NMT +1

How Does Selective Mechanism Improve Self-Attention Networks?

1 code implementation ACL 2020 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.

Machine Translation Natural Language Inference +2

On the Inference Calibration of Neural Machine Translation

1 code implementation ACL 2020 Shuo Wang, Zhaopeng Tu, Shuming Shi, Yang Liu

Confidence calibration, which aims to make model predictions equal to the true correctness measures, is important for neural machine translation (NMT) because it is able to offer useful indicators of translation errors in the generated output.

Machine Translation NMT +1

Assessing the Bilingual Knowledge Learned by Neural Machine Translation Models

no code implementations 28 Apr 2020 Shilin He, Xing Wang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu

In this paper, we bridge the gap by assessing the bilingual knowledge learned by NMT models with a phrase table, an interpretable table of bilingual lexicons.

Machine Translation NMT +1

Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks

2 code implementations 22 Nov 2019 Yong Wang, Long-Yue Wang, Shuming Shi, Victor O. K. Li, Zhaopeng Tu

The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model.

General Knowledge Knowledge Distillation +3

Neuron Interaction Based Representation Composition for Neural Machine Translation

no code implementations 22 Nov 2019 Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu

Starting from this intuition, we propose a novel approach to compose representations learned by different components in neural machine translation (e.g., multi-layer networks or multi-head attention), based on modeling strong interactions among neurons in the representation vectors.

Machine Translation Translation

EmpDG: Multiresolution Interactive Empathetic Dialogue Generation

1 code implementation 20 Nov 2019 Qintong Li, Hongshen Chen, Zhaochun Ren, Pengjie Ren, Zhaopeng Tu, Zhumin Chen

In response to this problem, we propose a multi-resolution adversarial model -- EmpDG, to generate more empathetic responses.

Dialogue Generation

Retrieval-guided Dialogue Response Generation via a Matching-to-Generation Framework

no code implementations IJCNLP 2019 Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Shuming Shi

End-to-end sequence generation is a popular technique for developing open domain dialogue systems, though such systems suffer from the "safe response problem".

Response Generation Retrieval

Multi-Granularity Self-Attention for Neural Machine Translation

no code implementations IJCNLP 2019 Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

Current state-of-the-art neural machine translation (NMT) uses a deep multi-head self-attention network with no explicit phrase information.

Machine Translation NMT +1

Towards Better Modeling Hierarchical Structure for Self-Attention with Ordered Neurons

no code implementations IJCNLP 2019 Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu

Recent studies have shown that a hybrid of self-attention networks (SANs) and recurrent neural networks (RNNs) outperforms both individual architectures, while not much is known about why the hybrid models work.

Inductive Bias Machine Translation +1

Towards Understanding Neural Machine Translation with Word Importance

no code implementations IJCNLP 2019 Shilin He, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Michael R. Lyu, Shuming Shi

Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory.

Machine Translation NMT +1

Self-Attention with Structural Position Representations

no code implementations IJCNLP 2019 Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi

Although self-attention networks (SANs) have advanced the state-of-the-art on various NLP tasks, one criticism of SANs is their limited ability to encode the positions of input words (Shaw et al., 2018).

Position Sentence +1

Exploiting Sentential Context for Neural Machine Translation

no code implementations ACL 2019 Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi

In this work, we present novel approaches to exploit sentential context for neural machine translation (NMT).

Machine Translation NMT +1

Assessing the Ability of Self-Attention Networks to Learn Word Order

1 code implementation ACL 2019 Baosong Yang, Long-Yue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu

Self-attention networks (SAN) have attracted a lot of interest due to their high parallelization and strong performance on a variety of NLP tasks, e.g., machine translation.

Machine Translation Position +1

Dynamic Past and Future for Neural Machine Translation

1 code implementation IJCNLP 2019 Zaixiang Zheng, Shu-Jian Huang, Zhaopeng Tu, Xin-yu Dai, Jia-Jun Chen

Previous studies have shown that neural machine translation (NMT) models can benefit from explicitly modeling translated (Past) and untranslated (Future) source contents; here, source words are dynamically assigned to groups of translated and untranslated contents through a parts-to-wholes assignment.

Machine Translation NMT +1

Modeling Recurrence for Transformer

no code implementations NAACL 2019 Jie Hao, Xing Wang, Baosong Yang, Long-Yue Wang, Jinfeng Zhang, Zhaopeng Tu

In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks.

Machine Translation Translation

Convolutional Self-Attention Networks

no code implementations NAACL 2019 Baosong Yang, Long-Yue Wang, Derek Wong, Lidia S. Chao, Zhaopeng Tu

Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies.

Machine Translation Translation

Information Aggregation for Multi-Head Attention with Routing-by-Agreement

no code implementations NAACL 2019 Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, Zhaopeng Tu

Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces.

Machine Translation Translation

Context-Aware Self-Attention Networks

no code implementations 15 Feb 2019 Baosong Yang, Jian Li, Derek Wong, Lidia S. Chao, Xing Wang, Zhaopeng Tu

Self-attention models have shown their flexibility in parallel computation and their effectiveness in modeling both long- and short-term dependencies.

Translation

Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement

no code implementations 15 Feb 2019 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Shuming Shi, Tong Zhang

With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation.

Machine Translation Translation

Learning to Refine Source Representations for Neural Machine Translation

no code implementations 26 Dec 2018 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.

Machine Translation NMT +2

Neural Machine Translation with Adequacy-Oriented Learning

no code implementations 21 Nov 2018 Xiang Kong, Zhaopeng Tu, Shuming Shi, Eduard Hovy, Tong Zhang

Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems such as inadequate translation.

Attribute Machine Translation +3

Convolutional Self-Attention Network

no code implementations 31 Oct 2018 Baosong Yang, Long-Yue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu

Self-attention network (SAN) has recently attracted increasing interest due to its fully parallelized computation and flexibility in modeling dependencies.

Translation

Exploiting Deep Representations for Neural Machine Translation

no code implementations EMNLP 2018 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Shuming Shi, Tong Zhang

Advanced neural machine translation (NMT) models generally implement encoder and decoder as multiple layers, which allows systems to model complex functions and capture complicated linguistic structures.

Machine Translation NMT +1

Multi-Head Attention with Disagreement Regularization

no code implementations EMNLP 2018 Jian Li, Zhaopeng Tu, Baosong Yang, Michael R. Lyu, Tong Zhang

Multi-head attention is appealing for the ability to jointly attend to information from different representation subspaces at different positions.

Translation

Learning to Jointly Translate and Predict Dropped Pronouns with a Shared Reconstruction Mechanism

no code implementations EMNLP 2018 Long-Yue Wang, Zhaopeng Tu, Andy Way, Qun Liu

Pronouns are frequently omitted in pro-drop languages, such as Chinese, generally leading to significant challenges with respect to the production of complete translations.

Machine Translation Translation

Skeleton-to-Response: Dialogue Generation Guided by Retrieval Memory

1 code implementation NAACL 2019 Deng Cai, Yan Wang, Victoria Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, Shuming Shi

Such models rely on insufficient information for generating a specific response since a certain query could be answered in multiple ways.

Dialogue Generation Information Retrieval +3

Neural Machine Translation with Key-Value Memory-Augmented Attention

no code implementations 29 Jun 2018 Fandong Meng, Zhaopeng Tu, Yong Cheng, Haiyang Wu, Junjie Zhai, Yuekui Yang, Di Wang

Although attention-based Neural Machine Translation (NMT) has achieved remarkable progress in recent years, it still suffers from issues of repeating and dropping translations.

Machine Translation NMT +2

Target Foresight Based Attention for Neural Machine Translation

no code implementations NAACL 2018 Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, Max Meng

In neural machine translation, an attention model is used to identify the aligned source words for a target word (target foresight word) in order to select translation context, but it does not make use of any information about this target foresight word at all.

Language Modelling Machine Translation +1

Towards Robust Neural Machine Translation

no code implementations ACL 2018 Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, Yang Liu

Small perturbations in the input can severely distort intermediate representations and thus impact translation quality of neural machine translation (NMT) models.

Machine Translation NMT +1

Generative Stock Question Answering

no code implementations 21 Apr 2018 Zhaopeng Tu, Yong Jiang, Xiaojiang Liu, Lei Shu, Shuming Shi

We study the problem of stock-related question answering (StockQA): automatically generating answers to stock-related questions, just like professional stock analysts providing action recommendations on stocks upon users' requests.

Question Answering Retrieval

Translating Pro-Drop Languages with Reconstruction Models

1 code implementation 10 Jan 2018 Long-Yue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, Qun Liu

Next, the annotated source sentence is reconstructed from hidden representations in the NMT model.

Machine Translation NMT +2

Modeling Past and Future for Neural Machine Translation

1 code implementation TACL 2018 Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lili Mou, Xin-yu Dai, Jia-Jun Chen, Zhaopeng Tu

The Past and Future contents are fed to both the attention model and the decoder states, which offers NMT systems the knowledge of translated and untranslated contents.

Machine Translation NMT +1

Learning to Remember Translation History with a Continuous Cache

1 code implementation TACL 2018 Zhaopeng Tu, Yang Liu, Shuming Shi, Tong Zhang

Existing neural machine translation (NMT) models generally translate sentences in isolation, missing the opportunity to take advantage of document-level information.

Machine Translation NMT +1
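
The continuous-cache idea above can be pictured as a small key-value memory carried across sentences: write (key, value) pairs while translating, and read by similarity at later decoding steps. A minimal sketch with assumed vector shapes (keys and values share dimensionality), not the paper's exact parameterization:

```python
import torch

class TranslationHistoryCache:
    """Continuous cache (sketch): stores hidden-state pairs from
    recently translated sentences and retrieves a similarity-weighted
    mixture of cached values for the current decoding step."""

    def __init__(self, capacity=512):
        self.keys, self.values, self.capacity = [], [], capacity

    def write(self, key: torch.Tensor, value: torch.Tensor):
        self.keys.append(key)
        self.values.append(value)
        self.keys = self.keys[-self.capacity:]    # drop the oldest entries
        self.values = self.values[-self.capacity:]

    def read(self, query: torch.Tensor) -> torch.Tensor:
        if not self.keys:
            return torch.zeros_like(query)
        K = torch.stack(self.keys)                 # (N, d)
        V = torch.stack(self.values)               # (N, d)
        weights = torch.softmax(K @ query, dim=0)  # dot-product similarity
        return weights @ V                         # history-aware context
```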

Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking

no code implementations IJCNLP 2017 Long-Yue Wang, Jinhua Du, Liangyou Li, Zhaopeng Tu, Andy Way, Qun Liu

We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management.

Dialogue Understanding Machine Translation +3

Chunk-Based Bi-Scale Decoder for Neural Machine Translation

1 code implementation ACL 2017 Hao Zhou, Zhaopeng Tu, Shu-Jian Huang, Xiaohua Liu, Hang Li, Jia-Jun Chen

In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities into the same time-scale of the RNN.

Machine Translation NMT +2

Modeling Source Syntax for Neural Machine Translation

no code implementations ACL 2017 Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, Guodong Zhou

Even though a linguistics-free sequence-to-sequence model in neural machine translation (NMT) has a certain capability of implicitly learning syntactic information of source sentences, this paper shows that source syntax can be explicitly incorporated into NMT effectively to provide further improvements.

Machine Translation NMT +1

Neural Machine Translation with Reconstruction

1 code implementation 7 Nov 2016 Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, Hang Li

Although end-to-end Neural Machine Translation (NMT) has achieved remarkable progress in the past two years, it suffers from a major drawback: translations generated by NMT systems often lack adequacy.

Machine Translation NMT +2

Neural Machine Translation Advised by Statistical Machine Translation

no code implementations 17 Oct 2016 Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, Min Zhang

Neural Machine Translation (NMT) is a new approach to machine translation that has made great progress in recent years.

Machine Translation NMT +1

Context Gates for Neural Machine Translation

2 code implementations TACL 2017 Zhaopeng Tu, Yang Liu, Zhengdong Lu, Xiaohua Liu, Hang Li

In neural machine translation (NMT), generation of a target word depends on both source and target contexts.

Machine Translation NMT +1
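
The gating idea is compact enough to sketch directly: a sigmoid gate computed from the two contexts decides, per dimension, how much source versus target information flows into the decoder. This simplified two-input form (with assumed projection matrices `Wz`, `Uz`) only illustrates the mechanism; the paper's exact parameterization may condition on more inputs.

```python
import torch

def context_gate(source_ctx: torch.Tensor, target_ctx: torch.Tensor,
                 Wz: torch.Tensor, Uz: torch.Tensor) -> torch.Tensor:
    """Blend source and target contexts with a learned sigmoid gate.

    source_ctx, target_ctx: (d,) context vectors
    Wz, Uz:                 (d, d) learned projections (assumed shapes)
    """
    z = torch.sigmoid(source_ctx @ Wz + target_ctx @ Uz)  # per-dimension gate
    return z * source_ctx + (1.0 - z) * target_ctx        # gated decoder input
```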

Automatic Construction of Discourse Corpora for Dialogue Translation

no code implementations LREC 2016 Long-Yue Wang, Xiaojun Zhang, Zhaopeng Tu, Andy Way, Qun Liu

Then tags such as speaker and discourse boundary from the script data are projected to its subtitle data via an information retrieval approach in order to map monolingual discourse to bilingual texts.

Information Retrieval Language Modelling +3

A Novel Approach to Dropped Pronoun Translation

no code implementations NAACL 2016 Long-Yue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, Qun Liu

Finally, we integrate the above outputs into our translation system to recall missing pronouns by both extracting rules from the DP-labelled training data and translating the DP-generated input sentences.

Machine Translation Translation

Modeling Coverage for Neural Machine Translation

3 code implementations ACL 2016 Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li

Attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate.

Machine Translation NMT +1
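
The coverage mechanism can be summarized in one decoding step: keep a running sum of past attention per source word and feed it back into the attention scores so already-translated words are down-weighted. A minimal additive-attention sketch with assumed shapes (`w` holds hypothetical projection parameters):

```python
import torch

def coverage_attention_step(dec_state, enc_states, coverage, w):
    """One decoding step of coverage-aware attention (sketch).

    dec_state:  (d,)    current decoder state
    enc_states: (S, d)  encoder annotations for S source words
    coverage:   (S,)    accumulated attention each source word has received
    w:          dict with projections "W", "U" of shape (d, d), "C" (1, d), "v" (d,)
    """
    energies = torch.tanh(enc_states @ w["U"]
                          + dec_state @ w["W"]
                          + coverage.unsqueeze(1) @ w["C"]) @ w["v"]  # (S,)
    alpha = torch.softmax(energies, dim=0)  # attention over source words
    coverage = coverage + alpha             # coverage accumulates attention
    context = alpha @ enc_states            # (d,) context vector for this step
    return context, alpha, coverage
```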

A Deep Memory-based Architecture for Sequence-to-Sequence Learning

no code implementations 22 Jun 2015 Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, Qun Liu

We propose DEEPMEMORY, a novel deep architecture for sequence-to-sequence learning, which performs the task through a series of nonlinear transformations from the representation of the input sequence (e.g., a Chinese sentence) to the final output sequence (e.g., translation to English).

Machine Translation Sentence +1

Context-Dependent Translation Selection Using Convolutional Neural Network

no code implementations IJCNLP 2015 Zhaopeng Tu, Baotian Hu, Zhengdong Lu, Hang Li

We propose a novel method for translation selection in statistical machine translation, in which a convolutional neural network is employed to judge the similarity between a phrase pair in two languages.

Machine Translation Semantic Similarity +3
