Search Results for author: Long-Yue Wang

Found 33 papers, 5 papers with code

How Does Selective Mechanism Improve Self-Attention Networks?

1 code implementation ACL 2020 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.

Machine Translation, Natural Language Inference, +2
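For readers new to this line of work, the baseline behind these papers is standard scaled dot-product self-attention (Vaswani et al., 2017). Below is a minimal NumPy sketch; the `select_mask` argument is a hypothetical illustration of attending to only a subset of input words, not the selective mechanism actually proposed in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv, select_mask=None):
    """Scaled dot-product self-attention over a sequence X of shape (n, d).

    select_mask: optional boolean vector of shape (n,) marking which input
    words may be attended to -- a toy stand-in for "concentrating on a subset
    of input words"; the paper's selective mechanism differs.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])                    # (n, n) attention logits
    if select_mask is not None:
        scores = np.where(select_mask[None, :], scores, -1e9)  # block unselected words
    return softmax(scores, axis=-1) @ V                        # (n, d) contextual vectors

# toy usage: 5 words, model dimension 8
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv, select_mask=np.array([True, True, False, True, False]))
print(out.shape)  # (5, 8)
```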

Self-Attention with Cross-Lingual Position Representation

no code implementations ACL 2020 Liang Ding, Long-Yue Wang, DaCheng Tao

Position encoding (PE), an essential part of self-attention networks (SANs), is used to preserve word order information for natural language processing tasks by generating fixed position indices for input sequences.

Machine Translation, Position, +2
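As background, such fixed position indices are commonly realized as the sinusoidal position encoding of the original Transformer, which is added to the word embeddings before self-attention. A minimal NumPy sketch of that conventional (monolingual) PE follows; the cross-lingual position representation proposed in the paper is a different construction.

```python
import numpy as np

def sinusoidal_position_encoding(seq_len, d_model):
    """Fixed sinusoidal position encoding (Vaswani et al., 2017)."""
    positions = np.arange(seq_len)[:, None]                            # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                                 # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                                   # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                              # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])                              # odd dims: cosine
    return pe

pe = sinusoidal_position_encoding(seq_len=50, d_model=16)
print(pe.shape)  # (50, 16); added to the word embeddings before self-attention
```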

Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks

2 code implementations 22 Nov 2019 Yong Wang, Long-Yue Wang, Shuming Shi, Victor O. K. Li, Zhaopeng Tu

The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model.

General Knowledge, Knowledge Distillation, +3

Towards Understanding Neural Machine Translation with Word Importance

no code implementations IJCNLP 2019 Shilin He, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Michael R. Lyu, Shuming Shi

Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory.

Machine Translation, NMT, +1

Self-Attention with Structural Position Representations

no code implementations IJCNLP 2019 Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi

Although self-attention networks (SANs) have advanced the state-of-the-art on various NLP tasks, one criticism of SANs concerns their ability to encode the positions of input words (Shaw et al., 2018).

Position, Sentence, +1

Exploiting Sentential Context for Neural Machine Translation

no code implementations ACL 2019 Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi

In this work, we present novel approaches to exploit sentential context for neural machine translation (NMT).

Machine Translation, NMT, +1

Assessing the Ability of Self-Attention Networks to Learn Word Order

1 code implementation ACL 2019 Baosong Yang, Long-Yue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu

Self-attention networks (SANs) have attracted a lot of interest due to their high parallelization and strong performance on a variety of NLP tasks, e.g., machine translation.

Machine Translation, Position, +1

Modeling Recurrence for Transformer

no code implementations NAACL 2019 Jie Hao, Xing Wang, Baosong Yang, Long-Yue Wang, Jinfeng Zhang, Zhaopeng Tu

In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks.

Machine Translation, Translation

Convolutional Self-Attention Networks

no code implementations NAACL 2019 Baosong Yang, Long-Yue Wang, Derek Wong, Lidia S. Chao, Zhaopeng Tu

Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies.

Machine Translation, Translation

Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement

no code implementations 15 Feb 2019 Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Shuming Shi, Tong Zhang

With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation.

Machine Translation, Translation

Learning to Refine Source Representations for Neural Machine Translation

no code implementations 26 Dec 2018 Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu

Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.

Machine Translation, NMT, +2
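For orientation, a deliberately minimal PyTorch sketch of the generic encoder-decoder pattern mentioned above is shown here; all names and sizes are illustrative, it uses a plain GRU with teacher forcing, and it omits attention as well as the source-representation refinement introduced in the paper.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Bare-bones GRU encoder-decoder: the encoder compresses the source
    sentence into a hidden summary; the decoder generates target words from it."""
    def __init__(self, src_vocab, tgt_vocab, d_model=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.src_emb(src_ids))               # h: final source summary
        dec_states, _ = self.decoder(self.tgt_emb(tgt_ids), h)   # teacher-forced decoding
        return self.out(dec_states)                              # logits over target vocab

model = TinyEncoderDecoder(src_vocab=8000, tgt_vocab=8000)
logits = model(torch.randint(0, 8000, (2, 7)), torch.randint(0, 8000, (2, 9)))
print(logits.shape)  # torch.Size([2, 9, 8000])
```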

Convolutional Self-Attention Network

no code implementations 31 Oct 2018 Baosong Yang, Long-Yue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu

Self-attention network (SAN) has recently attracted increasing interest due to its fully parallelized computation and flexibility in modeling dependencies.

Translation

Learning to Jointly Translate and Predict Dropped Pronouns with a Shared Reconstruction Mechanism

no code implementations EMNLP 2018 Long-Yue Wang, Zhaopeng Tu, Andy Way, Qun Liu

Pronouns are frequently omitted in pro-drop languages, such as Chinese, generally leading to significant challenges with respect to the production of complete translations.

Machine Translation, Translation

Domain Adaptation for Statistical Machine Translation

no code implementations 5 Apr 2018 Long-Yue Wang

Our goal is to explore domain adaptation approaches and techniques for improving the translation quality of domain-specific SMT systems.

Domain Adaptation, Machine Translation, +1

Chinese-Portuguese Machine Translation: A Study on Building Parallel Corpora from Comparable Texts

no code implementations LREC 2018 Siyou Liu, Long-Yue Wang, Chao-Hong Liu

The approach used in this paper also serves as a good example of how to boost the performance of MT systems for low-resource language pairs.

Machine Translation, NMT, +1

Translating Pro-Drop Languages with Reconstruction Models

1 code implementation 10 Jan 2018 Long-Yue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, Qun Liu

Next, the annotated source sentence is reconstructed from hidden representations in the NMT model.

Machine Translation, NMT, +2

Semantics-Enhanced Task-Oriented Dialogue Translation: A Case Study on Hotel Booking

no code implementations IJCNLP 2017 Long-Yue Wang, Jinhua Du, Liangyou Li, Zhaopeng Tu, Andy Way, Qun Liu

We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management.

Dialogue Understanding, Machine Translation, +3

Automatic Construction of Discourse Corpora for Dialogue Translation

no code implementations LREC 2016 Long-Yue Wang, Xiaojun Zhang, Zhaopeng Tu, Andy Way, Qun Liu

Tags such as speaker and discourse boundary from the script data are then projected onto the corresponding subtitle data via an information retrieval approach, in order to map monolingual discourse to bilingual texts.

Information Retrieval, Language Modelling, +3

A Novel Approach to Dropped Pronoun Translation

no code implementations NAACL 2016 Long-Yue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, Qun Liu

Finally, we integrate the above outputs into our translation system to recall missing pronouns by both extracting rules from the DP-labelled training data and translating the DP-generated input sentences.

Machine Translation, Translation
