1 code implementation • ACL 2020 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words.
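The selection in the paper is learned end-to-end; purely as a rough illustration (not the paper's code), the hypothetical sketch below hard-masks all but the top-k scoring words per query:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_self_attention(Q, K, V, k=3):
    """Attend only to a subset of input words (illustrative sketch:
    hard top-k masking, not the paper's learned selective mechanism)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n, n) attention logits
    kth = np.sort(scores, axis=-1)[:, -k][:, None]  # k-th largest per row
    masked = np.where(scores >= kth, scores, -1e9)  # drop all other words
    return softmax(masked) @ V                      # weighted sum over subset

n, d = 6, 8
x = np.random.randn(n, d)
print(selective_self_attention(x, x, x, k=3).shape)  # (6, 8)
```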
no code implementations • ACL 2020 • Liang Ding, Long-Yue Wang, DaCheng Tao
Position encoding (PE), an essential part of self-attention networks (SANs), preserves word-order information for natural language processing tasks by generating fixed position indices for input sequences.
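The "fixed position indices" here are typically the sinusoidal absolute encodings of Vaswani et al. (2017), which the paper takes as its starting point; a minimal NumPy sketch:

```python
import numpy as np

def sinusoidal_position_encoding(seq_len, d_model):
    """Fixed absolute position encoding (Vaswani et al., 2017): each
    position index is mapped to sines/cosines of varying frequencies."""
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]               # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                       # even dims: sine
    pe[:, 1::2] = np.cos(angles)                       # odd dims: cosine
    return pe

print(sinusoidal_position_encoding(seq_len=50, d_model=512).shape)  # (50, 512)
```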
2 code implementations • 22 Nov 2019 • Yong Wang, Long-Yue Wang, Shuming Shi, Victor O. K. Li, Zhaopeng Tu
The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model.
no code implementations • IJCNLP 2019 • Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi
Although self-attention networks (SANs) have advanced the state-of-the-art on various NLP tasks, one criticism of SANs is their limited ability to encode the positions of input words (Shaw et al., 2018).
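The cited work of Shaw et al. (2018) addresses this by injecting relative position representations into attention; as an illustrative sketch (not the cited paper's code), the clipped relative-distance matrix their method embeds can be built as:

```python
import numpy as np

def relative_position_matrix(seq_len, max_dist=4):
    """Clipped relative distances between query and key positions,
    in the spirit of Shaw et al. (2018); each entry would index a
    learned relative-position embedding."""
    pos = np.arange(seq_len)
    rel = pos[None, :] - pos[:, None]        # (n, n) signed offsets
    rel = np.clip(rel, -max_dist, max_dist)  # clip long-range offsets
    return rel + max_dist                    # shift into [0, 2*max_dist]

print(relative_position_matrix(6, max_dist=2))
```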
no code implementations • IJCNLP 2019 • Long-Yue Wang, Zhaopeng Tu, Xing Wang, Shuming Shi
In this paper, we propose a unified and discourse-aware zero pronoun (ZP) translation approach for neural MT models.
no code implementations • IJCNLP 2019 • Shilin He, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Michael R. Lyu, Shuming Shi
Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory.
no code implementations • ACL 2019 • Xing Wang, Zhaopeng Tu, Long-Yue Wang, Shuming Shi
In this work, we present novel approaches to exploit sentential context for neural machine translation (NMT).
1 code implementation • ACL 2019 • Baosong Yang, Long-Yue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu
Self-attention networks (SANs) have attracted a lot of interest due to their high parallelization and strong performance on a variety of NLP tasks, e.g., machine translation.
no code implementations • NAACL 2019 • Jie Hao, Xing Wang, Baosong Yang, Long-Yue Wang, Jinfeng Zhang, Zhaopeng Tu
In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks.
no code implementations • NAACL 2019 • Baosong Yang, Long-Yue Wang, Derek Wong, Lidia S. Chao, Zhaopeng Tu
Self-attention networks (SANs) have drawn increasing interest due to their high parallelization in computation and flexibility in modeling dependencies.
no code implementations • 15 Feb 2019 • Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Long-Yue Wang, Shuming Shi, Tong Zhang
With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation.
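In its simplest form, layer aggregation replaces the top-layer output with a fusion of all layers; the sketch below shows a softmax-weighted sum (an assumption for illustration only; the paper studies richer aggregation schemes):

```python
import numpy as np

def aggregate_layers(layer_outputs, weights=None):
    """Fuse per-layer representations with a softmax-weighted sum
    (a minimal layer-aggregation baseline, not the paper's method)."""
    L = len(layer_outputs)
    w = np.ones(L) if weights is None else np.asarray(weights, float)
    w = np.exp(w) / np.exp(w).sum()          # normalize fusion weights
    return sum(wi * h for wi, h in zip(w, layer_outputs))

layers = [np.random.randn(6, 512) for _ in range(6)]  # 6 layers of (n, d)
print(aggregate_layers(layers).shape)  # (6, 512)
```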
no code implementations • 26 Dec 2018 • Xinwei Geng, Long-Yue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu
Neural machine translation (NMT) models generally adopt an encoder-decoder architecture for modeling the entire translation process.
no code implementations • 31 Oct 2018 • Baosong Yang, Long-Yue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu
The self-attention network (SAN) has recently attracted increasing interest due to its fully parallelized computation and flexibility in modeling dependencies.
no code implementations • EMNLP 2018 • Long-Yue Wang, Zhaopeng Tu, Andy Way, Qun Liu
Pronouns are frequently omitted in pro-drop languages, such as Chinese, generally leading to significant challenges with respect to the production of complete translations.
no code implementations • LREC 2018 • Siyou Liu, Long-Yue Wang, Chao-Hong Liu
The approach used in this paper also serves as a good example of how to boost the performance of MT systems for low-resource language pairs.
no code implementations • 5 Apr 2018 • Long-Yue Wang
Our goal is to explore domain adaptation approaches and techniques for improving the translation quality of domain-specific SMT systems.
1 code implementation • 10 Jan 2018 • Long-Yue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, Qun Liu
Next, the annotated source sentence is reconstructed from hidden representations in the NMT model.
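A toy sketch of such a reconstruction objective: hidden states are projected back to the source vocabulary and scored against the annotated source sentence (the projection W_vocab and the exact loss form are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def reconstruction_loss(hidden_states, source_ids, W_vocab):
    """Auxiliary loss (sketch): how well do the model's hidden states
    regenerate the (annotated) source sentence?"""
    logits = hidden_states @ W_vocab              # (n, vocab)
    logits -= logits.max(axis=-1, keepdims=True)  # stable softmax
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    nll = -np.log(probs[np.arange(len(source_ids)), source_ids] + 1e-9)
    return nll.mean()  # negative log-likelihood of reconstruction

n, d, vocab = 5, 16, 100
h = np.random.randn(n, d)
src = np.random.randint(0, vocab, size=n)
print(reconstruction_loss(h, src, np.random.randn(d, vocab) * 0.1))
```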
no code implementations • IJCNLP 2017 • Long-Yue Wang, Jinhua Du, Liangyou Li, Zhaopeng Tu, Andy Way, Qun Liu
We showcase TODAY, a semantics-enhanced task-oriented dialogue translation system, whose novelties are: (i) task-oriented named entity (NE) definition and a hybrid strategy for NE recognition and translation; and (ii) a novel grounded semantic method for dialogue understanding and task-order management.
1 code implementation • EMNLP 2017 • Long-Yue Wang, Zhaopeng Tu, Andy Way, Qun Liu
In translation, considering the document as a whole can help to resolve ambiguities and inconsistencies.
no code implementations • LREC 2016 • Long-Yue Wang, Xiaojun Zhang, Zhaopeng Tu, Andy Way, Qun Liu
Then tags such as speaker and discourse boundary from the script data are projected onto the corresponding subtitle data via an information retrieval approach, in order to map monolingual discourse annotations onto bilingual texts.
no code implementations • NAACL 2016 • Long-Yue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, Qun Liu
Finally, we integrate the above outputs into our translation system to recall missing pronouns by both extracting rules from the DP-labelled training data and translating the DP-generated input sentences.
no code implementations • LREC 2014 • Liang Tian, Derek F. Wong, Lidia S. Chao, Paulo Quaresma, Francisco Oliveira, Yi Lu, Shuo Li, Yiming Wang, Long-Yue Wang
This paper describes the acquisition of large-scale, high-quality parallel corpora for English and Chinese.