Search Results for author: Wenxiang Jiao

Found 10 papers, 7 papers with code

Understanding and Mitigating the Uncertainty in Zero-Shot Translation

no code implementations · 20 May 2022 · Wenxuan Wang, Wenxiang Jiao, Shuo Wang, Zhaopeng Tu, Michael R. Lyu

Zero-shot translation is a promising direction for building a comprehensive multilingual neural machine translation (MNMT) system.

Machine Translation · Translation

Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation

no code implementations · ACL 2022 · Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, Michael Lyu

In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT).

Machine Translation · Translation

Self-Training Sampling with Monolingual Data Uncertainty for Neural Machine Translation

1 code implementation · ACL 2021 · Wenxiang Jiao, Xing Wang, Zhaopeng Tu, Shuming Shi, Michael R. Lyu, Irwin King

In this work, we propose to improve the sampling procedure by selecting the most informative monolingual sentences to complement the parallel data (see the sketch below).

Machine Translation · Translation
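To illustrate the selection idea, here is a minimal Python sketch. The `uncertainty` scoring function is a stand-in (for example, the average per-token negative log-probability of the model's own translation); the paper's exact uncertainty measure may differ.

    def select_informative(mono_sentences, uncertainty, k):
        # Keep the k most uncertain monolingual sentences; these are then
        # translated and added as synthetic parallel data for self-training.
        return sorted(mono_sentences, key=uncertainty, reverse=True)[:k]

    # Toy usage with precomputed scores standing in for a real NMT model.
    scores = {"sent_a": 0.2, "sent_b": 1.7, "sent_c": 0.9}
    print(select_informative(list(scores), scores.get, k=2))  # ['sent_b', 'sent_c']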

Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation

1 code implementation · NAACL 2021 · Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, Xing Wang

In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.

Knowledge Distillation · Machine Translation · +2

Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation

1 code implementation · EMNLP 2020 · Wenxiang Jiao, Xing Wang, Shilin He, Irwin King, Michael R. Lyu, Zhaopeng Tu

First, we train an identification model on the original training data and use it to distinguish inactive examples from active examples by their sentence-level output probabilities (see the sketch below).

Machine Translation · Translation
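The identification step can be sketched as follows. The sentence-level log-probabilities here are precomputed toy values, whereas the paper obtains them from the identification model trained on the original data.

    import math

    def split_by_activity(examples, sent_logprobs, threshold=0.1):
        # Examples whose sentence-level probability falls below the
        # threshold are treated as inactive; the rest as active.
        inactive, active = [], []
        for ex, lp in zip(examples, sent_logprobs):
            (inactive if math.exp(lp) < threshold else active).append(ex)
        return inactive, active

    pairs = [("src1", "tgt1"), ("src2", "tgt2"), ("src3", "tgt3")]
    logps = [-0.1, -5.0, -0.8]  # toy sentence-level log-probabilities
    inactive, active = split_by_activity(pairs, logps)  # inactive: [("src2", "tgt2")]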

Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network

1 code implementation · 20 Nov 2019 · Wenxiang Jiao, Michael R. Lyu, Irwin King

We propose an Attention Gated Hierarchical Memory Network (AGHMN) to address three problems in prior work (see the sketch below): (1) the convolutional neural networks (CNNs) commonly used for utterance feature extraction are less compatible with the memory modules; (2) unidirectional gated recurrent units (GRUs) give each historical utterance context only from what precedes it, preventing information propagation in the opposite direction; (3) the soft attention used for summarization loses the positional and ordering information of memories, regardless of how the memory bank is built.

Emotion Recognition in Conversation
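A simplified PyTorch sketch of the memory module, keyed to points (1)–(3): a bidirectional GRU lets context flow both ways over historical utterance features, and attention over its outputs summarizes the memory for the current utterance. The paper's attention-gated fusion is more elaborate than this skeleton.

    import torch
    import torch.nn as nn

    class BiGRUMemory(nn.Module):
        # Simplified memory module in the spirit of AGHMN (not the full model).
        def __init__(self, dim):
            super().__init__()
            self.bigru = nn.GRU(dim, dim // 2, bidirectional=True, batch_first=True)
            self.attn = nn.Linear(dim, dim)

        def forward(self, history, query):
            # history: (batch, n_utts, dim); query: (batch, dim)
            mem, _ = self.bigru(history)  # bidirectional contextual memory
            scores = torch.bmm(mem, self.attn(query).unsqueeze(2)).squeeze(2)
            weights = torch.softmax(scores, dim=1)  # soft attention over memories
            return torch.bmm(weights.unsqueeze(1), mem).squeeze(1)

    memory = BiGRUMemory(8)
    summary = memory(torch.randn(2, 5, 8), torch.randn(2, 8))  # shape (2, 8)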

Improving Word Representations: A Sub-sampled Unigram Distribution for Negative Sampling

no code implementations · 21 Oct 2019 · Wenxiang Jiao, Irwin King, Michael R. Lyu

Word2Vec is the most popular model for word representation and has been widely investigated in the literature (see the sketch below).

Sentence Completion
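One plausible reading of the title's proposal, sketched below: build the noise distribution for negative sampling from sub-sampled word counts, re-weighting each count by the word2vec sub-sampling keep-probability min(1, sqrt(t/f)), rather than from raw unigram counts. The threshold t and the exact distribution used in the paper may differ.

    import random

    def subsampled_unigram(counts, t=1e-3):
        # Re-weight each word's count by the word2vec sub-sampling
        # keep-probability min(1, sqrt(t / f)) and renormalize.
        total = sum(counts.values())
        weights = {w: c * min(1.0, (t / (c / total)) ** 0.5) for w, c in counts.items()}
        z = sum(weights.values())
        return {w: v / z for w, v in weights.items()}

    counts = {"the": 100000, "cat": 50, "sat": 40}
    dist = subsampled_unigram(counts)  # frequent words are heavily down-weighted
    negatives = random.choices(list(dist), weights=list(dist.values()), k=5)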

PT-CoDE: Pre-trained Context-Dependent Encoder for Utterance-level Emotion Recognition

1 code implementation · 20 Oct 2019 · Wenxiang Jiao, Michael R. Lyu, Irwin King

Witnessing the success of transfer learning in natural language processing (NLP), we propose to pre-train a context-dependent encoder (CoDE) for ULER by learning from unlabeled conversation data (see the sketch below).

Emotion Recognition · Text Classification · +1
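A bare-bones PyTorch sketch of the transfer-learning setup: pre-train an encoder on unlabeled conversations, then reuse it with an emotion classifier. The pre-training objective below (predicting the next utterance's features from the context so far) is only a placeholder for the paper's actual task.

    import torch
    import torch.nn as nn

    encoder = nn.GRU(16, 16, batch_first=True)  # stand-in context-dependent encoder
    convs = torch.randn(4, 10, 16)              # toy unlabeled utterance features

    # Pre-training step with a placeholder self-supervised objective.
    head = nn.Linear(16, 16)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
    out, _ = encoder(convs)
    loss = nn.functional.mse_loss(head(out[:, :-1]), convs[:, 1:])
    loss.backward()
    opt.step()

    # Fine-tuning: the pre-trained encoder feeds an utterance-level
    # emotion classifier trained on labeled conversations.
    clf = nn.Linear(16, 6)  # 6 toy emotion classes
    labels = torch.randint(0, 6, (4, 10))
    logits = clf(encoder(convs)[0])
    ft_loss = nn.functional.cross_entropy(logits.reshape(-1, 6), labels.reshape(-1))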

HiGRU: Hierarchical Gated Recurrent Units for Utterance-level Emotion Recognition

1 code implementation · NAACL 2019 · Wenxiang Jiao, Haiqin Yang, Irwin King, Michael R. Lyu

In this paper, we address three challenges in utterance-level emotion recognition in dialogue systems (see the sketch below): (1) the same word can deliver different emotions in different contexts; (2) some emotions are rarely seen in general dialogues; (3) long-range contextual information is hard to capture effectively.

Emotion Recognition
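A minimal two-level GRU sketch of the hierarchical idea: a lower GRU encodes each utterance from word embeddings, and an upper GRU runs over the utterance embeddings to capture long-range conversational context. The published model adds attention and feature fusion on top of this skeleton.

    import torch
    import torch.nn as nn

    class HiGRUSketch(nn.Module):
        def __init__(self, emb_dim, hid, n_emotions):
            super().__init__()
            self.word_gru = nn.GRU(emb_dim, hid, batch_first=True)  # word level
            self.utt_gru = nn.GRU(hid, hid, batch_first=True)       # utterance level
            self.out = nn.Linear(hid, n_emotions)

        def forward(self, conv):
            # conv: (n_utts, n_words, emb_dim) for one conversation
            _, h = self.word_gru(conv)       # utterance embeddings: (1, n_utts, hid)
            ctx, _ = self.utt_gru(h)         # contextualize across utterances
            return self.out(ctx.squeeze(0))  # per-utterance emotion logits

    model = HiGRUSketch(emb_dim=8, hid=16, n_emotions=6)
    logits = model(torch.randn(5, 12, 8))    # 5 utterances, 12 words each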
