1 code implementation • 30 Oct 2020 • Yunqi Cai, Lantian Li, Dong Wang, Andrew Abel
In this paper, we argue that this problem is largely attributable to the maximum-likelihood (ML) training criterion of the DNF model, which maximizes the likelihood of the observations but does not necessarily improve the Gaussianality of the latent codes.
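To make the criterion concrete, here is a minimal PyTorch sketch of ML training for a simple affine normalizing flow; the flow form, dimensionality, and toy data are illustrative assumptions, not the authors' DNF model. The loss only rewards the likelihood of the observations, with no explicit term encouraging the latent codes to be Gaussian.

import math
import torch

# Minimal sketch (not the authors' DNF implementation): maximum-likelihood
# training of an affine flow z = (x - b) * exp(-s) via change of variables,
# log p(x) = log N(z; 0, I) + log|det dz/dx|.
dim = 4
s = torch.zeros(dim, requires_grad=True)        # log-scale parameters
b = torch.zeros(dim, requires_grad=True)        # shift parameters
opt = torch.optim.Adam([s, b], lr=1e-2)

x = torch.randn(256, dim) * 3.0 + 1.0           # toy "speaker vectors"
for _ in range(200):
    z = (x - b) * torch.exp(-s)                 # forward transform
    log_det = -s.sum()                          # log|det Jacobian| of the map
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * dim * math.log(2 * math.pi)
    loss = -(log_pz + log_det).mean()           # negative log-likelihood of x
    opt.zero_grad()
    loss.backward()
    opt.step()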
1 code implementation • 7 Apr 2020 • Yunqi Cai, Lantian Li, Dong Wang, Andrew Abel
Deep speaker embedding has demonstrated state-of-the-art performance in speaker recognition tasks.
no code implementations • EMNLP 2017 • Yang Feng, Shiyue Zhang, Andi Zhang, Dong Wang, Andrew Abel
Neural machine translation (NMT) has achieved notable success in recent times; however, it is also widely recognized that this approach has limitations in handling infrequent words and word pairs.
no code implementations • ACL 2017 • Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, Andi Zhang
It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism.
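The attention step referred to above can be sketched in a few lines; the dot-product scoring and the hypothetical attend function below are illustrative assumptions, not the poem-generation model itself. At each decoding step the decoder state queries all encoder states and receives a weighted context vector.

import torch
import torch.nn.functional as F

# Minimal sketch of attention in a sequence-to-sequence model.
def attend(decoder_state, encoder_states):
    # decoder_state: (hidden,), encoder_states: (src_len, hidden)
    scores = encoder_states @ decoder_state      # dot-product scores
    weights = F.softmax(scores, dim=0)           # attention distribution
    context = weights @ encoder_states           # weighted sum of states
    return context, weights

enc = torch.randn(7, 64)     # encoder states for a 7-character source line
dec = torch.randn(64)        # current decoder hidden state
context, weights = attend(dec, enc)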
no code implementations • 9 May 2017 • Zhiyuan Tang, Dong Wang, Yixiang Chen, Lantian Li, Andrew Abel
Deep neural models, particularly the LSTM-RNN model, have shown great potential for language identification (LID).
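As an illustration of the LSTM-RNN approach to LID, here is a minimal baseline sketch; the class name, feature dimension, and last-state pooling are assumptions for the example, not the exact model studied in the paper. Frame-level acoustic features go in, and a language posterior per utterance comes out.

import torch
import torch.nn as nn

# Minimal LSTM language-identification baseline (illustrative only).
class LSTMLid(nn.Module):
    def __init__(self, feat_dim=40, hidden=256, n_langs=10):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_langs)

    def forward(self, frames):                  # frames: (batch, time, feat_dim)
        states, _ = self.lstm(frames)
        return self.out(states[:, -1])          # classify from the last state

model = LSTMLid()
logits = model(torch.randn(8, 200, 40))         # 8 utterances, 200 frames each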
no code implementations • 27 Sep 2016 • Lantian Li, Zhiyuan Tang, Dong Wang, Andrew Abel, Yang Feng, Shiyue Zhang
This paper presents a unified model that performs language and speaker recognition simultaneously.
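One common way to realize such a unified model is a shared encoder with two task heads; the sketch below is an assumption-laden illustration of that idea, not the paper's actual architecture. The two heads can be trained jointly with a combined cross-entropy loss.

import torch
import torch.nn as nn

# Hypothetical multi-task sketch: one shared recurrent encoder,
# one head for language and one for speaker.
class UnifiedLidSid(nn.Module):
    def __init__(self, feat_dim=40, hidden=256, n_langs=10, n_spks=100):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.lang_head = nn.Linear(hidden, n_langs)
        self.spk_head = nn.Linear(hidden, n_spks)

    def forward(self, frames):                  # frames: (batch, time, feat_dim)
        states, _ = self.encoder(frames)
        pooled = states.mean(dim=1)             # average over time
        return self.lang_head(pooled), self.spk_head(pooled)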
no code implementations • 10 May 2015 • Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng, Ralph Grishman
This paper presents a novel embedding model that measures the probability of each belief $\langle h, r, t, m\rangle$ in a large-scale knowledge repository via simultaneously learning distributed representations for entities ($h$ and $t$), relations ($r$), and the words in relation mentions ($m$).
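A hedged sketch of the scoring idea follows: embed the head/tail entities, the relation, and the words of the relation mention, then score the belief jointly. The translation-style score and the mention-averaging used here are illustrative assumptions, and belief_score is a hypothetical helper, not the paper's exact probability model.

import torch
import torch.nn as nn

n_entities, n_relations, vocab, dim = 1000, 50, 5000, 100
ent = nn.Embedding(n_entities, dim)
rel = nn.Embedding(n_relations, dim)
word = nn.Embedding(vocab, dim)

def belief_score(h, r, t, mention_ids):
    # Translation-style plausibility of the triple (h, r, t) ...
    kb_score = -(ent(h) + rel(r) - ent(t)).norm(p=2)
    # ... combined with agreement between the relation and its textual mention.
    mention = word(mention_ids).mean(dim=0)     # average the mention's word vectors
    text_score = -(mention - rel(r)).norm(p=2)
    return kb_score + text_score

h, r, t = torch.tensor(3), torch.tensor(7), torch.tensor(42)
mention = torch.tensor([10, 22, 305])           # word ids in the relation mention
score = belief_score(h, r, t, mention)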