Search Results for author: Andrew Abel

Found 7 papers, 2 papers with code

Deep Speaker Vector Normalization with Maximum Gaussianality Training

1 code implementation • 30 Oct 2020 • Yunqi Cai, Lantian Li, Dong Wang, Andrew Abel

In this paper, we argue that this problem largely stems from the maximum-likelihood (ML) training criterion of the DNF model, which maximizes the likelihood of the observations but does not necessarily improve the Gaussianality of the latent codes.

Speaker Recognition

Deep Normalization for Speaker Vectors

1 code implementation • 7 Apr 2020 • Yunqi Cai, Lantian Li, Dong Wang, Andrew Abel

Deep speaker embedding has demonstrated state-of-the-art performance in speaker recognition tasks.

Speaker Recognition

Memory-augmented Neural Machine Translation

no code implementations • EMNLP 2017 • Yang Feng, Shiyue Zhang, Andi Zhang, Dong Wang, Andrew Abel

Neural machine translation (NMT) has achieved notable success in recent years; however, it is also widely recognized that this approach struggles to handle infrequent words and word pairs.

Machine Translation • NMT +1

Flexible and Creative Chinese Poetry Generation Using Neural Memory

no code implementations • ACL 2017 • Jiyuan Zhang, Yang Feng, Dong Wang, Yang Wang, Andrew Abel, Shiyue Zhang, Andi Zhang

It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism.

Phonetic Temporal Neural Model for Language Identification

no code implementations • 9 May 2017 • Zhiyuan Tang, Dong Wang, Yixiang Chen, Lantian Li, Andrew Abel

Deep neural models, particularly the LSTM-RNN model, have shown great potential for language identification (LID).

Language Identification

Collaborative Learning for Language and Speaker Recognition

no code implementations • 27 Sep 2016 • Lantian Li, Zhiyuan Tang, Dong Wang, Andrew Abel, Yang Feng, Shiyue Zhang

This paper presents a unified model that performs language and speaker recognition simultaneously.

Speaker Recognition
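As a rough sketch of the joint-modelling idea described above (one shared representation feeding both a language classifier and a speaker classifier), the toy function below applies two linear-softmax heads to a single shared feature vector. It illustrates the general multi-task setup only; the weights, names, and head shapes are hypothetical, not the paper's actual architecture.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def multitask_predict(features, lang_weights, spk_weights):
    """Toy collaborative setup: one shared feature vector is scored by
    two separate heads, one for language ID and one for speaker ID.
    Hypothetical sketch, not the model proposed in the paper."""
    lang_logits = [sum(w * f for w, f in zip(row, features)) for row in lang_weights]
    spk_logits = [sum(w * f for w, f in zip(row, features)) for row in spk_weights]
    return softmax(lang_logits), softmax(spk_logits)
```

Each head returns a proper distribution over its own label set, while both heads share the same input representation, which is the basic structure a unified language-and-speaker model builds on.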

Probabilistic Belief Embedding for Knowledge Base Completion

no code implementations • 10 May 2015 • Miao Fan, Qiang Zhou, Andrew Abel, Thomas Fang Zheng, Ralph Grishman

This paper contributes a novel embedding model which measures the probability of each belief $\langle h, r, t, m\rangle$ in a large-scale knowledge repository via simultaneously learning distributed representations for entities ($h$ and $t$), relations ($r$), and the words in relation mentions ($m$).

Knowledge Base Completion • Relation
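To make the belief notation above concrete, the toy function below scores a triple ⟨h, r, t⟩ with a translation-style distance and squashes the score to a probability with a sigmoid. This is a generic sketch in the spirit of embedding-based knowledge base scoring: it omits the relation-mention words m entirely, and the random initialisation, dimensionality, and names are all hypothetical rather than the paper's model.

```python
import math
import random

random.seed(0)
DIM = 8  # hypothetical embedding dimensionality

def embed(vocab):
    """Assign each symbol a small random distributed representation
    (stand-in for learned embeddings; illustrative only)."""
    return {w: [random.uniform(-0.5, 0.5) for _ in range(DIM)] for w in vocab}

def belief_probability(h, r, t, entity_emb, relation_emb):
    """Toy translation-based score mapped to [0, 1] with a sigmoid.

    Sketches the general idea of measuring the probability of a
    belief from distributed representations; the paper's model also
    conditions on relation-mention words, omitted here.
    """
    score = -sum((entity_emb[h][i] + relation_emb[r][i] - entity_emb[t][i]) ** 2
                 for i in range(DIM))
    return 1.0 / (1.0 + math.exp(-score))
```

A well-fit triple would have h + r close to t, driving the distance toward zero and the probability toward its maximum under the sigmoid.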
