Search Results for author: Baskaran Sankaran

Found 12 papers, 0 papers with code

Attention-based Vocabulary Selection for NMT Decoding

no code implementations · 12 Jun 2017 · Baskaran Sankaran, Markus Freitag, Yaser Al-Onaizan

Usually, the candidate lists combine the output of an external word-to-word aligner, phrase-table entries, and the most frequent words.
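The candidate-list construction described above can be sketched as a simple set union. This is a minimal illustration, not the paper's implementation; the table names (`align_table`, `phrase_table`, `top_frequent`) and their contents are hypothetical:

```python
def build_candidate_list(source_tokens, align_table, phrase_table, top_frequent):
    # Union of: globally frequent target words, per-source-word translations
    # from a word aligner, and per-source-word phrase-table entries.
    # All table names and contents here are illustrative assumptions.
    candidates = set(top_frequent)
    for tok in source_tokens:
        candidates |= set(align_table.get(tok, []))
        candidates |= set(phrase_table.get(tok, []))
    return sorted(candidates)


# Toy example: translating the French word "chat".
align_table = {"chat": ["cat"]}
phrase_table = {"chat": ["kitty"]}
top_frequent = ["the"]
shortlist = build_candidate_list(["chat"], align_table, phrase_table, top_frequent)
```

Decoding then restricts the output softmax to `shortlist` instead of the full vocabulary, which is the main source of the speedup such vocabulary-selection methods target.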

Machine Translation NMT +2

Ensemble Distillation for Neural Machine Translation

no code implementations · 6 Feb 2017 · Markus Freitag, Yaser Al-Onaizan, Baskaran Sankaran

Knowledge distillation describes a method for training a student network to perform better by learning from a stronger teacher network.
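The teacher–student setup described above is commonly trained with a word-level distillation loss: the student's output distribution is pushed toward the teacher's soft targets at each position. A minimal NumPy sketch, assuming both networks expose per-position vocabulary logits (the function names and the temperature parameter are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over the last (vocabulary) axis.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def word_level_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # Cross-entropy of the student's distribution against the teacher's
    # soft targets, averaged over target positions.
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature) + 1e-12)
    return float(-(teacher_probs * student_log_probs).sum(axis=-1).mean())
```

By Gibbs' inequality this loss is minimized exactly when the student's distribution matches the teacher's, which is what makes it a training signal for imitating the stronger network.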

Knowledge Distillation Machine Translation +3

Temporal Attention Model for Neural Machine Translation

no code implementations · 9 Aug 2016 · Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, Abe Ittycheriah

Attention-based Neural Machine Translation (NMT) models suffer from attention deficiency issues, as observed in recent research.

Machine Translation NMT +2

Zero-Resource Translation with Multi-Lingual Neural Machine Translation

no code implementations · EMNLP 2016 · Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, Kyunghyun Cho

In this paper, we propose a novel fine-tuning algorithm for the recently introduced multi-way, multilingual neural machine translation model that enables zero-resource machine translation.

Machine Translation Translation

Coverage Embedding Models for Neural Machine Translation

no code implementations · EMNLP 2016 · Haitao Mi, Baskaran Sankaran, Zhiguo Wang, Abe Ittycheriah

In this paper, we enhance the attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate issues of repeating and dropping translations in NMT.
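The repetition and dropping problems mentioned above are typically diagnosed by tracking how much attention mass each source position has accumulated during decoding. The sketch below uses a simplified additive coverage vector in the spirit of coverage models generally, not the paper's learned embedding formulation; the toy attention trace is invented for illustration:

```python
import numpy as np

def update_coverage(coverage, attention):
    # Accumulate attention mass per source position. A well-translated
    # source word should end near 1.0; far above signals repetition,
    # far below signals a dropped translation.
    return coverage + attention

# Hypothetical attention weights over 3 source words at 3 decoding steps.
steps = [np.array([0.9, 0.1, 0.0]),
         np.array([0.1, 0.8, 0.1]),
         np.array([0.0, 0.7, 0.3])]  # word 1 attended twice: repetition risk
coverage = np.zeros(3)
for att in steps:
    coverage = update_coverage(coverage, att)
# coverage is [1.0, 1.6, 0.4]: position 1 over-covered, position 2 under-covered
```

Coverage embedding models carry this bookkeeping as a learned per-source-word vector that is fed back into the attention computation, so the decoder is discouraged from revisiting covered words and nudged toward uncovered ones.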

Machine Translation NMT +1
