no code implementations • 24 Dec 2018 • Cong Duy Vu Hoang, Ioan Calapodescu, Marc Dymetman
In previous work, neural sequence models have been shown to improve significantly when external prior knowledge is provided, for instance by allowing the model to access the embeddings of explicit features during both training and inference.
no code implementations • ALTA 2018 • Cong Duy Vu Hoang, Gholamreza Haffari, Trevor Cohn
In this work, we investigate whether side information is helpful in neural machine translation (NMT).
no code implementations • EMNLP 2017 • Cong Duy Vu Hoang, Gholamreza Haffari, Trevor Cohn
We propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation.
no code implementations • 11 Jan 2017 • Cong Duy Vu Hoang, Gholamreza Haffari, Trevor Cohn
We propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation.
no code implementations • NAACL 2016 • Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, Gholamreza Haffari
Neural encoder-decoder models of machine translation have achieved impressive results, rivalling traditional translation models.