1 code implementation • 2 Nov 2022 • Anna Currey, Maria Nădejde, Raghavendra Pappagari, Mia Mayer, Stanislas Lauly, Xing Niu, Benjamin Hsu, Georgiana Dinu
As generic machine translation (MT) quality has improved, the need for targeted benchmarks that explore fine-grained aspects of quality has increased.
no code implementations • 19 Oct 2022 • Suvodeep Majumder, Stanislas Lauly, Maria Nadejde, Marcello Federico, Georgiana Dinu
This paper addresses the task of contextual translation using multi-segment models.
no code implementations • WS 2020 • Georgiana Dinu, Prashant Mathur, Marcello Federico, Stanislas Lauly, Yaser Al-Onaizan
A variety of natural language tasks require processing of textual data which contains a mix of natural language and formal languages such as mathematical expressions.
no code implementations • WS 2017 • Sebastien Jean, Stanislas Lauly, Orhan Firat, Kyunghyun Cho
In this paper we present our systems for the DiscoMT 2017 cross-lingual pronoun prediction shared task.
no code implementations • 17 Apr 2017 • Sebastien Jean, Stanislas Lauly, Orhan Firat, Kyunghyun Cho
We propose a neural machine translation architecture that models the surrounding text in addition to the source sentence.
no code implementations • 18 Mar 2016 • Stanislas Lauly, Yin Zheng, Alexandre Allauzen, Hugo Larochelle
We present an approach based on feed-forward neural networks for learning the distribution of textual documents.
no code implementations • NeurIPS 2014 • Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha
Cross-language learning allows us to use training data from one language to build models for a different language.
no code implementations • 8 Jan 2014 • Stanislas Lauly, Alex Boulanger, Hugo Larochelle
Recent work on learning multilingual word representations usually relies on word-level alignments (e.g., inferred with the help of GIZA++) between translated sentences in order to align the word embeddings in different languages.
no code implementations • NeurIPS 2012 • Hugo Larochelle, Stanislas Lauly
We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents.