no code implementations • MTSummit 2021 • Alexandra Birch, Barry Haddow, Antonio Valerio Miceli Barone, Jindrich Helcl, Jonas Waldendorf, Felipe Sánchez Martínez, Mikel Forcada, Víctor Sánchez Cartagena, Juan Antonio Pérez-Ortiz, Miquel Esplà-Gomis, Wilker Aziz, Lina Murady, Sevi Sariisik, Peggy van der Kreeft, Kay Macquarrie
We find that starting from an existing large model pre-trained on 50 languages leads to far better BLEU scores than pre-training a smaller model on a single high-resource language pair.
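A minimal sketch of that recipe, assuming the 50-language pre-trained model is mBART-50 as distributed through Hugging Face Transformers (the checkpoint name, language codes and the text_target tokenizer argument are assumptions, not details from the paper):

```python
# Fine-tune a large multilingual pre-trained model on a low-resource language pair.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

checkpoint = "facebook/mbart-large-50"  # assumed 50-language pre-trained model
tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint, src_lang="en_XX", tgt_lang="ps_AF")
model = MBartForConditionalGeneration.from_pretrained(checkpoint)

batch = tokenizer(["An English sentence."], text_target=["A target-language sentence."],
                  return_tensors="pt")
loss = model(**batch).loss  # minimise this cross-entropy on the low-resource parallel data
loss.backward()
```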
1 code implementation • WMT (EMNLP) 2021 • Pinzhen Chen, Jindřich Helcl, Ulrich Germann, Laurie Burchell, Nikolay Bogoychev, Antonio Valerio Miceli Barone, Jonas Waldendorf, Alexandra Birch, Kenneth Heafield
This paper presents the University of Edinburgh’s constrained submissions of English-German and English-Hausa systems to the WMT 2021 shared task on news translation.
no code implementations • WMT (EMNLP) 2020 • Rachel Bawden, Alexandra Birch, Radina Dobreva, Arturo Oncevay, Antonio Valerio Miceli Barone, Philip Williams
We describe the University of Edinburgh’s submissions to the WMT20 news translation shared task for the low resource language pair English-Tamil and the mid-resource language pair English-Inuktitut.
no code implementations • CL (ACL) 2022 • Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, Alexandra Birch
We present a survey covering the state of the art in low-resource machine translation research.
no code implementations • WS 2019 • Alexandra Birch, Barry Haddow, Ivan Tito, Antonio Valerio Miceli Barone, Rachel Bawden, Felipe Sánchez-Martínez, Mikel L. Forcada, Miquel Esplà-Gomis, Víctor Sánchez-Cartagena, Juan Antonio Pérez-Ortiz, Wilker Aziz, Andrew Secker, Peggy van der Kreeft
no code implementations • WS 2019 • Rachel Bawden, Nikolay Bogoychev, Ulrich Germann, Roman Grundkiewicz, Faheem Kirefu, Antonio Valerio Miceli Barone, Alexandra Birch
For all translation directions, we created or used back-translations of monolingual data in the target language as additional synthetic training data.
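The back-translation step can be summarised with a short schematic sketch; `reverse_model` and `translate` below are hypothetical stand-ins for whatever target-to-source system is available:

```python
# Build synthetic (source, target) pairs from target-language monolingual data.
def build_backtranslated_corpus(target_monolingual, reverse_model, translate):
    synthetic_pairs = []
    for tgt_sentence in target_monolingual:
        synthetic_src = translate(reverse_model, tgt_sentence)  # target -> source back-translation
        synthetic_pairs.append((synthetic_src, tgt_sentence))   # pair synthetic source with real target
    return synthetic_pairs  # mixed with genuine parallel data when training the forward model
```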
no code implementations • WS 2018 • Barry Haddow, Nikolay Bogoychev, Denis Emelin, Ulrich Germann, Roman Grundkiewicz, Kenneth Heafield, Antonio Valerio Miceli Barone, Rico Sennrich
The University of Edinburgh made submissions to all 14 language pairs in the news translation task, with strong performances in most pairs.
no code implementations • LREC 2018 • Maximiliana Behnke, Antonio Valerio Miceli Barone, Rico Sennrich, Vilelmini Sosoni, Thanasis Naskos, Eirini Takoulidou, Maria Stasimioti, Menno van Zaanen, Sheila Castilho, Federico Gaspari, Panayota Georgakopoulou, Valia Kordoni, Markus Egg, Katia Lida Kermanidis
no code implementations • WS 2017 • Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, Antonio Valerio Miceli Barone, Philip Williams
This paper describes the University of Edinburgh's submissions to the WMT17 shared news translation and biomedical translation tasks.
no code implementations • EMNLP 2017 • Antonio Valerio Miceli Barone, Barry Haddow, Ulrich Germann, Rico Sennrich
We investigate techniques for supervised domain adaptation for neural machine translation where an existing model trained on a large out-of-domain dataset is adapted to a small in-domain dataset.
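One regulariser of the kind this line of work examines can be sketched as an L2 penalty that keeps the adapted parameters close to the out-of-domain model; the function and hyper-parameter names below are illustrative assumptions rather than the paper's exact formulation:

```python
import torch

def finetune_step(model, batch, loss_fn, optimizer, reference_params, lam=1e-4):
    """One fine-tuning step on in-domain data with an L2 pull toward the out-of-domain weights."""
    optimizer.zero_grad()
    nmt_loss = loss_fn(model, batch)  # standard NMT cross-entropy on the small in-domain set
    drift = sum(((p - p0) ** 2).sum() for p, p0 in zip(model.parameters(), reference_params))
    (nmt_loss + lam * drift).backward()
    optimizer.step()

# reference_params = [p.detach().clone() for p in out_of_domain_model.parameters()]
```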
3 code implementations • WS 2017 • Antonio Valerio Miceli Barone, Jindřich Helcl, Rico Sennrich, Barry Haddow, Alexandra Birch
It has been shown that increasing model depth improves the quality of neural machine translation.
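A rough PyTorch sketch of one way depth is added in practice (not the paper's exact configuration): recurrent layers are stacked with residual connections so that the deeper encoder remains trainable.

```python
import torch.nn as nn

class DeepRecurrentEncoder(nn.Module):
    """Stack of GRU layers with residual connections; input is assumed already embedded."""
    def __init__(self, hidden_size, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.GRU(hidden_size, hidden_size, batch_first=True) for _ in range(num_layers)
        )

    def forward(self, x):                # x: (batch, seq_len, hidden_size)
        for gru in self.layers:
            out, _ = gru(x)
            x = x + out                  # residual connection around each recurrent layer
        return x
```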
6 code implementations • IJCNLP 2017 • Antonio Valerio Miceli Barone, Rico Sennrich
Automated documentation of programming source code and automated code generation from natural language are challenging tasks of both practical and scientific interest.
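As a small illustration of how such code-and-documentation pairs can be mined (not necessarily the paper's extraction pipeline), Python's standard ast module can pair each function with its docstring:

```python
import ast

def extract_pairs(source_code):
    """Return (function source, docstring) pairs; ast.unparse requires Python 3.9+."""
    pairs = []
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node):
            pairs.append((ast.unparse(node), ast.get_docstring(node)))
    return pairs
```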
4 code implementations • EACL 2017 • Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Valerio Miceli Barone, Jozef Mokry, Maria Nădejde
We present Nematus, a toolkit for Neural Machine Translation.
1 code implementation • WS 2016 • Antonio Valerio Miceli Barone
Current approaches to learning vector representations of text that are compatible between different languages usually require some amount of parallel text, aligned at word, sentence or at least document level.
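A compact sketch of the adversarial ingredient used to avoid parallel text (the paper's full model is an adversarial autoencoder; the module shapes and names below are illustrative assumptions): a discriminator tries to guess which language a vector comes from, while a mapping is trained to make the two languages indistinguishable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 300
mapper = nn.Linear(dim, dim, bias=False)  # maps source-language vectors into the target space
discriminator = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

def discriminator_loss(src_vecs, tgt_vecs):
    logits = torch.cat([discriminator(mapper(src_vecs).detach()), discriminator(tgt_vecs)])
    labels = torch.cat([torch.zeros(len(src_vecs), 1), torch.ones(len(tgt_vecs), 1)])
    return F.binary_cross_entropy_with_logits(logits, labels)

def mapper_loss(src_vecs):
    # The mapper is rewarded when mapped source vectors are mistaken for target-language ones.
    return F.binary_cross_entropy_with_logits(discriminator(mapper(src_vecs)),
                                               torch.ones(len(src_vecs), 1))
```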
4 code implementations • WS 2018 • Antonio Valerio Miceli Barone
Various common deep learning architectures, such as LSTMs, GRUs, ResNets and Highway Networks, employ state passthrough connections that support training with high feed-forward depth or recurrence over many time steps.
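A brief sketch of the kind of state passthrough connection referred to above, here in the highway-network form (illustrative only, not the paper's specific low-rank parameterisation):

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))    # how much of the state to update
        h = torch.tanh(self.transform(x))  # candidate new state
        return t * h + (1.0 - t) * x       # the remainder passes through unchanged
```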