Search Results for author: Antonio Valerio Miceli Barone

Found 16 papers, 6 papers with code

The University of Edinburgh’s English-Tamil and English-Inuktitut Submissions to the WMT20 News Translation Task

no code implementations • WMT (EMNLP) 2020 • Rachel Bawden, Alexandra Birch, Radina Dobreva, Arturo Oncevay, Antonio Valerio Miceli Barone, Philip Williams

We describe the University of Edinburgh’s submissions to the WMT20 news translation shared task for the low-resource language pair English-Tamil and the mid-resource language pair English-Inuktitut.

Language Modelling Machine Translation +1

Regularization techniques for fine-tuning in neural machine translation

no code implementations • EMNLP 2017 • Antonio Valerio Miceli Barone, Barry Haddow, Ulrich Germann, Rico Sennrich

We investigate techniques for supervised domain adaptation for neural machine translation where an existing model trained on a large out-of-domain dataset is adapted to a small in-domain dataset.

Domain Adaptation L2 Regularization +4
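
As a concrete illustration of one family of techniques the paper investigates, here is a minimal PyTorch sketch of L2 regularization toward the out-of-domain weights during in-domain fine-tuning. All names and the penalty strength are illustrative assumptions, not the authors' implementation.

```python
import torch

def l2_to_pretrained(model, pretrained, strength=1e-4):
    """L2 penalty on the distance between the fine-tuned parameters
    and their out-of-domain starting point, rather than the usual
    weight decay toward zero."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        penalty = penalty + ((param - pretrained[name]) ** 2).sum()
    return strength * penalty

# Snapshot the out-of-domain weights once, before fine-tuning begins:
# pretrained = {n: p.detach().clone() for n, p in model.named_parameters()}
# ...then add l2_to_pretrained(model, pretrained) to the in-domain loss.
```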

A parallel corpus of Python functions and documentation strings for automated code documentation and code generation

6 code implementations • IJCNLP 2017 • Antonio Valerio Miceli Barone, Rico Sennrich

Automated documentation of programming source code and automated code generation from natural language are challenging tasks of both practical and scientific interest.

Code Generation Data Augmentation +2
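
The function/docstring alignment such a corpus is built on can be illustrated with Python's standard ast module. This is a simplified sketch (it needs Python 3.9+ for ast.unparse, and it leaves the docstring inside the extracted function source, a step a real extraction pipeline would handle separately).

```python
import ast

def function_docstring_pairs(source: str):
    """Collect (function source, docstring) pairs from one Python file;
    this is the kind of parallel data the corpus aligns."""
    pairs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc is not None:
                pairs.append((ast.unparse(node), doc))
    return pairs

example = 'def add(a, b):\n    """Return the sum of a and b."""\n    return a + b\n'
print(function_docstring_pairs(example))
```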

Towards cross-lingual distributed representations without parallel text trained with adversarial autoencoders

1 code implementation • WS 2016 • Antonio Valerio Miceli Barone

Current approaches to learning vector representations of text that are compatible between different languages usually require some amount of parallel text, aligned at word, sentence or at least document level.

Sentence
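
A minimal PyTorch sketch of the adversarial-autoencoder idea, with purely illustrative module sizes and names: an autoencoder reconstructs monolingual embeddings through a shared code space while a critic tries to tell the two languages apart, and the encoder is rewarded for fooling it. The alternating optimizer updates of real adversarial training are omitted, and nothing here reflects the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_dim, z_dim = 300, 128  # illustrative sizes
encoder = nn.Linear(emb_dim, z_dim)  # shared encoder for both languages
decoder = nn.Linear(z_dim, emb_dim)  # reconstructs the monolingual input
critic = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def step(x, is_lang2):
    """x: a batch of word embeddings; is_lang2: 0/1 language labels."""
    z = encoder(x)
    recon_loss = F.mse_loss(decoder(z), x)  # preserve information
    logits = critic(z).squeeze(-1)
    critic_loss = F.binary_cross_entropy_with_logits(logits, is_lang2)
    adv_loss = -critic_loss  # the encoder tries to fool the critic
    return recon_loss, critic_loss, adv_loss

x = torch.randn(8, emb_dim)
labels = torch.randint(0, 2, (8,)).float()
print(step(x, labels))
```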

Low-rank passthrough neural networks

4 code implementations • WS 2018 • Antonio Valerio Miceli Barone

Various common deep learning architectures, such as LSTMs, GRUs, ResNets and Highway Networks, employ state passthrough connections that support training with high feed-forward depth or recurrence over many time steps.

Language Modelling Permuted-MNIST
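
A minimal PyTorch sketch of the low-rank parameterization at the heart of the approach: a d x d map stored as a rank-r factorization so the full matrix is never materialized. Dimensions, initialization, and the class name are illustrative assumptions; the paper also considers a low-rank-plus-diagonal variant.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """A d x d map parameterized as the rank-r product U @ V,
    using 2*d*r parameters instead of d*d."""
    def __init__(self, d: int, r: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d, r) * d ** -0.5)
        self.V = nn.Parameter(torch.randn(r, d) * r ** -0.5)

    def forward(self, x):
        # Apply the factors in sequence; the d x d matrix is never built.
        return (x @ self.U) @ self.V

print(LowRankLinear(256, 16)(torch.randn(4, 256)).shape)  # torch.Size([4, 256])
```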
