1 code implementation • ACL 2019 • Gonçalo M. Correia, André F. T. Martins
Automatic post-editing (APE) seeks to automatically refine the output of a black-box machine translation (MT) system through human post-edits.
no code implementations • TACL 2017 • André F. T. Martins, Marcin Junczys-Dowmunt, Fabio N. Kepler, Ramón Astudillo, Chris Hokamp, Roman Grundkiewicz
Translation quality estimation is a task of growing importance in NLP, due to its potential to substantially reduce human post-editing effort.
no code implementations • EMNLP 2017 • André F. T. Martins, Julia Kreutzer
Our models compare favourably to BiLSTM taggers on three sequence tagging tasks.
no code implementations • WS 2018 • Ben Peters, Vlad Niculae, André F. T. Martins
Neural network methods are experiencing wide adoption in NLP, thanks to their empirical performance on many tasks.
no code implementations • WS 2018 • Lucia Specia, Frédéric Blain, Varvara Logacheva, Ramón Astudillo, André F. T. Martins
We report the results of the WMT18 shared task on Quality Estimation, i.e., the task of predicting the quality of the output of machine translation systems at various granularity levels: word, phrase, sentence and document.
no code implementations • LREC 2014 • Miguel B. Almeida, Mariana S. C. Almeida, André F. T. Martins, Helena Figueira, Pedro Mendes, Cláudia Pinto
In this paper, we introduce the Priberam Compressive Summarization Corpus, a new multi-document summarization corpus for European Portuguese.
no code implementations • ACL 2019 • André F. T. Martins, Tsvetomila Mihaylova, Nikita Nangia, Vlad Niculae
Latent structure models are a powerful tool for modeling compositional data, discovering linguistic structure, and building NLP pipelines.
no code implementations • WS 2019 • Ben Peters, André F. T. Martins
This paper presents the Instituto de Telecomunicações–Instituto Superior Técnico submission to Task 1 of the SIGMORPHON 2019 Shared Task.
no code implementations • WS 2019 • Erick Fonseca, Lisa Yankovskaya, André F. T. Martins, Mark Fishel, Christian Federmann
We report the results of the WMT19 shared task on Quality Estimation, i.e., the task of predicting the quality of the output of machine translation systems given just the source text and the hypothesis translations.
no code implementations • ACL 2020 • Erick Fonseca, André F. T. Martins
Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster while still achieving better accuracy than non-neural parsers.
no code implementations • WS 2020 • Ben Peters, André F. T. Martins
For both tasks, we present multilingual models, training jointly on data in all languages.