Search Results for author: André F. T. Martins

Found 25 papers, 1 paper with code

A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning

1 code implementation • ACL 2019 • Gonçalo M. Correia, André F. T. Martins

Automatic post-editing (APE) seeks to automatically refine the output of a black-box machine translation (MT) system through human post-edits.

Tasks: Automatic Post-Editing, Transfer Learning (+1)

Interpretable Structure Induction via Sparse Attention

no code implementations • WS 2018 • Ben Peters, Vlad Niculae, André F. T. Martins

Neural network methods are experiencing wide adoption in NLP, thanks to their empirical performance on many tasks.
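The sparse attention in this paper's title builds on the sparsemax transformation (Martins & Astudillo, 2016), which projects a score vector onto the probability simplex and, unlike softmax, can assign exact zeros — the source of the interpretable, sparse structures. A minimal pure-Python sketch of sparsemax (an illustration of the general technique, not the paper's own code):

```python
def sparsemax(z):
    """Sparsemax: Euclidean projection of scores z onto the probability
    simplex. Returns a distribution that may contain exact zeros."""
    z_sorted = sorted(z, reverse=True)
    total, k_z, cumsum = 0.0, 0, 0.0
    # Find the support size k(z): largest k with 1 + k * z_(k) > sum of top-k.
    for k, z_k in enumerate(z_sorted, start=1):
        total += z_k
        if 1 + k * z_k > total:
            k_z, cumsum = k, total
    tau = (cumsum - 1.0) / k_z  # threshold shared by the support
    return [max(z_i - tau, 0.0) for z_i in z]
```

For a peaked input such as `[2.0, 1.0, 0.1]`, sparsemax puts all mass on the first coordinate, whereas softmax would spread small probabilities everywhere.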

Findings of the WMT 2018 Shared Task on Quality Estimation

no code implementations • WS 2018 • Lucia Specia, Frédéric Blain, Varvara Logacheva, Ramón Astudillo, André F. T. Martins

We report the results of the WMT18 shared task on Quality Estimation, i.e., the task of predicting the quality of the output of machine translation systems at various granularity levels: word, phrase, sentence, and document.

Tasks: Machine Translation, Sentence (+1)

Latent Structure Models for Natural Language Processing

no code implementations • ACL 2019 • André F. T. Martins, Tsvetomila Mihaylova, Nikita Nangia, Vlad Niculae

Latent structure models are a powerful tool for modeling compositional data, discovering linguistic structure, and building NLP pipelines.

Tasks: Language Modelling, Machine Translation (+4)

IT–IST at the SIGMORPHON 2019 Shared Task: Sparse Two-headed Models for Inflection

no code implementations • WS 2019 • Ben Peters, André F. T. Martins

This paper presents the Instituto de Telecomunicações–Instituto Superior Técnico submission to Task 1 of the SIGMORPHON 2019 Shared Task.

Tasks: LEMMA

Findings of the WMT 2019 Shared Tasks on Quality Estimation

no code implementations • WS 2019 • Erick Fonseca, Lisa Yankovskaya, André F. T. Martins, Mark Fishel, Christian Federmann

We report the results of the WMT19 shared task on Quality Estimation, i.e., the task of predicting the quality of the output of machine translation systems given just the source text and the hypothesis translations.

Tasks: Machine Translation, Sentence (+1)

Revisiting Higher-Order Dependency Parsers

no code implementations • ACL 2020 • Erick Fonseca, André F. T. Martins

Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster and still achieving better accuracy than non-neural parsers.

Tasks: Sentence

One-Size-Fits-All Multilingual Models

no code implementations • WS 2020 • Ben Peters, André F. T. Martins

For both tasks, we present multilingual models, training jointly on data in all languages.

Tasks: LEMMA
