Search Results for author: André F. T. Martins

Found 25 papers, 1 paper with code

Revisiting Higher-Order Dependency Parsers

no code implementations ACL 2020 Erick Fonseca, André F. T. Martins

Neural encoders have allowed dependency parsers to shift from higher-order structured models to simpler first-order ones, making decoding faster and still achieving better accuracy than non-neural parsers.

One-Size-Fits-All Multilingual Models

no code implementations WS 2020 Ben Peters, André F. T. Martins

For both tasks, we present multilingual models, training jointly on data in all languages.

IT–IST at the SIGMORPHON 2019 Shared Task: Sparse Two-headed Models for Inflection

no code implementations WS 2019 Ben Peters, André F. T. Martins

This paper presents the Instituto de Telecomunicações–Instituto Superior Técnico submission to Task 1 of the SIGMORPHON 2019 Shared Task.

Findings of the WMT 2019 Shared Tasks on Quality Estimation

no code implementations WS 2019 Erick Fonseca, Lisa Yankovskaya, André F. T. Martins, Mark Fishel, Christian Federmann

We report the results of the WMT19 shared task on Quality Estimation, i.e. the task of predicting the quality of the output of machine translation systems given just the source text and the hypothesis translations.

Machine Translation

Latent Structure Models for Natural Language Processing

no code implementations ACL 2019 André F. T. Martins, Tsvetomila Mihaylova, Nikita Nangia, Vlad Niculae

Latent structure models are a powerful tool for modeling compositional data, discovering linguistic structure, and building NLP pipelines.

Language Modelling Machine Translation +3

A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning

1 code implementation ACL 2019 Gonçalo M. Correia, André F. T. Martins

Automatic post-editing (APE) seeks to automatically refine the output of a black-box machine translation (MT) system through human post-edits.

Automatic Post-Editing Transfer Learning

Interpretable Structure Induction via Sparse Attention

no code implementations WS 2018 Ben Peters, Vlad Niculae, André F. T. Martins

Neural network methods are experiencing wide adoption in NLP, thanks to their empirical performance on many tasks.

Findings of the WMT 2018 Shared Task on Quality Estimation

no code implementations WS 2018 Lucia Specia, Frédéric Blain, Varvara Logacheva, Ramón Astudillo, André F. T. Martins

We report the results of the WMT18 shared task on Quality Estimation, i.e. the task of predicting the quality of the output of machine translation systems at various granularity levels: word, phrase, sentence and document.

Machine Translation

Pushing the Limits of Translation Quality Estimation

no code implementations TACL 2017 André F. T. Martins, Marcin Junczys-Dowmunt, Fabio N. Kepler, Ramón Astudillo, Chris Hokamp, Roman Grundkiewicz

Translation quality estimation is a task of growing importance in NLP, due to its potential to reduce post-editing human effort in disruptive ways.

Automatic Post-Editing
