Search Results for author: José G. C. de Souza

Found 8 papers, 4 papers with code

Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning

no code implementations • 20 Oct 2023 • Duarte M. Alves, Nuno M. Guerreiro, João Alves, José Pombal, Ricardo Rei, José G. C. de Souza, Pierre Colombo, André F. T. Martins

Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.

Machine Translation • Translation

An Empirical Study of Translation Hypothesis Ensembling with Large Language Models

1 code implementation • 17 Oct 2023 • António Farinhas, José G. C. de Souza, André F. T. Martins

Large language models (LLMs) are becoming a one-fits-many solution, but they sometimes hallucinate or produce unreliable output.

Machine Translation • Translation
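
The snippet above motivates hypothesis ensembling but does not describe a mechanism. Purely as illustrative context, one generic ensembling strategy is consensus (MBR-style) selection over candidate translations; the sketch below assumes a placeholder pairwise similarity function and is not necessarily the method studied in this paper.

from typing import Callable, List

def consensus_select(hypotheses: List[str], similarity: Callable[[str, str], float]) -> str:
    # Consensus / MBR-style selection: pick the hypothesis with the highest
    # average similarity to the whole candidate pool.
    def avg_sim(hyp: str) -> float:
        return sum(similarity(hyp, other) for other in hypotheses) / len(hypotheses)
    return max(hypotheses, key=avg_sim)

# Toy usage with token-overlap similarity (illustration only, not a real metric):
def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

print(consensus_select(["the cat sat", "a cat sat", "dogs run"], token_overlap))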

Scaling up COMETKIWI: Unbabel-IST 2023 Submission for the Quality Estimation Shared Task

1 code implementation • 21 Sep 2023 • Ricardo Rei, Nuno M. Guerreiro, José Pombal, Daan van Stigt, Marcos Treviso, Luisa Coheur, José G. C. de Souza, André F. T. Martins

Our team participated in all tasks: sentence- and word-level quality prediction (task 1) and fine-grained error span detection (task 2).

Quality-Aware Decoding for Neural Machine Translation

1 code implementation • NAACL 2022 • Patrick Fernandes, António Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, André F. T. Martins

Despite the progress in machine translation quality estimation and evaluation in recent years, decoding in neural machine translation (NMT) remains largely oblivious to these advances: it centers on finding the most probable translation according to the model (MAP decoding), approximated with beam search.

Machine Translation • NMT • +1
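
For context on the contrast drawn above between MAP/beam-search decoding and quality-aware decoding, the sketch below shows one simple quality-aware variant: reranking an N-best list with a reference-free quality-estimation scorer. The qe_score callable is a placeholder assumption (standing in for a COMET-QE-style model), not this paper's actual implementation.

from typing import Callable, List

def qe_rerank(source: str, hypotheses: List[str], qe_score: Callable[[str, str], float]) -> str:
    # Standard MAP decoding keeps the hypothesis with the highest model
    # probability; here the final choice is delegated to a quality estimator.
    return max(hypotheses, key=lambda hyp: qe_score(source, hyp))

# Toy usage with a dummy scorer that prefers longer outputs (illustration only):
best = qe_rerank("ein Satz", ["a sentence", "one sentence here"], lambda src, hyp: len(hyp))
print(best)  # -> "one sentence here"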
