Search Results for author: António V. Lopes

Found 4 papers, 1 paper with code

Unbabel's Submission to the WMT2019 APE Shared Task: BERT-based Encoder-Decoder for Automatic Post-Editing

no code implementations • WS 2019 • António V. Lopes, M. Amin Farajian, Gonçalo M. Correia, Jonay Trenous, André F. T. Martins

Analogously to dual-encoder architectures, we develop a BERT-based encoder-decoder (BED) model in which a single pretrained BERT encoder receives both the source (src) and machine translation (tgt) strings.
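The snippet above describes feeding both strings to one encoder. A minimal sketch of how a single BERT-style encoder can receive src and tgt in one input, assuming BERT's standard sentence-pair packing scheme ([CLS] src [SEP] tgt [SEP] with segment ids 0/1); this is an illustration, not the authors' code:

```python
# Sketch (assumption, not the paper's implementation): pack the source and
# the machine translation into one BERT-style input sequence so a single
# pretrained encoder sees both, using segment ids to mark src vs tgt.

def pack_src_tgt(src_tokens, tgt_tokens):
    """Return (tokens, segment_ids) for a joint src+tgt encoder input."""
    tokens = ["[CLS]"] + src_tokens + ["[SEP]"] + tgt_tokens + ["[SEP]"]
    # Segment 0 covers [CLS] + src + first [SEP]; segment 1 covers tgt + final [SEP].
    segment_ids = [0] * (len(src_tokens) + 2) + [1] * (len(tgt_tokens) + 1)
    return tokens, segment_ids

tokens, segs = pack_src_tgt(["the", "cat"], ["die", "Katze"])
print(tokens)  # ['[CLS]', 'the', 'cat', '[SEP]', 'die', 'Katze', '[SEP]']
print(segs)    # [0, 0, 0, 0, 1, 1, 1]
```

The packed sequence would then be mapped to token ids and passed through the pretrained encoder once, rather than through two separate encoders as in dual-encoder setups.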

Tasks: Automatic Post-Editing • Decoder • +2

One Wide Feedforward is All You Need

no code implementations • 4 Sep 2023 • Telmo Pessoa Pires, António V. Lopes, Yannick Assogba, Hendra Setiawan

The Transformer architecture has two main non-embedding components: Attention and the Feed Forward Network (FFN).
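The distinction above is that attention mixes information across positions while the FFN acts on each position independently with shared weights. A tiny pure-Python sketch of that position-wise property (illustrative only; weight names, sizes, and the ReLU activation are assumptions, and a real "wide" FFN has a much larger hidden dimension):

```python
# Sketch (assumption, not the paper's code): a position-wise feed-forward
# network. The same weights are applied to every position's vector
# independently; no information moves between positions here (that is the
# attention component's job).

def ffn(x, w1, w2):
    """Apply a two-layer ReLU FFN to a single position's vector x."""
    hidden = [max(0.0, sum(xi * w for xi, w in zip(x, col))) for col in w1]
    return [sum(hi * w for hi, w in zip(hidden, col)) for col in w2]

def apply_ffn(sequence, w1, w2):
    # One set of weights, reused at every position (and, in a shared-FFN
    # setting, the same weights could also be reused across layers).
    return [ffn(x, w1, w2) for x in sequence]

seq = [[1.0, 0.0], [0.0, 1.0]]          # two positions, model dim 2
w1 = [[1.0, -1.0], [0.5, 0.5]]          # hidden dim 2 (toy size)
w2 = [[1.0, 0.0], [0.0, 1.0]]
print(apply_ffn(seq, w1, w2))
```

Because each position is transformed in isolation, permuting the input positions simply permutes the outputs, which is why the FFN needs attention (and positional information) alongside it.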

Tasks: Decoder • Position
