no code implementations • NAACL (SIGMORPHON) 2022 • Ben Peters, André F. T. Martins
This paper presents DeepSPIN’s submissions to the SIGMORPHON 2022 Shared Task on Morpheme Segmentation.
Ranked #1 on Morpheme Segmentation on UniMorph 4.0
no code implementations • 6 Mar 2024 • Ben Peters, André F. T. Martins
Neural machine translation (MT) models achieve strong results across a variety of settings, but it is widely believed that they are highly sensitive to "noisy" inputs, such as spelling errors, abbreviations, and other formatting issues.
1 code implementation • 27 Feb 2024 • Duarte M. Alves, José Pombal, Nuno M. Guerreiro, Pedro H. Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, Pierre Colombo, José G. C. de Souza, André F. T. Martins
While general-purpose large language models (LLMs) demonstrate proficiency on multiple tasks within the domain of translation, approaches based on open LLMs are competitive only when specializing on a single task.
1 code implementation • NAACL 2021 • Ben Peters, André F. T. Martins
Current sequence-to-sequence models are trained to minimize cross-entropy and use softmax to compute the locally normalized probabilities over target sequences.
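To make the training setup described above concrete, the snippet below is a minimal NumPy sketch (not the paper's model or code) of how per-step logits are locally normalized with softmax and summed into a sequence-level cross-entropy loss over the gold target tokens:

```python
import numpy as np

def log_softmax(logits):
    # Locally normalize scores over the vocabulary at one decoding step
    # (subtracting the max first for numerical stability).
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def sequence_cross_entropy(step_logits, target_ids):
    # Cross-entropy of a target sequence: the sum of the per-step
    # negative log-probabilities assigned to the gold tokens.
    return -sum(log_softmax(l)[t] for l, t in zip(step_logits, target_ids))

# Toy example: a vocabulary of 4 symbols and a 3-token target sequence.
logits = np.array([[2.0, 0.5, -1.0, 0.0],
                   [0.0, 3.0, 0.0, -2.0],
                   [1.0, 1.0, 1.0, 1.0]])
target = [0, 1, 2]
loss = sequence_cross_entropy(logits, target)
```

A useful sanity check: a step with uniform logits contributes exactly log(vocab_size) to the loss, since softmax assigns probability 1/4 to every symbol.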
no code implementations • WS 2020 • Ben Peters, André F. T. Martins
For both tasks, we present multilingual models, training jointly on data in all languages.
no code implementations • WS 2019 • Ben Peters, André F. T. Martins
This paper presents the Instituto de Telecomunicações–Instituto Superior Técnico submission to Task 1 of the SIGMORPHON 2019 Shared Task.
1 code implementation • ACL 2019 • Ben Peters, Vlad Niculae, André F. T. Martins
Sequence-to-sequence models are a powerful workhorse of NLP.
no code implementations • WS 2018 • Ben Peters, Vlad Niculae, André F. T. Martins
Neural network methods are experiencing wide adoption in NLP, thanks to their empirical performance on many tasks.
1 code implementation • WS 2017 • Ben Peters, Jon Dehdari, Josef van Genabith
Grapheme-to-phoneme conversion (g2p) is necessary for text-to-speech and automatic speech recognition systems.
Automatic Speech Recognition (ASR) +1