1 code implementation • VarDial (COLING) 2022 • Noëmi Aepli, Antonios Anastasopoulos, Adrian-Gabriel Chifu, William Domingues, Fahim Faisal, Mihaela Gaman, Radu Tudor Ionescu, Yves Scherrer
This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2022.
no code implementations • 30 Apr 2024 • Zachary William Hopton, Noëmi Aepli
In this study, we fine-tuned a multilingual model with data from several Occitan dialects and conducted a series of experiments to assess the model's representations of these dialects.
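A minimal sketch of what fine-tuning a multilingual encoder on dialectal text can look like; the model name, file path, and hyperparameters here are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative only: continued masked-language-model training of a multilingual
# encoder on Occitan dialect text. "xlm-roberta-base" and "occitan_dialects.txt"
# are placeholders, not the resources used in the paper.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # hypothetical multilingual base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Plain-text dialect data, one sentence per line (placeholder file name).
dataset = load_dataset("text", data_files={"train": "occitan_dialects.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="occitan-mlm",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```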
no code implementations • 30 Apr 2024 • Eyal Liron Dolev, Clemens Fidel Lutz, Noëmi Aepli
Whisper is a state-of-the-art automatic speech recognition (ASR) model (Radford et al., 2022).
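For readers unfamiliar with Whisper, this is a minimal usage sketch of the openly released model; the audio file name and model size are placeholders and do not reflect the paper's experimental pipeline.

```python
# Minimal sketch: transcribe an audio file with OpenAI's Whisper
# (https://github.com/openai/whisper). File name and model size are placeholders.
import whisper

model = whisper.load_model("small")          # pick a model size
result = model.transcribe("recording.wav")   # path to a local audio file
print(result["text"])
```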
1 code implementation • 28 Mar 2024 • Manu Narayanan, Noëmi Aepli
Furthermore, we use this dataset to evaluate the English-Tulu machine translation model we develop.
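As a hedged sketch of how translation output might be scored against such a reference set, the snippet below computes corpus-level BLEU and chrF with sacrebleu; the file names are placeholders, not the paper's released data files.

```python
# Illustrative evaluation of MT hypotheses against references with sacrebleu.
# "hypotheses.tulu" and "references.tulu" are placeholder file names.
import sacrebleu

with open("hypotheses.tulu", encoding="utf-8") as f:
    hypotheses = [line.strip() for line in f]
with open("references.tulu", encoding="utf-8") as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU: {bleu.score:.1f}  chrF: {chrf.score:.1f}")
```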
1 code implementation • 25 Jan 2024 • Jannis Vamvas, Noëmi Aepli, Rico Sennrich
Creating neural text encoders for written Swiss German is challenging due to a dearth of training data combined with dialectal variation.
1 code implementation • 28 Nov 2023 • Noëmi Aepli, Chantal Amrhein, Florian Schottmann, Rico Sennrich
For sensible progress in natural language processing, it is important that we are aware of the limitations of the evaluation metrics we use.
no code implementations • 31 May 2023 • Noëmi Aepli, Çağrı Çöltekin, Rob van der Goot, Tommi Jauhiainen, Mourhaf Kazzaz, Nikola Ljubešić, Kai North, Barbara Plank, Yves Scherrer, Marcos Zampieri
This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2023.
no code implementations • Findings (ACL) 2022 • Noëmi Aepli, Rico Sennrich
Cross-lingual transfer between a high-resource language and its dialects or closely related language varieties should be facilitated by their similarity.
1 code implementation • NAACL 2021 • Annette Rios, Chantal Amrhein, Noëmi Aepli, Rico Sennrich
Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining.
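To make the notion of monotonic alignment concrete, here is a toy sketch (not the paper's method) that scores how monotonic a cross-attention matrix is: it checks whether the most-attended source position moves forward as the decoder advances.

```python
# Toy monotonicity check for a cross-attention matrix; purely illustrative.
import numpy as np

def monotonicity(attention: np.ndarray) -> float:
    """attention: (target_len, source_len) matrix of attention weights."""
    peaks = attention.argmax(axis=1)        # most-attended source index per target step
    steps = np.diff(peaks)
    return float((steps >= 0).mean()) if len(steps) else 1.0

# Example: a near-diagonal (monotonic) alignment scores close to 1.0.
attn = np.eye(5) + 0.05 * np.random.rand(5, 5)
attn /= attn.sum(axis=1, keepdims=True)
print(monotonicity(attn))
```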