no code implementations • NAACL (AmericasNLP) 2021 • Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Ximena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Giménez-Lugo, Ricardo Ramos, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager-Hois, Vishrav Chaudhary, Graham Neubig, Ngoc Thang Vu, Katharina Kann
This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas.
1 code implementation • RANLP 2021 • Nicolas Spring, Annette Rios, Sarah Ebling
We report on experiments in automatic text simplification (ATS) for German with multiple simplification levels along the Common European Framework of Reference for Languages (CEFR), simplifying standard German into levels A1, A2 and B1.
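One common way to realize level-specific simplification with a single sequence-to-sequence model is to prepend a CEFR control token to the input. The sketch below illustrates that idea; the checkpoint name and token format are hypothetical assumptions, not the authors' released setup.

```python
# Hypothetical sketch: conditioning a seq2seq simplification model on a target
# CEFR level via a control token. Checkpoint name and token format are
# illustrative only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "your-org/german-simplification"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def simplify(sentence: str, level: str) -> str:
    # Prepend the desired CEFR level (A1, A2, or B1) as a control token.
    inputs = tokenizer(f"<{level}> {sentence}", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(simplify("Die Inbetriebnahme erfolgt nach Abschluss der Prüfung.", "A2"))
```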
1 code implementation • EMNLP (newsum) 2021 • Annette Rios, Nicolas Spring, Tannon Kew, Marek Kostrzewa, Andreas Säuberli, Mathias Müller, Sarah Ebling
The task of document-level text simplification is closely related to summarization, with the additional difficulty of reducing complexity.
1 code implementation • 6 Mar 2024 • Laura Mascarell, Ribin Chalumattu, Annette Rios
The advent of Large Language Models (LLMs) has led to remarkable progress on a wide range of natural language processing tasks.
no code implementations • 28 Nov 2022 • Mathias Müller, Zifan Jiang, Amit Moryossef, Annette Rios, Sarah Ebling
Automatic sign language processing is gaining popularity in Natural Language Processing (NLP) research (Yin et al., 2021).
no code implementations • 20 Apr 2021 • Amit Moryossef, Ioannis Tsochantaridis, Joe Dinn, Necati Cihan Camgöz, Richard Bowden, Tao Jiang, Annette Rios, Mathias Müller, Sarah Ebling
Skeletal representations generalize over an individual's appearance and background, allowing us to focus on the recognition of motion.
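To make the idea concrete, the following sketch reduces a video frame to pose keypoints with MediaPipe Pose, one popular pose estimator (not necessarily the toolchain used in the paper):

```python
# Minimal sketch: converting a video frame to a skeletal (pose keypoint)
# representation with MediaPipe Pose. Appearance and background are discarded;
# only normalized joint coordinates remain.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=True)

frame = cv2.imread("frame.png")  # placeholder path to a video frame (BGR)
results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    # Each landmark is a normalized (x, y, z) coordinate with a visibility score.
    keypoints = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
    print(f"{len(keypoints)} keypoints extracted")
```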
1 code implementation • ACL 2022 • Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Meza-Ruiz, Gustavo A. Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando Coto-Solano, Ngoc Thang Vu, Katharina Kann
Continued pretraining offers improvements, with an average accuracy of 44.05%.
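A minimal sketch of the continued-pretraining technique referenced here: further masked-language-model training of XLM-R on unlabeled target-language text with Hugging Face transformers. The file path and hyperparameters are illustrative, not the paper's configuration.

```python
# Hedged sketch of continued MLM pretraining of XLM-R on target-language text.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# "target_lang.txt" is a placeholder for monolingual text in the new language.
dataset = load_dataset("text", data_files={"train": "target_lang.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-continued", num_train_epochs=1),
    train_dataset=tokenized,
    # Standard 15% token masking for the MLM objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```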
1 code implementation • NAACL 2021 • Annette Rios, Chantal Amrhein, Noëmi Aepli, Rico Sennrich
Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has facilitated or enforced learning of monotonic attention behavior via specialized attention functions or pretraining.
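One simple way to quantify monotonicity, shown below purely for illustration (it is not a metric from the paper), is to check how often the most-attended source position moves forward as decoding proceeds:

```python
# Illustrative monotonicity check over a cross-attention matrix: the fraction
# of decoding steps at which the attention peak does not move backward.
import numpy as np

def monotonicity(attn: np.ndarray) -> float:
    """attn: (target_len, source_len) attention weights; rows sum to 1."""
    peaks = attn.argmax(axis=1)        # most-attended source index per target step
    steps = np.diff(peaks)
    return float((steps >= 0).mean())  # share of non-decreasing steps

rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(10), size=12)  # random (12, 10) attention matrix
print(f"monotonicity = {monotonicity(attn):.2f}")
```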
no code implementations • 22 Mar 2021 • Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, Mofetoluwa Adeyemi
With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, web-mined text datasets covering hundreds of languages.
1 code implementation • WMT (EMNLP) 2020 • Annette Rios, Mathias Müller, Rico Sennrich
A recent trend in multilingual models is not to train on parallel data between all language pairs, but to rely on a single bridge language, e.g., English.
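The sketch below illustrates the bridging idea with two separate bilingual OPUS-MT models chained through English; note this is only an illustration of pivoting, whereas the paper studies a single multilingual model trained with English as the bridge.

```python
# Sketch of pivot ("bridge") translation through English: instead of a direct
# de->fr system, chain de->en and en->fr. OPUS-MT checkpoints for illustration.
from transformers import pipeline

de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
en_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

def pivot_translate(text: str) -> str:
    english = de_en(text)[0]["translation_text"]  # bridge through English
    return en_fr(english)[0]["translation_text"]

print(pivot_translate("Das Wetter ist heute schön."))
```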
2 code implementations • AMTA 2020 • Mathias Müller, Annette Rios, Rico Sennrich
Domain robustness, i.e., the generalization of models to unseen test domains, is low for both statistical (SMT) and neural machine translation (NMT).
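Domain robustness can be probed by scoring the same system on in-domain and unseen out-of-domain test sets. A minimal sketch with sacreBLEU follows; file names are placeholders, and the domain list mirrors the multi-domain German-English setup common in this line of work.

```python
# Hedged sketch: per-domain BLEU as a probe of domain robustness.
from sacrebleu.metrics import BLEU

bleu = BLEU()

def corpus_bleu(hyp_path: str, ref_path: str) -> float:
    hyps = open(hyp_path, encoding="utf-8").read().splitlines()
    refs = open(ref_path, encoding="utf-8").read().splitlines()
    return bleu.corpus_score(hyps, [refs]).score

# Placeholder files: one hypothesis/reference pair per domain.
for domain in ["medical", "law", "it", "koran", "subtitles"]:
    print(domain, corpus_bleu(f"hyp.{domain}.txt", f"ref.{domain}.txt"))
```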
1 code implementation • WS 2018 • Mathias Müller, Annette Rios, Elena Voita, Rico Sennrich
We show that, while gains in BLEU are moderate for those systems, they outperform baselines by a large margin in terms of accuracy on our contrastive test set.
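Contrastive evaluation compares the probability a model assigns to a correct translation against a minimally different incorrect variant (e.g., a wrong pronoun); the model passes an example if it prefers the correct one. A hedged sketch follows, using an illustrative OPUS-MT checkpoint rather than the systems evaluated in the paper.

```python
# Sketch of contrastive evaluation: does the model assign higher probability
# to the correct translation than to the contrastive variant?
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-de"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

def score(source: str, target: str) -> float:
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=target, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss  # mean NLL per target token
    return -loss.item() * labels.size(1)            # total log-probability

src = "The cat is hungry because it has not eaten."
correct = "Die Katze ist hungrig, weil sie nichts gefressen hat."
contrastive = "Die Katze ist hungrig, weil er nichts gefressen hat."
print(score(src, correct) > score(src, contrastive))  # True if correct preferred
```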
no code implementations • WS 2018 • Annette Rios, Mathias Müller, Rico Sennrich
We evaluate all German–English submissions to the WMT'18 shared translation task, plus a number of submissions from previous years, and find that performance on the task has markedly improved compared to the 2016 WMT submissions (81% → 93% accuracy on the WSD task).
1 code implementation • EMNLP 2018 • Gongbo Tang, Mathias Müller, Annette Rios, Rico Sennrich
Recently, non-recurrent architectures (convolutional, self-attentional) have outperformed RNNs in neural machine translation.
no code implementations • LREC 2012 • Annette Rios, Anne Göhring
This paper describes the process of constructing a trilingual parallel treebank.