Search Results for author: Mathias Müller

Found 14 papers, 10 papers with code

Voting Booklet Bias: Stance Detection in Swiss Federal Communication

1 code implementation 15 Jun 2023 Eric Egli, Noah Mamié, Eyal Liron Dolev, Mathias Müller

We then use our best model to analyze the stance of utterances extracted from the Swiss federal voting booklet concerning the Swiss popular votes of September 2022, which is the main goal of this project.

Stance Detection

SLTUNET: A Simple Unified Model for Sign Language Translation

1 code implementation International Conference on Learning Representations (ICLR) 2023 Biao Zhang, Mathias Müller, Rico Sennrich

We propose SLTUNET, a simple unified neural model designed to support multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and sign-to-text translation.

Machine Translation Sign Language Translation +1

Machine Translation between Spoken Languages and Signed Languages Represented in SignWriting

1 code implementation 11 Oct 2022 Zifan Jiang, Amit Moryossef, Mathias Müller, Sarah Ebling

This paper presents work on novel machine translation (MT) systems between spoken and signed languages, where signed languages are represented in SignWriting, a sign language writing system.

Machine Translation Sign Language Translation +1

Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation

1 code implementation ACL 2021 Mathias Müller, Rico Sennrich

Neural Machine Translation (NMT) currently exhibits biases such as producing translations that are too short and overgenerating frequent words, and shows poor robustness to copy noise in training data or domain shift.

Machine Translation NMT +1

Subword Segmentation and a Single Bridge Language Affect Zero-Shot Neural Machine Translation

1 code implementation WMT (EMNLP) 2020 Annette Rios, Mathias Müller, Rico Sennrich

A recent trend in multilingual models is not to train on parallel data between all language pairs, but to use a single bridge language, e.g. English.

Machine Translation TAG +1
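The bridge-language setup can be illustrated with a toy pivot: every translation is routed through English rather than going directly between the two languages. The word-level lookup tables below are invented for illustration; real systems use trained multilingual models.

```python
# Toy pivot translation with English as the single bridge language:
# German -> English -> French, instead of a direct German -> French model.

de_to_en = {"hund": "dog", "katze": "cat"}
en_to_fr = {"dog": "chien", "cat": "chat"}

def pivot_translate(word: str) -> str:
    """Translate a German word into French via the English bridge."""
    return en_to_fr[de_to_en[word]]

print(pivot_translate("hund"))  # -> "chien"
```

The paper's point is that design choices at this bridge (such as subword segmentation) propagate to every zero-shot pair that pivots through it.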

Backtesting the predictability of COVID-19

1 code implementation 22 Jul 2020 Dmitry Gordeev, Philipp Singer, Marios Michailidis, Mathias Müller, SriSatish Ambati

Our work studies the predictive performance of models at various stages of the pandemic to better understand their fundamental uncertainty and the impact of data availability on such forecasts.
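Backtesting a forecaster amounts to a walk-forward loop: at each cutoff, fit on the data seen so far, predict the next point, and record the error. The persistence baseline and case counts below are illustrative only, not from the paper.

```python
# Minimal walk-forward backtest of a naive "last value carried forward"
# forecaster on a toy daily case-count series.

cases = [10, 12, 15, 20, 26, 33, 41, 50]

def naive_forecast(history: list[int]) -> int:
    """Persistence baseline: predict the most recent observation."""
    return history[-1]

def backtest(series: list[int], start: int = 3) -> float:
    """At each cutoff t, forecast series[t] from series[:t] and
    return the mean absolute error over all cutoffs."""
    errors = [
        abs(naive_forecast(series[:t]) - series[t])
        for t in range(start, len(series))
    ]
    return sum(errors) / len(errors)

print(backtest(cases))  # -> 7.0
```

Evaluating at successive cutoffs is what lets such a study separate a model's fundamental uncertainty from the effect of how much data was available at forecast time.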

Domain Robustness in Neural Machine Translation

2 code implementations AMTA 2020 Mathias Müller, Annette Rios, Rico Sennrich

Domain robustness (the generalization of models to unseen test domains) is low for both statistical (SMT) and neural machine translation (NMT).

Machine Translation NMT +1

A Large-Scale Test Set for the Evaluation of Context-Aware Pronoun Translation in Neural Machine Translation

1 code implementation WS 2018 Mathias Müller, Annette Rios, Elena Voita, Rico Sennrich

We show that, while gains in BLEU are moderate for those systems, they outperform baselines by a large margin in terms of accuracy on our contrastive test set.

Machine Translation Translation
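Contrastive accuracy of the kind this test set measures can be sketched as follows: the model scores a reference translation against a minimally different variant with the wrong pronoun, and an example counts as solved when the reference scores higher. The scoring stub, its made-up numbers, and the sentences are invented for illustration; a real evaluation would use an NMT model's log-probabilities.

```python
# Minimal sketch of contrastive evaluation for pronoun translation.
# A stub lookup stands in for a real NMT model's scores (numbers are
# made up for this example).

stub_scores = {
    ("Schau dir den Wagen an. Er ist neu.", "Look at the car. It is new."): -1.1,
    ("Schau dir den Wagen an. Er ist neu.", "Look at the car. He is new."): -2.7,
}

def model_score(src: str, tgt: str) -> float:
    """Stand-in for a model's log-probability of tgt given src."""
    return stub_scores[(src, tgt)]

def contrastive_accuracy(examples) -> float:
    """examples: (source, reference, contrastive_variant) triples.
    Fraction of triples where the reference outscores the variant."""
    solved = sum(
        model_score(src, ref) > model_score(src, wrong)
        for src, ref, wrong in examples
    )
    return solved / len(examples)

examples = [(
    "Schau dir den Wagen an. Er ist neu.",
    "Look at the car. It is new.",
    "Look at the car. He is new.",
)]
print(contrastive_accuracy(examples))  # -> 1.0
```

Because each pair differs only in the pronoun, this accuracy isolates coreference handling in a way that corpus-level BLEU cannot, which is why the paper reports large contrastive gains alongside moderate BLEU gains.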
