Search Results for author: Mathias Müller

Found 19 papers, 15 papers with code

Recursive Learning of Asymptotic Variational Objectives

no code implementations • 4 Nov 2024 • Alessandro Mastrototaro, Mathias Müller, Jimmy Olsson

General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.

State Space Models • Variational Inference
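As a minimal illustration of the model class discussed above, and not of the recursive variational method the paper proposes, the sketch below samples from a linear-Gaussian state-space model; the transition and noise parameters are arbitrary assumptions.

```python
# Sampling from a linear-Gaussian state-space model (illustrative only,
# not the recursive variational objective proposed in the paper).
import numpy as np

def sample_lgssm(T=50, a=0.9, q=0.1, c=1.0, r=0.5, seed=0):
    """x_t = a * x_{t-1} + N(0, q),   y_t = c * x_t + N(0, r)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    x[0] = rng.normal(0.0, np.sqrt(q))
    for t in range(1, T):
        x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
    y = c * x + rng.normal(0.0, np.sqrt(r), size=T)  # noisy observations
    return x, y

states, observations = sample_lgssm()
print(states[:3], observations[:3])
```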

SignCLIP: Connecting Text and Sign Language by Contrastive Learning

1 code implementation • 1 Jul 2024 • Zifan Jiang, Gerard Sant, Amit Moryossef, Mathias Müller, Rico Sennrich, Sarah Ebling

We present SignCLIP, which re-purposes CLIP (Contrastive Language-Image Pretraining) to project spoken language text and sign language videos, two classes of natural languages of distinct modalities, into the same space.

Contrastive Learning • Sign Language Recognition • +2
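The description above says SignCLIP projects text and sign language video into a shared embedding space via contrastive learning. Below is a rough sketch of the standard CLIP-style symmetric contrastive loss over a batch of paired embeddings; the encoders, embedding size and temperature are illustrative assumptions, not SignCLIP's actual configuration.

```python
# CLIP-style symmetric contrastive loss over paired text/video embeddings
# (generic sketch; SignCLIP's encoders and hyperparameters will differ).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(text_emb, video_emb, temperature=0.07):
    # Normalize both modalities so dot products are cosine similarities.
    text_emb = F.normalize(text_emb, dim=-1)
    video_emb = F.normalize(video_emb, dim=-1)
    logits = text_emb @ video_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(logits.size(0))            # matching pairs on the diagonal
    loss_t2v = F.cross_entropy(logits, targets)       # text -> video
    loss_v2t = F.cross_entropy(logits.t(), targets)   # video -> text
    return (loss_t2v + loss_v2t) / 2

# Toy usage with random tensors standing in for encoder outputs.
text = torch.randn(8, 512)
video = torch.randn(8, 512)
print(clip_contrastive_loss(text, video).item())
```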

JWSign: A Highly Multilingual Corpus of Bible Translations for more Diversity in Sign Language Processing

1 code implementation • 16 Nov 2023 • Shester Gueuwou, Sophie Siake, Colin Leong, Mathias Müller

Advancements in sign language processing have been hindered by a lack of sufficient data, impeding progress in recognition, translation, and production tasks.

Diversity • Machine Translation • +2

pose-format: Library for Viewing, Augmenting, and Handling .pose Files

1 code implementation • 13 Oct 2023 • Amit Moryossef, Mathias Müller, Rebecka Fahrni

The library includes a specialized file format that encapsulates various types of pose data, accommodating multiple individuals and an indefinite number of time frames, thus proving its utility for both image and video data.

Benchmarking • Management
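A hedged usage sketch for the pose-format library follows; the reading pattern (Pose.read on raw bytes) and attribute names such as body.data reflect my understanding of the library and should be verified against its README, and the file path is a placeholder.

```python
# Reading a .pose file (sketch; Pose.read and the body.data layout reflect
# my understanding of the pose-format library and should be checked
# against its README; "example.pose" is a placeholder path).
from pose_format import Pose

with open("example.pose", "rb") as f:
    pose = Pose.read(f.read())

# The body is expected to hold a (frames, people, points, dims) array,
# which is how the format accommodates multiple people over many frames.
print(pose.body.data.shape)
print(pose.body.fps)
```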

Voting Booklet Bias: Stance Detection in Swiss Federal Communication

1 code implementation • 15 Jun 2023 • Eric Egli, Noah Mamié, Eyal Liron Dolev, Mathias Müller

We then use our best model to analyze the stance of utterances extracted from the Swiss federal voting booklet concerning the Swiss popular votes of September 2022, the main goal of this project.

Stance Detection
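Stance detection here means classifying whether an utterance argues for or against a proposal. As a generic illustration rather than the project's fine-tuned models, the sketch below uses a zero-shot NLI classifier from Hugging Face Transformers; the model name and label set are assumptions made for the example.

```python
# Zero-shot stance classification sketch (not the fine-tuned models used
# in the paper; model name and label set are illustrative assumptions).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

utterance = "The proposed reform would place an unfair burden on small farms."
result = classifier(utterance, candidate_labels=["in favour", "against", "neutral"])
print(result["labels"][0], round(result["scores"][0], 3))
```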

SLTUNET: A Simple Unified Model for Sign Language Translation

1 code implementation • International Conference on Learning Representations (ICLR) 2023 • Biao Zhang, Mathias Müller, Rico Sennrich

We propose SLTUNET, a simple unified neural model designed to support multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and sign-to-text translation.

Machine Translation • Sign Language Translation • +1
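One common way to let a single sequence-to-sequence model serve several related tasks is to prepend a task tag to every source sequence, as sketched below; this is a generic illustration of unified multi-task training, not necessarily SLTUNET's exact conditioning mechanism.

```python
# Prepending a task tag to the source is one generic way to train a single
# model on several related tasks (not necessarily SLTUNET's exact mechanism).
def tag_example(task, source, target):
    """Encode the task as a source-side prefix token."""
    return f"<{task}> {source}", target

mixed_batch = [
    tag_example("sign2gloss", "SIGN-VIDEO-FEATURES", "HELLO HOW YOU"),
    tag_example("gloss2text", "HELLO HOW YOU", "hello, how are you?"),
    tag_example("sign2text", "SIGN-VIDEO-FEATURES", "hello, how are you?"),
]
for src, tgt in mixed_batch:
    print(src, "->", tgt)
```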

Machine Translation between Spoken Languages and Signed Languages Represented in SignWriting

1 code implementation • 11 Oct 2022 • Zifan Jiang, Amit Moryossef, Mathias Müller, Sarah Ebling

This paper presents work on novel machine translation (MT) systems between spoken and signed languages, where signed languages are represented in SignWriting, a sign language writing system.

Machine Translation • Sign Language Translation • +1

Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation

1 code implementation • ACL 2021 • Mathias Müller, Rico Sennrich

Neural Machine Translation (NMT) currently exhibits biases such as producing translations that are too short and overgenerating frequent words, and shows poor robustness to copy noise in training data or domain shift.

Machine Translation • NMT • +1
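Minimum Bayes risk decoding, the subject of the paper above, selects from a pool of sampled translations the candidate with the highest expected utility against the others. The sketch below illustrates the selection step with a toy unigram-F1 utility standing in for the MT metrics used in practice; it is not the authors' implementation.

```python
# Minimum Bayes risk decoding over sampled candidates (illustrative sketch;
# the paper scores NMT samples with established MT metrics as the utility).
from collections import Counter

def unigram_f1(hyp, ref):
    """Toy utility: unigram F1 between two token lists."""
    h, r = Counter(hyp), Counter(ref)
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def mbr_decode(candidates):
    """Return the index of the candidate with highest average utility."""
    tokens = [c.split() for c in candidates]
    def expected_utility(i):
        others = [t for j, t in enumerate(tokens) if j != i]
        return sum(unigram_f1(tokens[i], o) for o in others) / len(others)
    return max(range(len(candidates)), key=expected_utility)

samples = ["the cat sat on the mat", "a cat sat on a mat", "the dog barked"]
print(samples[mbr_decode(samples)])
```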

Subword Segmentation and a Single Bridge Language Affect Zero-Shot Neural Machine Translation

1 code implementation • WMT (EMNLP) 2020 • Annette Rios, Mathias Müller, Rico Sennrich

A recent trend in multilingual models is not to train on parallel data between all language pairs, but to have a single bridge language, e.g. English.

Machine Translation • Segmentation • +2

Backtesting the predictability of COVID-19

1 code implementation • 22 Jul 2020 • Dmitry Gordeev, Philipp Singer, Marios Michailidis, Mathias Müller, SriSatish Ambati

Our work studies the predictive performance of models at various stages of the pandemic to better understand their fundamental uncertainty and the impact of data availability on such forecasts.
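Backtesting in this context means repeatedly evaluating forecasts issued at successive cutoff dates against what actually happened later. The sketch below shows that loop with a naive persistence forecast as a placeholder for the pandemic models the paper studies; the series, horizon and history length are toy assumptions.

```python
# Generic backtest loop over successive cutoff dates (illustrative only;
# the paper evaluates pandemic forecasting models, not this naive baseline).
def backtest(series, horizon=7, min_history=14):
    """At each cutoff, forecast `horizon` steps with a persistence model
    (repeat the last observed value) and record the mean absolute error."""
    errors = []
    for cutoff in range(min_history, len(series) - horizon):
        forecast = [series[cutoff - 1]] * horizon
        actual = series[cutoff:cutoff + horizon]
        mae = sum(abs(f - a) for f, a in zip(forecast, actual)) / horizon
        errors.append((cutoff, mae))
    return errors

daily_cases = [i * i for i in range(40)]      # toy, fast-growing series
for cutoff, mae in backtest(daily_cases)[:3]:
    print(cutoff, round(mae, 1))
```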

Domain Robustness in Neural Machine Translation

2 code implementations • AMTA 2020 • Mathias Müller, Annette Rios, Rico Sennrich

Domain robustness, the generalization of models to unseen test domains, is low for both statistical (SMT) and neural machine translation (NMT).

Machine Translation • NMT • +1

A Large-Scale Test Set for the Evaluation of Context-Aware Pronoun Translation in Neural Machine Translation

1 code implementation • WS 2018 • Mathias Müller, Annette Rios, Elena Voita, Rico Sennrich

We show that, while gains in BLEU are moderate for those systems, they outperform baselines by a large margin in terms of accuracy on our contrastive test set.

Machine Translation • Sentence • +1
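A contrastive test set of this kind is scored by checking whether the model assigns a higher score to the correct translation than to each minimally edited contrastive variant. The sketch below computes that accuracy; score_translation is a hypothetical stand-in for an NMT model's (length-normalized) log-probability scorer.

```python
# Accuracy on a contrastive test set: an example counts as correct if the
# model scores the reference translation above every contrastive variant.
# score_translation is a hypothetical stand-in for an NMT model's scorer.
def contrastive_accuracy(examples, score_translation):
    correct = 0
    for source, reference, variants in examples:
        ref_score = score_translation(source, reference)
        if all(ref_score > score_translation(source, bad) for bad in variants):
            correct += 1
    return correct / len(examples)

# Toy usage with a dummy scorer that happens to prefer the pronoun "sie".
examples = [("src sentence", "sie ist da", ["er ist da"])]
dummy = lambda src, hyp: 1.0 if "sie" in hyp else 0.0
print(contrastive_accuracy(examples, dummy))
```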
