no code implementations • EMNLP (IWSLT) 2019 • Philipp Wiesenbach, Stefan Riezler
Machine translation of ancient languages faces a low-resource problem, caused by the limited amount of available textual source data and their translations.
no code implementations • EMNLP 2021 • Nathaniel Berger, Stefan Riezler, Sebastian Ebert, Artem Sokolov
Recently, more attention has been given to adversarial attacks on neural networks for natural language processing (NLP).
1 code implementation • 6 Nov 2023 • Michael Hagmann, Shigehiko Schamoni, Stefan Riezler
We demonstrate a validity problem of machine learning in the vital application area of disease diagnosis in medicine.
1 code implementation • 30 Aug 2023 • Michael Staniek, Raphael Schumann, Maike Züfle, Stefan Riezler
We present Text-to-OverpassQL, a task designed to facilitate a natural language interface for querying geodata from OpenStreetMap (OSM).
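To make the task concrete, here is a minimal sketch of the kind of mapping Text-to-OverpassQL targets: a natural-language request paired with a hand-written Overpass QL query. The helper function and the example question are hypothetical illustrations, not the paper's system or dataset.

```python
# Illustrative sketch: a natural-language question about OSM geodata and a
# hand-written Overpass QL query of the kind the task maps it to.
def overpass_query(key: str, value: str, area_name: str) -> str:
    """Build a simple Overpass QL query for nodes with a given tag in a named area."""
    return (
        f'area[name="{area_name}"]->.a;\n'
        f'node["{key}"="{value}"](area.a);\n'
        f'out;'
    )

# e.g. "Where are cafes in Heidelberg?" might correspond to:
query = overpass_query("amenity", "cafe", "Heidelberg")
```

A trained Text-to-OverpassQL model would produce such a query string directly from the natural-language input rather than via a hand-written template.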
no code implementations • 17 Jul 2023 • Nathaniel Berger, Miriam Exel, Matthias Huck, Stefan Riezler
Supervised learning in Neural Machine Translation (NMT) typically follows a teacher forcing paradigm where reference tokens constitute the conditioning context in the model's prediction, instead of its own previous predictions.
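The distinction between the two conditioning regimes can be sketched with a toy decoder, where a stand-in prediction function is conditioned either on reference tokens (teacher forcing) or on the model's own previous outputs. The `next_token` function and all token ids below are made up for illustration.

```python
# Toy contrast between teacher forcing and free-running decoding.
# `next_token` is a hypothetical stand-in for an NMT model's prediction.
def next_token(context):
    # Toy "model": a deterministic function of the conditioning context.
    return (sum(context) + 1) % 5

reference = [1, 2, 3, 0]  # hypothetical reference token ids

# Teacher forcing: each prediction is conditioned on reference prefixes.
teacher_forced = [next_token(reference[:i]) for i in range(len(reference))]

# Free-running: each prediction is conditioned on the model's own outputs.
free_running = []
for _ in range(len(reference)):
    free_running.append(next_token(free_running))
```

Because the model's own predictions need not match the reference, the two conditioning contexts drift apart during decoding; in this toy example the sequences diverge at the final step, which is the gap (exposure bias) that motivates training on the model's own predictions.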
1 code implementation • 17 Jul 2023 • Rebekka Hubert, Artem Sokolov, Stefan Riezler
We present an imitation learning approach where a teacher NMT system corrects the errors of an AST student without relying on manual transcripts.
1 code implementation • 12 Jul 2023 • Raphael Schumann, Wanrong Zhu, Weixi Feng, Tsu-Jui Fu, Stefan Riezler, William Yang Wang
In this work, we propose VELMA, an embodied LLM agent that uses a verbalization of the trajectory and of visual environment observations as contextual prompt for the next action.
no code implementations • 8 Feb 2023 • Michael Hagmann, Philipp Meier, Stefan Riezler
Instead of removing noise, we propose to incorporate several sources of variance, including their interaction with data properties, into an analysis of significance and reliability of machine learning evaluation, with the aim to draw inferences beyond particular instances of trained models.
no code implementations • 27 Oct 2022 • Tsz Kin Lam, Shigehiko Schamoni, Stefan Riezler
Data augmentation is a technique to generate new training data based on existing data.
Automatic Speech Recognition (ASR) +2
2 code implementations • 5 Oct 2022 • Mayumi Ohta, Julia Kreutzer, Stefan Riezler
JoeyS2T is a JoeyNMT extension for speech-to-text tasks such as automatic speech recognition and end-to-end speech translation.
Automatic Speech Recognition (ASR) +5
1 code implementation • 1 Sep 2022 • Shigehiko Schamoni, Michael Hagmann, Stefan Riezler
Ensembling neural networks is a long-standing technique for reducing the generalization error of neural networks by combining networks with orthogonal properties via a committee decision.
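A committee decision can be sketched in a few lines: average the members' class-probability estimates and pick the argmax. The member outputs below are made-up toy numbers, not results from the paper.

```python
# Minimal committee-decision sketch: average per-model class probabilities
# and return the class with the highest averaged probability.
def committee_decision(member_probs):
    """member_probs: list of per-model probability vectors over classes."""
    n_classes = len(member_probs[0])
    avg = [sum(p[c] for p in member_probs) / len(member_probs)
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three hypothetical committee members: the first prefers class 0, but the
# averaged probabilities favor class 1.
probs = [[0.6, 0.4], [0.3, 0.7], [0.2, 0.8]]
decision = committee_decision(probs)
```

The benefit of such a committee grows when the members make uncorrelated ("orthogonal") errors, which is why the paper's focus on combining networks with orthogonal properties matters.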
1 code implementation • ACL 2022 • Raphael Schumann, Stefan Riezler
Vision and language navigation (VLN) is a challenging visually-grounded language understanding task.
Ranked #1 on Vision and Language Navigation on map2seq
no code implementations • ACL 2022 • Tsz Kin Lam, Shigehiko Schamoni, Stefan Riezler
End-to-end speech translation relies on data that pair source-language speech inputs with corresponding translations into a target language.
no code implementations • 16 Sep 2021 • Nathaniel Berger, Stefan Riezler, Artem Sokolov, Sebastian Ebert
Recently, more attention has been given to adversarial attacks on neural networks for natural language processing (NLP).
no code implementations • 23 Jun 2021 • Michael Hagmann, Stefan Riezler
This paper is an excerpt of an early version of Chapter 2 of the book "Validity, Reliability, and Significance.
no code implementations • ACL (splurobonlp) 2021 • Michael Staniek, Stefan Riezler
In semantic parsing of geographical queries against real-world databases such as OpenStreetMap (OSM), unique correct answers do not necessarily exist.
1 code implementation • 3 Apr 2021 • Tsz Kin Lam, Mayumi Ohta, Shigehiko Schamoni, Stefan Riezler
Our method, called Aligned Data Augmentation (ADA) for ASR, replaces transcribed tokens and the speech representations in an aligned manner to generate previously unseen training pairs.
Automatic Speech Recognition (ASR) +3
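The core idea of aligned replacement can be illustrated with a toy data structure: if each transcript token is aligned to a span of speech frames, a token can be swapped between utterances together with its frames, yielding a previously unseen (speech, transcript) pair. The alignment format and utterances below are hypothetical simplifications, not the paper's representation.

```python
# Toy sketch of aligned data augmentation: swap a token *together with* its
# aligned speech frames between two utterances, keeping pairs consistent.
def swap_aligned(utt_a, utt_b, idx):
    """Each utterance is a list of (token, frame_indices) pairs; swap position idx."""
    new_a, new_b = list(utt_a), list(utt_b)
    new_a[idx], new_b[idx] = utt_b[idx], utt_a[idx]
    return new_a, new_b

utt_a = [("the", [0, 1]), ("cat", [2, 3, 4])]
utt_b = [("a", [0]), ("dog", [1, 2])]
aug_a, aug_b = swap_aligned(utt_a, utt_b, 1)
```

Because token and frames move as a unit, the augmented transcript still matches its speech, which is what distinguishes aligned augmentation from naive token replacement.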
no code implementations • ACL 2021 • Raphael Schumann, Stefan Riezler
Car-focused navigation services are based on turns and distances of named streets, whereas navigation instructions naturally used by humans are centered around physical objects called landmarks.
Natural Language Landmark Navigation Instructions Generation
no code implementations • ACL (spnlp) 2021 • Julia Kreutzer, Stefan Riezler, Carolin Lawrence
Large volumes of interaction logs can be collected from NLP systems that are deployed in the real world.
no code implementations • COLING 2020 • Toshitaka Kuwa, Shigehiko Schamoni, Stefan Riezler
Neural approaches to learning term embeddings have led to improved computation of similarity and ranking in information retrieval (IR).
no code implementations • 21 Oct 2020 • Tsz Kin Lam, Shigehiko Schamoni, Stefan Riezler
Direct speech translation describes a scenario where only speech inputs and corresponding translations are available.
Automatic Speech Recognition (ASR) +3
1 code implementation • 2 Jun 2020 • Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler
Interest in stochastic zeroth-order (SZO) methods has recently been revived in black-box optimization scenarios such as adversarial black-box attacks on deep neural networks.
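A minimal SZO building block is the two-point gradient estimate: perturb the parameters along a random direction, evaluate the function at both perturbed points, and step against the resulting directional estimate. The step sizes and the quadratic objective below are illustrative choices, not the paper's setup.

```python
import random

# Two-point stochastic zeroth-order step: uses only function evaluations
# under random parameter perturbations, no analytic gradients.
def szo_step(f, x, mu=1e-3, lr=0.05, rng=random):
    u = [rng.gauss(0.0, 1.0) for _ in x]              # random search direction
    x_plus = [xi + mu * ui for xi, ui in zip(x, u)]
    x_minus = [xi - mu * ui for xi, ui in zip(x, u)]
    scale = (f(x_plus) - f(x_minus)) / (2 * mu)        # directional derivative estimate
    return [xi - lr * scale * ui for xi, ui in zip(x, u)]

# Minimize a simple quadratic f(x) = sum(x_i^2) from a fixed start.
f = lambda x: sum(xi * xi for xi in x)
random.seed(0)
x = [1.0, -2.0]
for _ in range(500):
    x = szo_step(f, x)
```

Since the perturbation direction is random, the variance of this estimate grows with the dimensionality of the parameter vector, which is the source of the dimension-dependent iteration complexity that motivates variance-reduction work on SZO methods.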
1 code implementation • EAMT 2020 • Julia Kreutzer, Nathaniel Berger, Stefan Riezler
Sequence-to-sequence learning involves a trade-off between signal strength and annotation cost of training data.
1 code implementation • LREC 2020 • Benjamin Beilharz, Xin Sun, Sariya Karimova, Stefan Riezler
An even larger dataset comprising 547 hours of German speech aligned to German text is available for speech recognition.
no code implementations • 20 Sep 2019 • Shigehiko Schamoni, Holger A. Lindner, Verena Schneider-Lindner, Manfred Thiel, Stefan Riezler
Sepsis is the leading cause of death in non-coronary intensive care units.
8 code implementations • IJCNLP 2019 • Julia Kreutzer, Jasmijn Bastings, Stefan Riezler
We present Joey NMT, a minimalist neural machine translation toolkit based on PyTorch that is specifically designed for novices.
7 code implementations • ACL 2019 • Julia Kreutzer, Stefan Riezler
Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning.
1 code implementation • TACL 2019 • Laura Jehl, Carolin Lawrence, Stefan Riezler
We show that bipolar ramp loss objectives outperform other non-bipolar ramp loss objectives and minimum risk training (MRT) on both weakly supervised tasks, as well as on a supervised machine translation task.
no code implementations • WS 2019 • Tsz Kin Lam, Shigehiko Schamoni, Stefan Riezler
We propose an interactive-predictive neural machine translation framework for easier model personalization using reinforcement and imitation learning.
1 code implementation • 29 Nov 2018 • Carolin Lawrence, Stefan Riezler
In semantic parsing for question-answering, it is often too expensive to collect gold parses or even gold answers as supervision signals.
no code implementations • 12 Jun 2018 • Artem Sokolov, Julian Hitschler, Mayumi Ohta, Stefan Riezler
Stochastic zeroth-order (SZO), or gradient-free, optimization allows one to optimize arbitrary functions by relying only on function evaluations under parameter perturbations; however, the iteration complexity of SZO methods suffers from a factor proportional to the dimensionality of the perturbed function.
1 code implementation • ACL 2018 • Julia Kreutzer, Joshua Uyheng, Stefan Riezler
We present a study on reinforcement learning (RL) from human bandit feedback for sequence-to-sequence learning, exemplified by the task of bandit neural machine translation (NMT).
1 code implementation • 3 May 2018 • Tsz Kin Lam, Julia Kreutzer, Stefan Riezler
We present an approach to interactive-predictive neural machine translation that attempts to reduce human effort from three directions: firstly, instead of requiring humans to select, correct, or delete segments, we employ the idea of learning from human reinforcements in the form of judgments on the quality of partial translations.
1 code implementation • ACL 2018 • Carolin Lawrence, Stefan Riezler
Counterfactual learning from human bandit feedback describes a scenario where user feedback on the quality of outputs of a historic system is logged and used to improve a target system.
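A standard building block in this setting is the inverse propensity score (IPS) estimator, which reweights logged rewards by the ratio of target-policy to logging-policy probabilities. The sketch below is a generic IPS estimate with made-up log entries, not necessarily the exact objective used in the paper.

```python
# Minimal inverse propensity scoring (IPS) sketch: estimate the expected
# reward of a target system from logs collected under a historic system.
# Each log entry is (action, logging_propensity, logged_reward).
def ips_estimate(logs, target_prob):
    """target_prob(action) -> probability of that action under the target system."""
    return sum(r * target_prob(a) / p for a, p, r in logs) / len(logs)

# Hypothetical logged interactions: two translations "t1"/"t2" with logged
# propensities and user-feedback rewards.
logs = [("t1", 0.5, 1.0), ("t2", 0.25, 0.0), ("t1", 0.5, 0.5)]
target = {"t1": 0.8, "t2": 0.2}.get
estimate = ips_estimate(logs, target)
```

Reweighting corrects for the mismatch between the historic system that collected the feedback and the target system being evaluated, which is the core difficulty of learning counterfactually from logged bandit feedback.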
no code implementations • NAACL 2018 • Julia Kreutzer, Shahram Khadivi, Evgeny Matusov, Stefan Riezler
We present the first real-world application of methods for improving neural machine translation (NMT) with human reinforcement, based on explicit and implicit user feedback collected on the eBay e-commerce platform.
no code implementations • 13 Dec 2017 • Sariya Karimova, Patrick Simianer, Stefan Riezler
The advantages of neural machine translation (NMT) have been extensively validated for offline translation of several language pairs for different domains of spoken and written language.
no code implementations • 23 Nov 2017 • Carolin Lawrence, Pratik Gajane, Stefan Riezler
Counterfactual learning is a natural scenario to improve web-based machine translation services by offline learning from feedback logged during user interactions.
no code implementations • EMNLP 2017 • Carolin Lawrence, Artem Sokolov, Stefan Riezler
The goal of counterfactual learning for statistical machine translation (SMT) is to optimize a target SMT system from logged data that consist of user feedback on translations that were predicted by another, historic SMT system.
no code implementations • 28 Jul 2017 • Carolin Lawrence, Artem Sokolov, Stefan Riezler
The goal of counterfactual learning for statistical machine translation (SMT) is to optimize a target SMT system from logged data that consist of user feedback on translations that were predicted by another, historic SMT system.
no code implementations • WS 2017 • Artem Sokolov, Julia Kreutzer, Kellen Sunderland, Pavel Danchenko, Witold Szymaniak, Hagen Fürstenau, Stefan Riezler
We introduce and describe the results of a novel shared task on bandit learning for machine translation.
1 code implementation • ACL 2017 • Julia Kreutzer, Artem Sokolov, Stefan Riezler
Bandit structured prediction describes a stochastic optimization framework where learning is performed from partial feedback.
no code implementations • COLING 2016 • Patrick Simianer, Sariya Karimova, Stefan Riezler
Our translation systems may learn from post-edits using several weight, language-model, and novel translation-model adaptation techniques, in part by exploiting the output of the graphical interface.
no code implementations • COLING 2016 • Carolin Lawrence, Stefan Riezler
We present a Natural Language Interface (nlmaps.cl.uni-heidelberg.de) to query OpenStreetMap.
no code implementations • COLING 2016 • Laura Jehl, Stefan Riezler
We present an approach for learning to translate by exploiting cross-lingual link structure in multilingual document collections.
1 code implementation • NeurIPS 2016 • Artem Sokolov, Julia Kreutzer, Christopher Lo, Stefan Riezler
Stochastic structured prediction under bandit feedback follows a learning protocol where, on each of a sequence of iterations, the learner receives an input, predicts an output structure, and receives partial feedback in the form of a task-loss evaluation of the predicted structure.
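The protocol itself can be written as a short loop: the learner never observes the correct structure, only a loss value for its own prediction. The "learner" below is a trivial non-learning placeholder that predicts the input unchanged; the environment's hidden loss function and the inputs are made up for illustration.

```python
# Sketch of the bandit structured prediction protocol: per iteration, the
# learner receives an input, predicts a structure, and observes only a
# task-loss evaluation of that prediction (never the gold structure).
def task_loss(prediction, x):
    # Hidden evaluation: here, whether the prediction equals the sorted input.
    return 0.0 if prediction == sorted(x) else 1.0

losses = []
for x in ([3, 1, 2], [2, 1], [1, 2, 3]):
    prediction = list(x)                      # placeholder "learner" output
    losses.append(task_loss(prediction, x))   # partial (bandit) feedback
```

A real bandit learner would use each observed loss to update its prediction policy, e.g. via a stochastic gradient of the expected loss; the loop above only fixes the interaction protocol.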
no code implementations • 18 Jan 2016 • Artem Sokolov, Stefan Riezler, Tanguy Urvoy
We present an application to discriminative reranking in Statistical Machine Translation (SMT) where the learning algorithm only has access to a 1-BLEU loss evaluation of a predicted translation, instead of obtaining a gold-standard reference translation.
no code implementations • ACL 2016 • Julian Hitschler, Shigehiko Schamoni, Stefan Riezler
We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space.