Search Results for author: Jan Niehues

Found 85 papers, 9 papers with code

Effective combination of pretrained models - KIT@IWSLT2022

no code implementations IWSLT (ACL) 2022 Ngoc-Quan Pham, Tuan Nam Nguyen, Thai-Binh Nguyen, Danni Liu, Carlos Mullov, Jan Niehues, Alexander Waibel

Pretrained models in acoustic and textual modalities can potentially improve speech translation for both Cascade and End-to-end approaches.

Translation

Findings of the IWSLT 2022 Evaluation Campaign

no code implementations IWSLT (ACL) 2022 Antonios Anastasopoulos, Loïc Barrault, Luisa Bentivogli, Marcely Zanon Boito, Ondřej Bojar, Roldano Cattoni, Anna Currey, Georgiana Dinu, Kevin Duh, Maha Elbayad, Clara Emmanuel, Yannick Estève, Marcello Federico, Christian Federmann, Souhir Gahbiche, Hongyu Gong, Roman Grundkiewicz, Barry Haddow, Benjamin Hsu, Dávid Javorský, Vĕra Kloudová, Surafel Lakew, Xutai Ma, Prashant Mathur, Paul McNamee, Kenton Murray, Maria Nǎdejde, Satoshi Nakamura, Matteo Negri, Jan Niehues, Xing Niu, John Ortega, Juan Pino, Elizabeth Salesky, Jiatong Shi, Matthias Sperber, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Yogesh Virkar, Alexander Waibel, Changhan Wang, Shinji Watanabe

The evaluation campaign of the 19th International Conference on Spoken Language Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline speech translation, (iii) Speech to speech translation, (iv) Low-resource speech translation, (v) Multilingual speech translation, (vi) Dialect speech translation, (vii) Formality control for speech translation, (viii) Isometric speech translation.

Speech-to-Speech Translation, Translation

The IWSLT 2019 Evaluation Campaign

no code implementations EMNLP (IWSLT) 2019 Jan Niehues, Roldano Cattoni, Sebastian Stüker, Matteo Negri, Marco Turchi, Thanh-Le Ha, Elizabeth Salesky, Ramon Sanabria, Loïc Barrault, Lucia Specia, Marcello Federico

The IWSLT 2019 evaluation campaign featured three tasks: speech translation of (i) TED talks and (ii) How2 instructional videos from English into German and Portuguese, and (iii) text translation of TED talks from English into Czech.

Translation

FINDINGS OF THE IWSLT 2021 EVALUATION CAMPAIGN

no code implementations ACL (IWSLT) 2021 Antonios Anastasopoulos, Ondřej Bojar, Jacob Bremerman, Roldano Cattoni, Maha Elbayad, Marcello Federico, Xutai Ma, Satoshi Nakamura, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Alexander Waibel, Changhan Wang, Matthew Wiesner

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2021) featured four shared tasks: (i) Simultaneous speech translation, (ii) Offline speech translation, (iii) Multilingual speech translation, (iv) Low-resource speech translation.

Translation

Maastricht University’s Large-Scale Multilingual Machine Translation System for WMT 2021

no code implementations WMT (EMNLP) 2021 Danni Liu, Jan Niehues

We present our development of the multilingual machine translation system for the large-scale multilingual machine translation task at WMT 2021.

Machine Translation, Translation

Toward Robust Neural Machine Translation for Noisy Input Sequences

no code implementations IWSLT 2017 Matthias Sperber, Jan Niehues, Alex Waibel

We note that unlike our baseline model, models trained on noisy data are able to generate outputs of proper length even for noisy inputs, while gradually reducing output length for higher amounts of noise, as might also be expected from a human translator.

Machine Translation, Translation

Domain-independent Punctuation and Segmentation Insertion

no code implementations IWSLT 2017 Eunah Cho, Jan Niehues, Alex Waibel

Experiments show that generalizing rare and unknown words greatly improves the punctuation insertion performance, reaching up to 8.8 points of improvement in F-score when applied to the out-of-domain test scenario.

Machine Translation, POS, +1
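
One simple way to realize the rare-word generalization this abstract mentions is to map low-frequency tokens to a class label before modeling punctuation (a sketch under our own assumptions; the threshold and label are ours, not the paper's exact scheme):

```python
from collections import Counter

def generalize_rare(tokens, min_count=2, label="<RARE>"):
    """Replace tokens seen fewer than min_count times with a class label,
    so a punctuation model can generalize across rare/unknown words."""
    counts = Counter(tokens)
    return [t if counts[t] >= min_count else label for t in tokens]

toks = "the talk the demo covers zürich".split()
print(generalize_rare(toks))
# -> ['the', '<RARE>', 'the', '<RARE>', '<RARE>', '<RARE>']
```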

KIT’s Multilingual Neural Machine Translation systems for IWSLT 2017

no code implementations IWSLT 2017 Ngoc-Quan Pham, Matthias Sperber, Elizabeth Salesky, Thanh-Le Ha, Jan Niehues, Alexander Waibel

For the SLT track, in addition to a monolingual neural translation system used to generate correct punctuation and true casing of the data prior to training our multilingual system, we introduced a noise model in order to make our system more robust.

Machine Translation, Translation

The IWSLT 2018 Evaluation Campaign

no code implementations IWSLT (EMNLP) 2018 Jan Niehues, Roldano Cattoni, Sebastian Stüker, Mauro Cettolo, Marco Turchi, Marcello Federico

The International Workshop on Spoken Language Translation (IWSLT) 2018 Evaluation Campaign featured two tasks: low-resource machine translation and speech translation.

Machine Translation, Translation

Adaptive multilingual speech recognition with pretrained models

no code implementations 24 May 2022 Ngoc-Quan Pham, Alex Waibel, Jan Niehues

Multilingual speech recognition with supervised learning has achieved great results as reflected in recent research.

Speech Recognition

LibriS2S: A German-English Speech-to-Speech Translation Corpus

1 code implementation 22 Apr 2022 Pedro Jeuris, Jan Niehues

In contrast, activity in the area of speech-to-speech translation is still limited, although it is essential to overcome the language barrier.

Speech-to-Speech Translation, Speech-to-Text Translation, +1

Multilingual Simultaneous Speech Translation

no code implementations 28 Mar 2022 Shashank Subramanya, Jan Niehues

Based on a technique to adapt end-to-end monolingual models, we investigate multilingual models and different architectures (end-to-end and cascade) on their ability to perform online speech translation.

Translation

Tackling data scarcity in speech translation using zero-shot multilingual machine translation techniques

1 code implementation 26 Jan 2022 Tu Anh Dinh, Danni Liu, Jan Niehues

We investigate whether these ideas can be applied to speech translation, by building ST models trained on speech transcription and text translation data.

Data Augmentation, Machine Translation, +1

Cost-Effective Training in Low-Resource Neural Machine Translation

no code implementations 14 Jan 2022 Sai Koneru, Danni Liu, Jan Niehues

Although AL is shown to be helpful with large budgets, it is not enough to build high-quality translation systems in these low-resource conditions.

Active Learning, Domain Adaptation, +2

Tutorial Proposal: End-to-End Speech Translation

no code implementations EACL 2021 Jan Niehues, Elizabeth Salesky, Marco Turchi, Matteo Negri

Speech translation is the translation of speech in one language typically to text in another, traditionally accomplished through a combination of automatic speech recognition and machine translation.

Automatic Speech Recognition, Machine Translation, +1
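
The traditional cascade described in this abstract can be sketched as a two-stage pipeline; `toy_asr` and `toy_mt` below are hypothetical stand-ins for real ASR and MT systems:

```python
from typing import Callable

def cascade_st(audio: bytes,
               asr: Callable[[bytes], str],
               mt: Callable[[str], str]) -> str:
    """Cascaded speech translation: transcribe first, then translate.
    End-to-end models instead map audio to target text directly."""
    transcript = asr(audio)   # speech -> source-language text
    return mt(transcript)     # source text -> target-language text

# Toy stand-ins (assumption: word-level lookup, not real systems).
toy_asr = lambda audio: "hello world"
toy_mt = lambda text: " ".join({"hello": "hallo", "world": "welt"}[w]
                               for w in text.split())
print(cascade_st(b"...", toy_asr, toy_mt))  # -> "hallo welt"
```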

Continuous Learning in Neural Machine Translation using Bilingual Dictionaries

no code implementations EACL 2021 Jan Niehues

For humans, as well as for machine translation, bilingual dictionaries are a promising knowledge source to continuously integrate new knowledge.

Machine Translation, One-Shot Learning, +1

Improving Zero-Shot Translation by Disentangling Positional Information

1 code implementation ACL 2021 Danni Liu, Jan Niehues, James Cross, Francisco Guzmán, Xian Li

The difficulty of generalizing to new translation directions suggests the model representations are highly specific to those language pairs seen in training.

Machine Translation, Translation

FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN

no code implementations WS 2020 Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured six challenge tracks: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.

Translation

Adapting End-to-End Speech Recognition for Readable Subtitles

1 code implementation WS 2020 Danni Liu, Jan Niehues, Gerasimos Spanakis

The experiments show that with limited data far less than needed for training a model from scratch, we can adapt a Transformer-based ASR model to incorporate both transcription and compression capabilities.

Automatic Speech Recognition

Relative Positional Encoding for Speech Recognition and Direct Translation

no code implementations 20 May 2020 Ngoc-Quan Pham, Thanh-Le Ha, Tuan-Nam Nguyen, Thai-Son Nguyen, Elizabeth Salesky, Sebastian Stueker, Jan Niehues, Alexander Waibel

We also show that this model is able to better utilize synthetic data than the Transformer, and adapts better to variable sentence segmentation quality for speech translation.

Sentence segmentation, Speech Recognition, +1

Low Latency ASR for Simultaneous Speech Translation

no code implementations 22 Mar 2020 Thai Son Nguyen, Jan Niehues, Eunah Cho, Thanh-Le Ha, Kevin Kilgour, Markus Müller, Matthias Sperber, Sebastian Stueker, Alex Waibel

User studies have shown that reducing the latency of our simultaneous lecture translation system should be the most important goal.

Automatic Speech Recognition, Translation

Modeling Confidence in Sequence-to-Sequence Models

no code implementations WS 2019 Jan Niehues, Ngoc-Quan Pham

We show improvements on segment-level confidence estimation as well as on confidence estimation for source tokens.

Automatic Speech Recognition, Machine Translation, +2

Incremental processing of noisy user utterances in the spoken language understanding task

no code implementations WS 2019 Stefan Constantin, Jan Niehues, Alex Waibel

State-of-the-art neural network architectures make it possible to create spoken language understanding systems with high quality and fast processing time.

Natural Language Understanding, Spoken Language Understanding

Very Deep Self-Attention Networks for End-to-End Speech Recognition

no code implementations 30 Apr 2019 Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus Müller, Sebastian Stüker, Alexander Waibel

Recently, end-to-end sequence-to-sequence models for speech recognition have gained significant interest in the research community.

Speech Recognition

Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation

no code implementations TACL 2019 Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel

Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts.

Machine Translation, Speech Recognition, +1

Multi-task learning to improve natural language understanding

no code implementations 17 Dec 2018 Stefan Constantin, Jan Niehues, Alex Waibel

When building a neural network-based Natural Language Understanding component, one main challenge is to collect enough training data.

Multi-Task Learning, Natural Language Understanding

Optimizing Segmentation Granularity for Neural Machine Translation

no code implementations 19 Oct 2018 Elizabeth Salesky, Andrew Runge, Alex Coda, Jan Niehues, Graham Neubig

However, the granularity of these subword units is a hyperparameter to be tuned for each language and task, using methods such as grid search.

Machine Translation, Translation
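
Tuning subword granularity by grid search, as the snippet mentions, can be sketched like this; `toy_score` below is a hypothetical stand-in for training and evaluating a system at a given BPE merge count:

```python
def grid_search_merges(merge_grid, train_and_score):
    """Try each candidate subword merge count and keep the best-scoring one."""
    results = {n: train_and_score(n) for n in merge_grid}
    best = max(results, key=results.get)
    return best, results

# Toy score surface peaking at 8k merges (purely illustrative numbers).
toy_score = lambda n: -abs(n - 8000) / 1000.0
best, results = grid_search_merges([1000, 4000, 8000, 16000, 32000], toy_score)
print(best)  # -> 8000
```

In practice `train_and_score` would be expensive, which is exactly why treating granularity as a tuned hyperparameter is costly and worth optimizing.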

Low-Latency Neural Speech Translation

no code implementations 1 Aug 2018 Jan Niehues, Ngoc-Quan Pham, Thanh-Le Ha, Matthias Sperber, Alex Waibel

After adaptation, we are able to reduce the number of corrections displayed during incremental output construction by 45%, without a decrease in translation quality.

Machine Translation, Multi-Task Learning, +1

A Hierarchical Approach to Neural Context-Aware Modeling

no code implementations 27 Jul 2018 Patrick Huber, Jan Niehues, Alex Waibel

Our approach overcomes recent limitations with extended narratives through a multi-layered computational approach to generate an abstract context representation.

Language Modelling

Robust and Scalable Differentiable Neural Computer for Question Answering

1 code implementation WS 2018 Jörg Franke, Jan Niehues, Alex Waibel

Deep learning models are often not easily adaptable to new tasks and require task-specific adjustments.

Question Answering

Self-Attentional Acoustic Models

1 code implementation 26 Mar 2018 Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian Stüker, Alex Waibel

Self-attention is a method of encoding sequences of vectors by relating these vectors to each other based on pairwise similarities.
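
As a toy illustration of the mechanism this snippet describes, the following sketch encodes a sequence via softmax-normalized pairwise dot-product similarities (our own minimal example, not the paper's acoustic model):

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Encode a sequence of vectors by mixing each vector with the
    others, weighted by pairwise (dot-product) similarity."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ x                              # similarity-weighted mixture

seq = np.random.randn(5, 8)   # 5 time steps, 8-dim features
out = self_attention(seq)
print(out.shape)              # (5, 8): one mixed vector per input step
```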

Automated Evaluation of Out-of-Context Errors

1 code implementation LREC 2018 Patrick Huber, Jan Niehues, Alex Waibel

We present a new approach to evaluate computational models for the task of text understanding by means of out-of-context error detection.

Language Modelling, Translation

An End-to-End Goal-Oriented Dialog System with a Generative Natural Language Response Generation

no code implementations 6 Mar 2018 Stefan Constantin, Jan Niehues, Alex Waibel

Furthermore, by using a feedforward neural network, we are able to generate the output word by word and are no longer restricted to a fixed number of possible response candidates.

Goal-Oriented Dialog, Response Generation

Effective Strategies in Zero-Shot Neural Machine Translation

1 code implementation IWSLT 2017 Thanh-Le Ha, Jan Niehues, Alexander Waibel

In this paper, we proposed two strategies which can be applied to a multilingual neural machine translation system in order to better tackle zero-shot scenarios despite not having any parallel corpus.

Machine Translation, Translation

Transcribing Against Time

no code implementations 15 Sep 2017 Matthias Sperber, Graham Neubig, Jan Niehues, Satoshi Nakamura, Alex Waibel

We investigate the problem of manually correcting errors from an automatic speech transcript in a cost-sensitive fashion.

Comparison of Decoding Strategies for CTC Acoustic Models

no code implementations 15 Aug 2017 Thomas Zenkel, Ramon Sanabria, Florian Metze, Jan Niehues, Matthias Sperber, Sebastian Stüker, Alex Waibel

The CTC loss function maps an input sequence of observable feature vectors to an output sequence of symbols.

Speech Recognition
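
The mapping this snippet describes can be illustrated with the standard CTC best-path collapse rule, which merges repeated frame labels and then removes the blank symbol (a toy sketch, not the paper's decoding strategies; the blank symbol choice is ours):

```python
BLANK = "_"  # CTC blank label (symbol choice is an assumption for illustration)

def ctc_collapse(frame_labels):
    """Collapse a per-frame label sequence to an output symbol sequence:
    merge consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)

print(ctc_collapse(list("hh_e_ll_ll_oo")))  # -> "hello"
```

The blank symbol is what lets CTC emit the same symbol twice in a row, as in the doubled "l" above.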

Exploiting Linguistic Resources for Neural Machine Translation Using Multi-task Learning

no code implementations WS 2017 Jan Niehues, Eunah Cho

Linguistic resources such as part-of-speech (POS) tags have been extensively used in statistical machine translation (SMT) frameworks and have yielded better performances.

Machine Translation, Multi-Task Learning, +3

Analyzing Neural MT Search and Model Performance

no code implementations WS 2017 Jan Niehues, Eunah Cho, Thanh-Le Ha, Alex Waibel

By separating the search space and the modeling using n-best list reranking, we analyze the influence of both parts of an NMT system independently.

Translation
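
The separation of search and modeling via n-best list reranking can be sketched as follows; the hypotheses and scoring function here are hypothetical stand-ins for a decoder's n-best output and an NMT model score:

```python
def rerank(nbest, model_score):
    """Pick the hypothesis with the highest model score,
    independently of the order the search produced them in."""
    return max(nbest, key=model_score)

# Hypothetical n-best list and a toy 'model' that prefers shorter output.
nbest = ["das ist ein test", "das ist ein test .", "dies ist test"]
best = rerank(nbest, model_score=lambda h: -len(h.split()))
print(best)  # -> "dies ist test" (3 tokens)
```

Because the model only rescores a fixed candidate list, any quality difference against full decoding can be attributed to the search rather than the model.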

Neural Lattice-to-Sequence Models for Uncertain Inputs

no code implementations EMNLP 2017 Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel

In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoder-decoder model.

Translation

Lightly Supervised Quality Estimation

no code implementations COLING 2016 Matthias Sperber, Graham Neubig, Jan Niehues, Sebastian Stüker, Alex Waibel

Evaluating the quality of output from language processing systems such as machine translation or speech recognition is an essential step in ensuring that they are sufficient for practical use.

Automatic Speech Recognition, Machine Translation, +1

Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder

no code implementations IWSLT 2016 Thanh-Le Ha, Jan Niehues, Alexander Waibel

In this paper, we present our first attempts in building a multilingual Neural Machine Translation framework under a unified approach.

Machine Translation, Translation

Lexical Translation Model Using a Deep Neural Network Architecture

no code implementations 28 Apr 2015 Thanh-Le Ha, Jan Niehues, Alex Waibel

In this paper we combine the advantages of a model using global source sentence contexts, the Discriminative Word Lexicon, and neural networks.

Translation
