Search Results for author: Rama Doddipatla

Found 18 papers, 1 paper with code

Multiple-hypothesis RNN-T Loss for Unsupervised Fine-tuning and Self-training of Neural Transducer

no code implementations • 29 Jul 2022 • Cong-Thanh Do, Mohan Li, Rama Doddipatla

The multiple-hypothesis approach yields a relative reduction of 3.3% WER on the CHiME-4's single-channel real noisy evaluation set when compared with the single-hypothesis approach.

Automatic Speech Recognition, speech-recognition

Dialogue Strategy Adaptation to New Action Sets Using Multi-dimensional Modelling

no code implementations • 14 Apr 2022 • Simon Keizer, Norbert Braunschweiler, Svetlana Stoyanchev, Rama Doddipatla

A major bottleneck for building statistical spoken dialogue systems for new domains and applications is the need for large amounts of training data.

Dialogue Management, Management, +2

Transformer-based Streaming ASR with Cumulative Attention

no code implementations • 11 Mar 2022 • Mohan Li, Shucong Zhang, Catalin Zorila, Rama Doddipatla

In this paper, we propose an online attention mechanism, known as cumulative attention (CA), for streaming Transformer-based automatic speech recognition (ASR).

Automatic Speech Recognition, speech-recognition

A study on cross-corpus speech emotion recognition and data augmentation

no code implementations • 10 Jan 2022 • Norbert Braunschweiler, Rama Doddipatla, Simon Keizer, Svetlana Stoyanchev

Models trained on mixed corpora can be more stable in mismatched contexts, with performance reductions ranging from 1% to 8% compared with single-corpus models in matched conditions.

Data Augmentation, Speech Emotion Recognition

Monaural source separation: From anechoic to reverberant environments

no code implementations • 15 Nov 2021 • Tobias Cord-Landwehr, Christoph Boeddeker, Thilo von Neumann, Catalin Zorila, Rama Doddipatla, Reinhold Haeb-Umbach

Impressive progress in neural network-based single-channel speech source separation has been made in recent years.

Towards Handling Unconstrained User Preferences in Dialogue

no code implementations • 17 Sep 2021 • Suraj Pandey, Svetlana Stoyanchev, Rama Doddipatla

A user input to a schema-driven dialogue information navigation system, such as venue search, is typically constrained by the underlying database, which restricts the user to specifying a predefined set of preferences, or slots, corresponding to the database fields.

Information Retrieval

Teacher-Student MixIT for Unsupervised and Semi-supervised Speech Separation

no code implementations • 15 Jun 2021 • Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker

The proposed method first uses mixtures of unseparated sources and the mixture invariant training (MixIT) criterion to train a teacher model.
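The mixture invariant training (MixIT) criterion mentioned in the snippet trains a separator on a mixture of mixtures: the model's estimated sources are partitioned between the two reference mixtures, and the best-scoring partition defines the loss. A minimal sketch of that assignment search, as an illustration only and not the authors' implementation:

```python
import itertools

def mixit_loss(est_sources, mix1, mix2):
    """Mixture invariant training loss sketch: partition the estimated
    sources between the two reference mixtures and keep the best split."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    n = len(est_sources)
    best = float("inf")
    # Try every binary assignment of sources to the two mixtures.
    for assign in itertools.product([0, 1], repeat=n):
        # Sum the sources assigned to each mixture to reconstruct it.
        r1 = [sum(s[t] for s, a in zip(est_sources, assign) if a == 0)
              for t in range(len(mix1))]
        r2 = [sum(s[t] for s, a in zip(est_sources, assign) if a == 1)
              for t in range(len(mix2))]
        best = min(best, mse(r1, mix1) + mse(r2, mix2))
    return best
```

If the estimated sources exactly reconstruct the two mixtures under some assignment, the loss is zero; the exhaustive search over assignments is what makes the criterion invariant to which mixture a source came from.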

Speech Separation

Head-synchronous Decoding for Transformer-based Streaming ASR

no code implementations • 26 Apr 2021 • Mohan Li, Catalin Zorila, Rama Doddipatla

Online Transformer-based automatic speech recognition (ASR) systems have been extensively studied due to the increasing demand for streaming applications.

Automatic Speech Recognition, speech-recognition

Multiple-hypothesis CTC-based semi-supervised adaptation of end-to-end speech recognition

no code implementations • 29 Mar 2021 • Cong-Thanh Do, Rama Doddipatla, Thomas Hain

In this method, multiple automatic speech recognition (ASR) 1-best hypotheses are integrated in the computation of the connectionist temporal classification (CTC) loss function.
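The snippet describes integrating several ASR 1-best hypotheses into the CTC loss. One way to read that, sketched here as an assumption rather than the paper's exact formulation, is a weighted sum of per-hypothesis losses, where `loss_fn` is a placeholder for the CTC loss on one pseudo-label sequence:

```python
def multi_hypothesis_loss(loss_fn, outputs, hypotheses, weights=None):
    """Weighted combination of a loss over several hypotheses.

    loss_fn:     per-hypothesis loss (CTC loss in the paper's setting).
    outputs:     model outputs, shared across hypotheses.
    hypotheses:  pseudo-label sequences (e.g. ASR 1-best outputs).
    weights:     optional per-hypothesis weights; defaults to uniform.
    """
    if weights is None:
        # Default: average the per-hypothesis losses uniformly.
        weights = [1.0 / len(hypotheses)] * len(hypotheses)
    return sum(w * loss_fn(outputs, h) for w, h in zip(weights, hypotheses))
```

In practice `loss_fn` would be a framework CTC loss (e.g. PyTorch's `torch.nn.functional.ctc_loss`) and the hypotheses would come from a seed model's decoding passes.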

Automatic Speech Recognition, speech-recognition

Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers

no code implementations • 9 Feb 2021 • Shucong Zhang, Cong-Thanh Do, Rama Doddipatla, Erfan Loweimi, Peter Bell, Steve Renals

Although the lower layers of a deep neural network learn features which are transferable across datasets, these layers are not transferable within the same dataset.

Automatic Speech Recognition, speech-recognition

Time-Domain Speech Extraction with Spatial Information and Multi Speaker Conditioning Mechanism

no code implementations • 7 Feb 2021 • Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker

In this paper, we present a novel multi-channel speech extraction system to simultaneously extract multiple clean individual sources from a mixture in noisy and reverberant environments.

Speech Extraction, speech-recognition, +1

On End-to-end Multi-channel Time Domain Speech Separation in Reverberant Environments

no code implementations • 11 Nov 2020 • Jisi Zhang, Catalin Zorila, Rama Doddipatla, Jon Barker

To reduce the influence of reverberation on spatial feature extraction, a dereverberation pre-processing method has been applied to further improve the separation performance.

speech-recognition, Speech Recognition, +1

Action State Update Approach to Dialogue Management

no code implementations • 9 Nov 2020 • Svetlana Stoyanchev, Simon Keizer, Rama Doddipatla

Utterance interpretation is one of the main functions of a dialogue manager, which is the key component of a dialogue system.

Active Learning, Dialogue Management, +2

An Investigation into the Effectiveness of Enhancement in ASR Training and Test for CHiME-5 Dinner Party Transcription

1 code implementation • 26 Sep 2019 • Catalin Zorila, Christoph Boeddeker, Rama Doddipatla, Reinhold Haeb-Umbach

Despite the strong modeling power of neural network acoustic models, speech enhancement has been shown to deliver additional word error rate improvements if multi-channel data is available.

Speech Enhancement

Top-down training for neural networks

no code implementations • 25 Sep 2019 • Shucong Zhang, Cong-Thanh Do, Rama Doddipatla, Erfan Loweimi, Peter Bell, Steve Renals

Interpreting the top layers as a classifier and the lower layers as a feature extractor, one can hypothesize that unwanted network convergence may occur when the classifier has overfit with respect to the feature extractor.
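Under that classifier-plus-feature-extractor view, one plausible top-down training schedule, sketched here as an assumption about how the idea could be realised rather than the paper's procedure, starts with only the top layer trainable and unfreezes one lower layer per stage:

```python
def top_down_schedule(num_layers):
    """Yield, per training stage, the indices of the trainable layers:
    the top (classifier) layer first, then progressively including
    lower layers. Index 0 is the bottom layer."""
    for stage in range(num_layers):
        yield list(range(num_layers - 1 - stage, num_layers))
```

Each stage's index list would drive which layers' parameters are passed to the optimiser (the rest kept frozen), so the classifier is trained first and the feature extractor catches up afterwards.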

speech-recognition, Speech Recognition

The USFD Spoken Language Translation System for IWSLT 2014

no code implementations • 13 Sep 2015 • Raymond W. M. Ng, Mortaza Doulaty, Rama Doddipatla, Wilker Aziz, Kashif Shah, Oscar Saz, Madina Hasan, Ghada Alharbi, Lucia Specia, Thomas Hain

The USFD primary system incorporates state-of-the-art ASR and MT techniques and gives BLEU scores of 23.45 and 14.75 on the English-to-French and English-to-German speech-to-text translation tasks with the IWSLT 2014 data.

Automatic Speech Recognition, Machine Translation, +3
