Search Results for author: Murali Karthick Baskar

Found 8 papers, 0 papers with code

BUT Opensat 2019 Speech Recognition System

no code implementations • 30 Jan 2020 • Martin Karafiát, Murali Karthick Baskar, Igor Szöke, Hari Krishna Vydana, Karel Veselý, Jan "Honza" Černocký

The paper describes the BUT Automatic Speech Recognition (ASR) systems submitted to the OpenSAT evaluations in two domain categories: low-resource languages and public safety communications.

Automatic Speech Recognition Data Augmentation +1

Analysis of Multilingual Sequence-to-Sequence speech recognition systems

no code implementations • 7 Nov 2018 • Martin Karafiát, Murali Karthick Baskar, Shinji Watanabe, Takaaki Hori, Matthew Wiesner, Jan "Honza" Černocký

This paper investigates how various multilingual approaches developed for conventional hidden Markov model (HMM) systems apply to sequence-to-sequence (seq2seq) automatic speech recognition (ASR).

Automatic Speech Recognition Sequence-To-Sequence Speech Recognition +1

Promising Accurate Prefix Boosting for sequence-to-sequence ASR

no code implementations • 7 Nov 2018 • Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Martin Karafiát, Takaaki Hori, Jan "Honza" Černocký

In this paper, we present promising accurate prefix boosting (PAPB), a discriminative training technique for attention-based sequence-to-sequence (seq2seq) ASR.
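
The snippet above does not reproduce the PAPB objective, but a minimal PyTorch sketch of the underlying idea, renormalizing beam scores and maximizing the expected accuracy of hypothesis prefixes, might look as follows; the function name and the precomputed prefix_accuracies tensor are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a prefix-level expected-accuracy loss in the
# spirit of PAPB; not the paper's exact objective.
import torch

def prefix_boosting_loss(beam_scores: torch.Tensor,
                         prefix_accuracies: torch.Tensor) -> torch.Tensor:
    """beam_scores: (beam,) model log-scores of the beam hypotheses.
    prefix_accuracies: (beam, T) accuracy of each length-t prefix of
    each hypothesis against the reference, assumed precomputed.
    """
    probs = torch.softmax(beam_scores, dim=0)  # distribution over the beam
    # Expected accuracy at every prefix length, averaged over lengths.
    expected_acc = (probs.unsqueeze(1) * prefix_accuracies).sum(dim=0)  # (T,)
    return -expected_acc.mean()  # minimizing this boosts accurate prefixes
```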

Transfer learning of language-independent end-to-end ASR with language model fusion

no code implementations • 6 Nov 2018 • Hirofumi Inaguma, Jaejin Cho, Murali Karthick Baskar, Tatsuya Kawahara, Shinji Watanabe

This work explores better methods of adapting to low-resource languages with an external language model (LM) under the framework of transfer learning.

End-To-End Speech Recognition Language Modelling +1
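
One common way to combine a seq2seq decoder with an external LM during beam search is shallow fusion, a scheme in the general family this paper studies; the sketch below is a generic illustration, and the weight lam is an assumed value rather than one from the paper.

```python
# Minimal shallow-fusion sketch: interpolate ASR and external-LM
# log-probabilities at each beam-search step.
import torch

def fused_step_scores(asr_log_probs: torch.Tensor,
                      lm_log_probs: torch.Tensor,
                      lam: float = 0.3) -> torch.Tensor:
    """Both inputs: (beam, vocab) per-token log-probabilities.
    lam is the LM interpolation weight (illustrative value)."""
    return asr_log_probs + lam * lm_log_probs

# In beam search, the fused scores replace the raw ASR scores when
# ranking candidate next tokens, e.g.:
#   scores = fused_step_scores(decoder_logp, lm_logp)
#   next_tokens = scores.topk(beam_size, dim=-1)
```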

Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling

no code implementations • 4 Oct 2018 • Jaejin Cho, Murali Karthick Baskar, Ruizhi Li, Matthew Wiesner, Sri Harish Mallidi, Nelson Yalta, Martin Karafiát, Shinji Watanabe, Takaaki Hori

In this work, we attempt to use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach.

Language Modelling Sequence-To-Sequence Speech Recognition +1
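
A hedged sketch of the porting step described above, assuming a PyTorch seq2seq model whose final projection is exposed as model.output; the attribute name, checkpoint handling, and layer sizes are illustrative assumptions.

```python
# Hypothetical transfer-learning sketch: load the multilingual seq2seq
# checkpoint as a prior model, replace the softmax layer for the target
# language's vocabulary, then fine-tune on target-language data.
import torch
import torch.nn as nn

def port_to_target(model: nn.Module, ckpt_path: str,
                   target_vocab: int, hidden: int = 320) -> nn.Module:
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state, strict=False)      # reuse shared parameters
    model.output = nn.Linear(hidden, target_vocab)  # fresh target-language layer
    return model
```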

Residual Memory Networks: Feed-forward approach to learn long temporal dependencies

no code implementations • 6 Aug 2018 • Murali Karthick Baskar, Martin Karafiát, Lukáš Burget, Karel Veselý, František Grézl, Jan "Honza" Černocký

In this paper we propose a residual memory neural network (RMN) architecture to model short-time dependencies using deep feed-forward layers with residual and time-delayed connections.

Large Vocabulary Continuous Speech Recognition
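
The abstract snippet only names the ingredients, so here is a minimal PyTorch sketch of one RMN-style block: a feed-forward layer with a residual connection and a time-delayed connection realized as a dilated 1-D convolution. The layer width, delay, and class name are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch of a feed-forward block with residual and time-delayed
# connections, in the spirit of RMN (sizes are illustrative).
import torch
import torch.nn as nn

class RMNBlock(nn.Module):
    def __init__(self, dim: int = 512, delay: int = 2):
        super().__init__()
        # kernel_size=2 with dilation=delay connects frame t to frame t-delay
        self.tdnn = nn.Conv1d(dim, dim, kernel_size=2,
                              dilation=delay, padding=delay)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, dim, time)
        h = self.act(self.tdnn(x))[..., :x.size(-1)]  # trim to causal length
        return x + h                                  # residual connection
```

Stacking such blocks lets a purely feed-forward network accumulate longer temporal context layer by layer, which is the sense in which the title's "long temporal dependencies" arise from short-time building blocks.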
