Search Results for author: Ruizhi Li

Found 9 papers, 1 paper with code

Two-Stage Augmentation and Adaptive CTC Fusion for Improved Robustness of Multi-Stream End-to-End ASR

no code implementations5 Feb 2021 Ruizhi Li, Gregory Sell, Hynek Hermansky

Performance degradation of an Automatic Speech Recognition (ASR) system is commonly observed when the test acoustic condition differs from that seen in training.

Automatic Speech Recognition (ASR) +2

A practical two-stage training strategy for multi-stream end-to-end speech recognition

no code implementations23 Oct 2019 Ruizhi Li, Gregory Sell, Xiaofei Wang, Shinji Watanabe, Hynek Hermansky

The multi-stream paradigm of audio processing, in which several sources are simultaneously considered, has been an active research area for information fusion.

Automatic Speech Recognition (ASR) +1

Multi-Stream End-to-End Speech Recognition

no code implementations17 Jun 2019 Ruizhi Li, Xiaofei Wang, Sri Harish Mallidi, Shinji Watanabe, Takaaki Hori, Hynek Hermansky

Two representative frameworks have been proposed and discussed: the Multi-Encoder Multi-Resolution (MEM-Res) framework and the Multi-Encoder Multi-Array (MEM-Array) framework.

Automatic Speech Recognition (ASR) +1
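The core idea shared by the multi-encoder frameworks above is stream-level fusion: each encoder produces its own context vector, and a learned stream attention decides how much each contributes. The sketch below illustrates only that fusion step with fixed scores; the function name, shapes, and scores are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_streams(contexts, scores):
    """Combine per-stream context vectors with softmax stream weights.

    contexts: (num_streams, dim) array of per-stream context vectors
    scores:   (num_streams,) unnormalized stream relevance scores
              (in the real model these come from a learned attention)
    """
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over streams
    return weights @ contexts         # weighted sum -> (dim,) fused context

# Two streams with one-hot contexts; stream 0 gets the higher score,
# so the fused vector leans toward stream 0's context.
ctx = np.array([[1.0, 0.0], [0.0, 1.0]])
fused = fuse_streams(ctx, np.array([2.0, 0.0]))
```

In the full architectures, this fused context feeds the attention-based decoder, letting the model down-weight an unreliable stream (e.g., a noisy microphone array) frame by frame.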

Performance Monitoring for End-to-End Speech Recognition

no code implementations9 Apr 2019 Ruizhi Li, Gregory Sell, Hynek Hermansky

Measuring performance of an automatic speech recognition (ASR) system without ground-truth could be beneficial in many scenarios, especially with data from unseen domains, where performance can be highly inconsistent.

Automatic Speech Recognition (ASR) +1
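One common family of ground-truth-free performance signals of the kind this entry describes is uncertainty in the model's own outputs, e.g., the mean entropy of per-frame posterior distributions: high entropy suggests uncertain, likely degraded recognition. The sketch below is a minimal illustration of that idea only; the paper's actual monitoring measures may differ, and all names here are assumptions.

```python
import numpy as np

def mean_posterior_entropy(posteriors, eps=1e-12):
    """Mean per-frame entropy of posterior distributions.

    posteriors: (frames, classes) array, each row a probability distribution.
    Higher values indicate a less confident (often worse-performing) model.
    """
    p = np.clip(posteriors, eps, 1.0)        # avoid log(0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

confident = np.array([[0.98, 0.01, 0.01],
                      [0.01, 0.98, 0.01]])   # peaked posteriors
uncertain = np.full((2, 3), 1.0 / 3.0)       # near-uniform posteriors
# Confident posteriors yield lower mean entropy than uniform ones.
```

Tracked over an unlabeled test stream, such a statistic can flag domain mismatch without any reference transcripts.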

Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling

no code implementations4 Oct 2018 Jaejin Cho, Murali Karthick Baskar, Ruizhi Li, Matthew Wiesner, Sri Harish Mallidi, Nelson Yalta, Martin Karafiat, Shinji Watanabe, Takaaki Hori

In this work, we use data from 10 BABEL languages to build a multilingual seq2seq model as a prior model, and then port it to 4 other BABEL languages using a transfer learning approach.

Language Modelling, Sequence-To-Sequence Speech Recognition +2

Large-scale Artificial Neural Network: MapReduce-based Deep Learning

1 code implementation9 Oct 2015 Kairan Sun, Xu Wei, Gengtao Jia, Risheng Wang, Ruizhi Li

Faced with the continuously increasing scale of data, the original back-propagation neural network learning algorithm presents two non-trivial challenges: the huge amount of data makes it difficult to maintain both efficiency and accuracy, and redundant data aggravates the system workload.

Cloud Computing, Handwritten Digit Recognition
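The MapReduce framing of back-propagation that this entry describes can be sketched in two steps: a map phase computes gradients on independent data shards, and a reduce phase averages them into a single synchronized update. The toy below uses a linear model and mean-squared error to keep it short; all names, the learning rate, and the sharding scheme are illustrative assumptions, not the paper's actual system.

```python
import numpy as np

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def mapreduce_step(w, shards, lr=0.1):
    grads = [gradient(w, X, y) for X, y in shards]  # "map": per-shard gradients
    avg = np.mean(grads, axis=0)                    # "reduce": average gradients
    return w - lr * avg                             # one synchronized update

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
shards = [(X[i::4], y[i::4]) for i in range(4)]     # split data across 4 "nodes"

w = np.zeros(3)
for _ in range(200):
    w = mapreduce_step(w, shards)                   # w converges toward true_w
```

Because the reduce step averages equal-sized shard gradients, each update matches full-batch gradient descent while the expensive map phase parallelizes across nodes, which is the efficiency argument the abstract gestures at.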
