Search Results for author: Adnan Haider

Found 6 papers, 0 papers with code

Focused Discriminative Training For Streaming CTC-Trained Automatic Speech Recognition Models

no code implementations · 23 Aug 2024 · Adnan Haider, Xingyu Na, Erik McDermott, Tim Ng, Zhen Huang, Xiaodan Zhuang

This paper introduces a novel training framework called Focused Discriminative Training (FDT) to further improve streaming word-piece end-to-end (E2E) automatic speech recognition (ASR) models trained using either CTC or an interpolation of CTC and attention-based encoder-decoder (AED) loss.

Automatic Speech Recognition (ASR) +4
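The abstract mentions models trained with CTC alone or with an interpolation of CTC and AED losses. Below is a minimal numpy sketch of the standard CTC forward algorithm plus the interpolated objective; it is a generic illustration, not the paper's FDT method, and the weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def ctc_neg_log_likelihood(log_probs, target, blank=0):
    """CTC forward algorithm: -log p(target | log_probs).
    log_probs: (T, V) per-frame log posteriors; target: label ids (no blanks)."""
    T, V = log_probs.shape
    # Extended label sequence: blanks between and around the labels.
    ext = [blank]
    for c in target:
        ext += [c, blank]
    S = len(ext)
    alpha = np.full(S, -np.inf)
    alpha[0] = log_probs[0, ext[0]]
    if S > 1:
        alpha[1] = log_probs[0, ext[1]]
    for t in range(1, T):
        new = np.full(S, -np.inf)
        for s in range(S):
            # Allowed transitions: stay, advance by one, or skip a blank
            # (only between distinct non-blank labels).
            cands = [alpha[s]]
            if s >= 1:
                cands.append(alpha[s - 1])
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[s - 2])
            new[s] = np.logaddexp.reduce(cands) + log_probs[t, ext[s]]
        alpha = new
    # Valid endings: final blank or final label.
    return -np.logaddexp(alpha[S - 1], alpha[S - 2] if S > 1 else -np.inf)

def interpolated_loss(ctc_loss, aed_loss, lam=0.3):
    """Interpolated CTC/AED training objective; lam is a hypothetical weight."""
    return lam * ctc_loss + (1.0 - lam) * aed_loss
```

With two frames, a vocabulary of {blank, 1}, uniform posteriors of 0.5, and target `[1]`, the alignments 11, 01, and 10 all collapse to "1", so the total probability is 0.75 and the loss is −log 0.75.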

A Treatise On FST Lattice Based MMI Training

no code implementations · 17 Oct 2022 · Adnan Haider, Tim Ng, Zhen Huang, Xingyu Na, Antti Veikko Rosti

Maximum mutual information (MMI) has become one of the two de facto methods for sequence-level training of speech recognition acoustic models.

Speech Recognition
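As a rough illustration of the MMI criterion the abstract refers to: MMI maximises the log-score of the reference transcription relative to the log-sum of all competing hypotheses in the lattice. The sketch below uses a flat list of toy hypothesis scores in place of an FST lattice; the function names are illustrative, not from the paper.

```python
import numpy as np

def mmi_objective(ref_log_score, hyp_log_scores):
    """Toy MMI: log p(reference) - log sum_h p(h) over all hypotheses
    (the reference is assumed to be among hyp_log_scores)."""
    return ref_log_score - np.logaddexp.reduce(hyp_log_scores)

def denominator_posteriors(hyp_log_scores):
    """Hypothesis posteriors that appear in the MMI gradient."""
    z = np.logaddexp.reduce(hyp_log_scores)
    return np.exp(hyp_log_scores - z)
```

The gradient of the objective with respect to each hypothesis score is an indicator for the reference minus these posteriors, which is what lattice-based implementations accumulate arc by arc.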

A Distributed Optimisation Framework Combining Natural Gradient with Hessian-Free for Discriminative Sequence Training

no code implementations · 12 Mar 2021 · Adnan Haider, Chao Zhang, Florian L. Kreyssig, Philip C. Woodland

This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training that can operate efficiently in a distributed manner.

Automatic Speech Recognition (ASR) +2
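The core natural-gradient ingredient of such a framework can be sketched in a few lines: precondition the average gradient by a damped empirical Fisher matrix built from per-sample score vectors. This is a generic illustration under assumed hyperparameters (`damping`, `lr`), not the paper's distributed NGHF algorithm.

```python
import numpy as np

def natural_gradient_step(per_sample_grads, damping=1e-3, lr=0.1):
    """One natural-gradient update direction.
    per_sample_grads: (N, D) gradients of the log-likelihood per sample."""
    n, d = per_sample_grads.shape
    g = per_sample_grads.mean(axis=0)                  # average gradient
    F = per_sample_grads.T @ per_sample_grads / n      # empirical Fisher
    F += damping * np.eye(d)                           # Tikhonov damping
    return lr * np.linalg.solve(F, g)                  # lr * F^{-1} g
```

In a distributed setting each worker would contribute its own gradient statistics to `g` and `F` before the solve, but forming `F` explicitly only scales to small parameter counts, which is where Hessian-free solvers come in.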

Combining Natural Gradient with Hessian Free Methods for Sequence Training

no code implementations · 3 Oct 2018 · Adnan Haider, P. C. Woodland

This paper presents a new optimisation approach to train Deep Neural Networks (DNNs) with discriminative sequence criteria.

Speech Recognition
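The "Hessian-free" half of a combined NG/HF approach avoids forming the curvature matrix at all: the linear system is solved by conjugate gradient using only matrix-vector products (Fisher- or Gauss-Newton-vector products in practice). A minimal matrix-free CG sketch, not taken from the paper:

```python
import numpy as np

def conjugate_gradient(mat_vec, b, iters=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A, using only
    the product mat_vec(v) = A @ v (the Hessian-free idea)."""
    x = np.zeros_like(b)
    r = b - mat_vec(x)          # residual
    p = r.copy()                # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = mat_vec(p)
        alpha = rs / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Because only `mat_vec` is needed, the curvature matrix never has to fit in memory, which is what makes second-order sequence training of large DNNs feasible.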

Sequence Training of DNN Acoustic Models With Natural Gradient

no code implementations · 6 Apr 2018 · Adnan Haider, Philip C. Woodland

Deep Neural Network (DNN) acoustic models often use discriminative sequence training that optimises an objective function that better approximates the word error rate (WER) than frame-based training.

Computational Efficiency

A Common Framework for Natural Gradient and Taylor based Optimisation using Manifold Theory

no code implementations · 26 Mar 2018 · Adnan Haider

This technical report constructs a theoretical framework to relate standard Taylor approximation based optimisation methods with Natural Gradient (NG), a method which is Fisher efficient with probabilistic models.
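The two families of update rules being related can be stated side by side; the following is a standard textbook formulation, not the report's manifold-theoretic derivation:

```latex
% Second-order Taylor (Newton) step for a loss L(\theta):
\Delta\theta_{\text{Newton}} = -\,\eta\, H^{-1} \nabla_{\theta} L,
\qquad H = \nabla^2_{\theta} L .
% Natural gradient step, preconditioning by the Fisher information
% of the model distribution p_\theta:
\Delta\theta_{\text{NG}} = -\,\eta\, F^{-1} \nabla_{\theta} L,
\qquad F = \mathbb{E}_{p_\theta}\!\left[\nabla_{\theta}\log p_\theta \,\nabla_{\theta}\log p_\theta^{\top}\right].
```

Both are preconditioned gradient steps; they differ only in the curvature matrix used, and for probabilistic models the Fisher matrix coincides with the Gauss-Newton approximation to the Hessian under common loss/output pairings, which is the bridge such frameworks exploit.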
