Search Results for author: Adnan Haider

Found 5 papers, 0 papers with code

A Treatise On FST Lattice Based MMI Training

no code implementations · 17 Oct 2022 · Adnan Haider, Tim Ng, Zhen Huang, Xingyu Na, Antti Veikko Rosti

Maximum mutual information (MMI) has become one of the two de facto methods for sequence-level training of speech recognition acoustic models.
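
For reference, the lattice-based MMI criterion is usually written in the following form (a textbook statement of the objective; the notation below is ours and the paper's exact formulation may differ):

\mathcal{F}_{\mathrm{MMI}}(\theta) = \sum_{u} \log \frac{p_{\theta}(O_u \mid \mathcal{M}_{W_u})^{\kappa}\, P(W_u)}{\sum_{W} p_{\theta}(O_u \mid \mathcal{M}_{W})^{\kappa}\, P(W)}

where O_u is the acoustic observation sequence of utterance u, W_u its reference transcription, \mathcal{M}_W the HMM sequence for hypothesis W, P(W) the language-model probability, and \kappa an acoustic scaling factor; in lattice-based training the denominator sum runs over the paths of a recognition lattice rather than all word sequences.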

Speech Recognition

A Distributed Optimisation Framework Combining Natural Gradient with Hessian-Free for Discriminative Sequence Training

no code implementations · 12 Mar 2021 · Adnan Haider, Chao Zhang, Florian L. Kreyssig, Philip C. Woodland

This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training that can operate efficiently in a distributed manner.
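
A minimal sketch of how a natural-gradient and Hessian-free combination can be realised (our reading of the general idea, not necessarily the paper's exact formulation): rather than inverting the Fisher matrix explicitly, the natural-gradient linear system is solved approximately with conjugate gradients, as in Hessian-free optimisation,

\mathcal{F}(\theta_t)\, d_t = -\nabla_{\theta}\mathcal{L}(\theta_t), \qquad \theta_{t+1} = \theta_t + \eta_t\, d_t,

where each conjugate-gradient iteration only needs a matrix–vector product \mathcal{F}(\theta_t) v, which can be computed without forming \mathcal{F} and averaged across workers; this is one reason such an approach can operate efficiently in a distributed setting.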

Automatic Speech Recognition (ASR) +2

Combining Natural Gradient with Hessian Free Methods for Sequence Training

no code implementations · 3 Oct 2018 · Adnan Haider, P. C. Woodland

This paper presents a new optimisation approach to train Deep Neural Networks (DNNs) with discriminative sequence criteria.

Speech Recognition

Sequence Training of DNN Acoustic Models With Natural Gradient

no code implementations · 6 Apr 2018 · Adnan Haider, Philip C. Woodland

Deep Neural Network (DNN) acoustic models often use discriminative sequence training that optimises an objective function that better approximates the word error rate (WER) than frame-based training.
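
For context, the natural-gradient update replaces the Euclidean gradient step with a step preconditioned by the Fisher information matrix (standard form; the symbols below are ours, not taken from the paper):

\theta_{t+1} = \theta_t - \eta_t\, \mathcal{F}(\theta_t)^{-1} \nabla_{\theta}\mathcal{L}(\theta_t), \qquad \mathcal{F}(\theta) = \mathbb{E}\big[ \nabla_{\theta}\log p_{\theta}(x)\, \nabla_{\theta}\log p_{\theta}(x)^{\top} \big],

where \mathcal{L} is the sequence-level objective being optimised and \eta_t a learning rate.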

Computational Efficiency

A Common Framework for Natural Gradient and Taylor based Optimisation using Manifold Theory

no code implementations · 26 Mar 2018 · Adnan Haider

This technical report constructs a theoretical framework that relates standard Taylor-approximation-based optimisation methods to Natural Gradient (NG), a method which is Fisher efficient for probabilistic models.
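
As a rough orientation for the comparison the report draws (our summary, not the report's own statement): a second-order Taylor approximation yields the Newton-type step d = -\mathbf{H}^{-1}\nabla_{\theta}\mathcal{L}, with \mathbf{H} the Hessian of the loss, while natural gradient yields d = -\mathcal{F}^{-1}\nabla_{\theta}\mathcal{L}, with \mathcal{F} the Fisher information matrix, i.e. the same update template with the Euclidean curvature replaced by the metric that the KL divergence induces on the model's statistical manifold.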
