Search Results for author: Md Akmal Haidar

Found 8 papers, 0 papers with code

Conformer with dual-mode chunked attention for joint online and offline ASR

no code implementations22 Jun 2022 Felix Weninger, Marco Gaudesi, Md Akmal Haidar, Nicola Ferri, Jesús Andrés-Ferrer, Puming Zhan

In the dual-mode Conformer Transducer model, layers can function in online or offline mode while sharing parameters, and in-place knowledge distillation from offline to online mode is applied in training to improve online accuracy.
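The online/offline distinction above comes down to how far each frame may attend. A minimal sketch of that idea, assuming a simple chunked attention mask (the function name, chunk size, and plain-list representation are illustrative, not the paper's implementation):

```python
def attention_mask(seq_len, mode, chunk_size=4):
    """Build a boolean attention mask for one layer.

    In 'offline' mode every frame attends to all frames (full context);
    in 'online' mode each frame attends only up to the end of its own
    chunk, so earlier chunks are visible but future chunks are not.
    """
    mask = [[False] * seq_len for _ in range(seq_len)]
    for q in range(seq_len):
        for k in range(seq_len):
            if mode == "offline":
                mask[q][k] = True
            else:
                # Online: visible keys end at the query's chunk boundary.
                mask[q][k] = k < (q // chunk_size + 1) * chunk_size
    return mask
```

Because both modes share the same layer parameters and differ only in the mask, the offline pass can serve as the teacher for in-place distillation into the online pass during training.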

Knowledge Distillation

RAIL-KD: RAndom Intermediate Layer Mapping for Knowledge Distillation

no code implementations Findings (NAACL) 2022 Md Akmal Haidar, Nithin Anchuri, Mehdi Rezagholizadeh, Abbas Ghaddar, Philippe Langlais, Pascal Poupart

To address these problems, we propose a RAndom Intermediate Layer Knowledge Distillation (RAIL-KD) approach, in which intermediate layers of the teacher model are selected randomly and distilled into the intermediate layers of the student model.
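The random layer selection above can be sketched in a few lines. This is a minimal illustration, assuming a 12-layer teacher and a 4-layer student; the helper name and dict-based mapping are hypothetical, not the authors' code:

```python
import random

def rail_kd_layer_mapping(n_teacher_layers, n_student_layers, seed=None):
    """Randomly pick n_student_layers distinct teacher layers (sorted to
    preserve depth order) and map them one-to-one onto the student's
    intermediate layers, as in the RAIL-KD idea."""
    rng = random.Random(seed)
    chosen = sorted(rng.sample(range(n_teacher_layers), n_student_layers))
    return {student: teacher for student, teacher in enumerate(chosen)}

# e.g. distilling a 12-layer teacher into a 4-layer student
mapping = rail_kd_layer_mapping(12, 4, seed=0)
```

Re-sampling the mapping at each training step (rather than fixing it once) is what makes the layer selection act as a regularizer.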

Knowledge Distillation

Fine-tuning of Pre-trained End-to-end Speech Recognition with Generative Adversarial Networks

no code implementations10 Mar 2021 Md Akmal Haidar, Mehdi Rezagholizadeh

In this paper, we introduce a novel framework for fine-tuning a pre-trained ASR model using the GAN objective where the ASR model acts as a generator and a discriminator tries to distinguish the ASR output from the real data.
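The generator/discriminator roles above follow the standard GAN objective. A minimal sketch of the two loss terms, assuming discriminator outputs are probabilities in (0, 1) and using the common non-saturating formulation (the paper's exact objective and any auxiliary ASR loss terms may differ):

```python
import math

def gan_finetune_losses(d_real, d_fake):
    """Compute GAN losses for ASR fine-tuning.

    d_real: discriminator probability that a real transcription is real.
    d_fake: discriminator probability that an ASR hypothesis is real.
    The ASR model plays the generator: it is updated to push d_fake up,
    i.e. to make its outputs indistinguishable from real text.
    """
    disc_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    gen_loss = -math.log(d_fake)  # ASR (generator) update term
    return disc_loss, gen_loss
```

In practice the generator term would be combined with the usual ASR training loss so that fine-tuning improves recognition rather than just fooling the discriminator.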

Speech Recognition

Semi-Supervised Regression with Generative Adversarial Networks for End to End Learning in Autonomous Driving

no code implementations13 Nov 2018 Mehdi Rezagholizadeh, Md Akmal Haidar

We performed several experiments on a publicly available driving dataset to evaluate our proposed method, and the results are very promising.

Autonomous Driving Regression
