Search Results for author: Erfan Loweimi

Found 7 papers, 3 papers with code

RCT: Random Consistency Training for Semi-supervised Sound Event Detection

2 code implementations • 21 Oct 2021 • Nian Shao, Erfan Loweimi, Xiaofei Li

Sound event detection (SED), as a core module of acoustic environmental analysis, suffers from the problem of data deficiency.

Data Augmentation • Event Detection • +1

Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers

no code implementations • 9 Feb 2021 • Shucong Zhang, Cong-Thanh Do, Rama Doddipatla, Erfan Loweimi, Peter Bell, Steve Renals

Although the lower layers of a deep neural network learn features which are transferable across datasets, these layers are not transferable within the same dataset.

Automatic Speech Recognition • Speech Recognition

On the Usefulness of Self-Attention for Automatic Speech Recognition with Transformers

no code implementations • 8 Nov 2020 • Shucong Zhang, Erfan Loweimi, Peter Bell, Steve Renals

Self-attention models such as Transformers, which can capture temporal relationships without being limited by the distance between events, have given competitive speech recognition results.

Automatic Speech Recognition • Speech Recognition
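The snippet above notes that self-attention relates any two positions in a sequence directly, regardless of how far apart they are. A minimal NumPy sketch of scaled dot-product self-attention illustrates this (hypothetical toy code, not the paper's model; it omits the learned query/key/value projections and multiple heads a real Transformer uses):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence X of shape (T, d).

    Every output position is a weighted mix of ALL positions, so the
    interaction does not weaken with distance, unlike an RNN, which must
    propagate information step by step through intermediate states.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                        # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ X, weights                          # outputs and attention map

X = np.random.default_rng(0).normal(size=(5, 8))         # toy 5-frame sequence
Y, A = self_attention(X)
```

Each row of `A` is a probability distribution over all five frames, so frame 0 can attend to frame 4 just as easily as to frame 1.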

Stochastic Attention Head Removal: A simple and effective method for improving Transformer Based ASR Models

1 code implementation • 8 Nov 2020 • Shucong Zhang, Erfan Loweimi, Peter Bell, Steve Renals

To the best of our knowledge, we have achieved state-of-the-art end-to-end Transformer-based model performance on Switchboard and AMI.

Automatic Speech Recognition • Speech Recognition

When Can Self-Attention Be Replaced by Feed Forward Layers?

no code implementations • 28 May 2020 • Shucong Zhang, Erfan Loweimi, Peter Bell, Steve Renals

Recently, self-attention models such as Transformers have given competitive results compared to recurrent neural network systems in speech recognition.

Speech Recognition

Acoustic Model Adaptation from Raw Waveforms with SincNet

1 code implementation • 30 Sep 2019 • Joachim Fainberg, Ondřej Klejch, Erfan Loweimi, Peter Bell, Steve Renals

Raw waveform acoustic modelling has recently gained interest due to neural networks' ability to learn feature extraction, and the potential for finding better representations for a given scenario than hand-crafted features.

Acoustic Modelling
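SincNet, mentioned in the title above, learns feature extraction from raw waveforms by parameterising each convolutional filter with just two cutoff frequencies of an ideal band-pass filter. A NumPy sketch of that sinc-parameterised filter (an illustrative approximation, assuming fixed rather than learned cutoffs; the actual SincNet implementation differs in windowing and initialisation details):

```python
import numpy as np

def sinc_bandpass(f1, f2, length=101, fs=16000):
    """Band-pass FIR filter defined only by its two cutoffs f1 < f2 (Hz),
    in the spirit of SincNet-style learnable front ends:
        g[n] = 2*f2*sinc(2*f2*n) - 2*f1*sinc(2*f1*n)
    i.e. the difference of two low-pass sinc filters.
    """
    f1, f2 = f1 / fs, f2 / fs                    # normalise to cycles/sample
    n = np.arange(length) - (length - 1) / 2     # time axis centred at 0
    low = 2 * f1 * np.sinc(2 * f1 * n)           # np.sinc(x) = sin(pi*x)/(pi*x)
    high = 2 * f2 * np.sinc(2 * f2 * n)
    band = high - low                            # pass band between f1 and f2
    band *= np.hamming(length)                   # window to reduce ripple
    return band

filt = sinc_bandpass(300.0, 3000.0)              # e.g. a speech-band filter
```

Because only the two cutoffs would be trainable, such a front end has far fewer parameters per filter than a free-form convolution kernel.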

Top-down training for neural networks

no code implementations • 25 Sep 2019 • Shucong Zhang, Cong-Thanh Do, Rama Doddipatla, Erfan Loweimi, Peter Bell, Steve Renals

Interpreting the top layers as a classifier and the lower layers as a feature extractor, one can hypothesize that unwanted network convergence may occur when the classifier has overfit with respect to the feature extractor.

Speech Recognition
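The classifier/feature-extractor view above suggests a two-phase schedule: fit the upper layers on frozen lower-layer features first, then train everything jointly. A hypothetical toy NumPy sketch of that idea (not the paper's actual training recipe; network size, learning rates, and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2-class data.
X = rng.normal(size=(200, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)

# Two-layer net: lower "feature extractor" W1, upper "classifier" w2.
W1 = rng.normal(scale=0.1, size=(10, 16))
w2 = rng.normal(scale=0.1, size=16)

def forward(X, W1, w2):
    H = np.tanh(X @ W1)                       # features from the lower layer
    p = 1 / (1 + np.exp(-(H @ w2)))           # classifier output probability
    return H, p

# Phase 1: train only the upper classifier on frozen lower-layer features.
for _ in range(200):
    H, p = forward(X, W1, w2)
    w2 -= 1.0 * (H.T @ (p - y) / len(y))      # logistic-loss gradient w.r.t. w2

# Phase 2: unfreeze the feature extractor and train both jointly.
for _ in range(200):
    H, p = forward(X, W1, w2)
    d = (p - y) / len(y)
    grad_W1 = X.T @ (np.outer(d, w2) * (1 - H**2))   # backprop through tanh
    w2 -= 1.0 * (H.T @ d)
    W1 -= 1.0 * grad_W1

_, p = forward(X, W1, w2)
acc = float(((p > 0.5) == y).mean())
```

Freezing the lower layers in phase 1 prevents the classifier from co-adapting to (and overfitting against) a still-changing feature extractor, which is the failure mode the snippet above hypothesizes.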
