A Hybrid CNN-BiLSTM Voice Activity Detector

5 Mar 2021  ·  Nicholas Wilkinson, Thomas Niesler

This paper presents a new hybrid architecture for voice activity detection (VAD) incorporating both convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) layers trained in an end-to-end manner. In addition, we focus specifically on optimising the computational efficiency of our architecture in order to deliver robust performance in difficult in-the-wild noise conditions in a severely under-resourced setting. Nested k-fold cross-validation was used to explore the hyperparameter space, and the trade-off between optimal parameters and model size is discussed. The performance effect of a BiLSTM layer compared to a unidirectional LSTM layer was also considered. We compare our systems with three established baselines on the AVA-Speech dataset. We find that significantly smaller models with near optimal parameters perform on par with larger models trained with optimal parameters. BiLSTM layers were shown to improve accuracy over unidirectional layers by $\approx$2% absolute on average. With an area under the curve (AUC) of 0.951, our system outperforms all baselines, including a much larger ResNet system, particularly in difficult noise conditions.
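The hybrid architecture described above can be sketched as a small frame-level model: a convolutional front end extracts local spectro-temporal features, a bidirectional LSTM integrates context in both time directions, and a linear layer produces a per-frame speech score. This is a minimal illustrative sketch in PyTorch; the layer counts, channel sizes, and input features (e.g. 40 mel bands) are assumptions for demonstration, not the paper's published configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Hypothetical CNN-BiLSTM voice activity detector (illustrative sizes)."""

    def __init__(self, n_mels=40, hidden=64):
        super().__init__()
        # CNN front end: pool over frequency only, so the time axis is preserved
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        # BiLSTM over the time axis; doubles the feature dimension at the output
        self.bilstm = nn.LSTM(16 * (n_mels // 2), hidden,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)  # per-frame speech score

    def forward(self, x):            # x: (batch, 1, n_mels, time)
        f = self.cnn(x)              # (batch, 16, n_mels // 2, time)
        b, c, m, t = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, t, c * m)  # (batch, time, feats)
        h, _ = self.bilstm(f)        # (batch, time, 2 * hidden)
        return torch.sigmoid(self.out(h)).squeeze(-1)   # (batch, time)
```

Swapping `bidirectional=True` for `False` (and `2 * hidden` for `hidden`) gives the unidirectional LSTM variant the paper compares against.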


Results from the Paper


 Ranked #1 on Activity Detection on AVA-Speech (ROC-AUC metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Activity Detection | AVA-Speech | CNN-BiLSTM_best | ROC-AUC | 95.14 | #1 |
| Activity Detection | AVA-Speech | CNN-BiLSTM_small | ROC-AUC | 95.13 | #2 |
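The ROC-AUC values reported above can be computed without an explicit ROC sweep, using the rank-statistic (Mann-Whitney) formulation: the AUC equals the probability that a randomly chosen speech frame receives a higher score than a randomly chosen non-speech frame. A minimal sketch, assuming distinct scores (tied scores would need midrank averaging):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC-AUC via ranks: labels are 1 (speech) / 0 (non-speech),
    scores are the detector's per-frame outputs. Assumes no ties."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic of the positive class, normalised to [0, 1]
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

A perfect detector, scoring every speech frame above every non-speech frame, reaches 1.0; chance-level scoring gives 0.5, which is why the 0.951 reported in the abstract indicates strong separation.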
