EdgeCRNN: an edge-computing oriented model of acoustic feature enhancement for keyword spotting

14 Mar 2021  ·  Yungen Wei, Zheng Gong, Shunzhi Yang, Kai Ye, Yamin Wen

Keyword Spotting (KWS) is a significant branch of Automatic Speech Recognition (ASR) and is widely used on edge computing devices. The goal of KWS is to provide high accuracy with a low False Alarm Rate (FAR) while reducing the costs of memory, computation, and latency. However, the limited resources of edge computing devices make KWS applications challenging. Lightweight deep-learning models and structures have achieved good results on the KWS task while maintaining efficient performance. In this paper, we present a new Convolutional Recurrent Neural Network (CRNN) architecture named EdgeCRNN for edge computing devices. EdgeCRNN is based on depthwise separable convolution and a residual structure, and uses a feature-enhancement method. On the Google Speech Commands Dataset, experimental results show that EdgeCRNN can process 11.1 audio samples per second on a Raspberry Pi 3B+, 2.2 times the rate of Tpool2, while its accuracy reaches 98.05%, which is also competitive with Tpool2.
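The abstract attributes EdgeCRNN's efficiency partly to depthwise separable convolution. A minimal sketch of why that layer is cheap: it factors a standard convolution into a per-channel depthwise filter plus a 1×1 pointwise mix. The kernel size and channel counts below are illustrative examples, not the paper's actual configuration.

```python
# Illustrative parameter-count comparison between a standard convolution
# and a depthwise separable convolution. Shapes are example values only.

def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution mixes all input channels
    # for every output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter applied per input channel.
    depthwise = k * k * c_in
    # Pointwise step: a 1 x 1 convolution that mixes channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128  # hypothetical layer shape
std = standard_conv_params(k, c_in, c_out)        # 73728
sep = depthwise_separable_params(k, c_in, c_out)  # 8768
print(f"standard: {std}, separable: {sep}, savings: {std / sep:.1f}x")
```

For these example shapes the separable form uses roughly 8× fewer parameters (and proportionally fewer multiply-adds), which is the kind of saving that makes inference on a Raspberry Pi-class device feasible.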

Results from the Paper


Ranked #8 on Keyword Spotting on Google Speech Commands (Google Speech Commands V2 12 metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Keyword Spotting | Google Speech Commands | EdgeCRNN 2.0× | Google Speech Commands V2 12 | 98.05 | #8 |
