Search Results for author: Shiv Vitaladevuni

Found 5 papers, 0 papers with code

Max-Pooling Loss Training of Long Short-Term Memory Networks for Small-Footprint Keyword Spotting

no code implementations • 5 May 2017 • Ming Sun, Anirudh Raju, George Tucker, Sankaran Panchapagesan, Geng-Shen Fu, Arindam Mandal, Spyros Matsoukas, Nikko Strom, Shiv Vitaladevuni

Finally, the max-pooling loss trained LSTM initialized with a cross-entropy pre-trained network shows the best performance, yielding a 67.6% relative reduction in the Area Under the Curve (AUC) measure compared to the baseline feed-forward DNN.

Small-Footprint Keyword Spotting
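As context for the method described above, here is a minimal PyTorch sketch of a max-pooling loss over frame-level keyword posteriors: positive utterances back-propagate only through the single frame with the highest keyword posterior, while negative utterances use all frames. The function name, tensor shapes, and the convention that label 0 is the background class are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def max_pooling_loss(logits, labels):
    """Hypothetical max-pooling loss sketch.
    logits: (batch, frames, classes) frame-level scores, e.g. from an LSTM.
    labels: (batch,) keyword index per utterance; 0 = background (assumption).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    losses = []
    for lp, y in zip(log_probs, labels):
        if y == 0:
            # Negative utterance: every frame should predict background.
            losses.append(-lp[:, 0].mean())
        else:
            # Positive utterance: back-propagate only through the frame
            # with the highest keyword posterior (the max-pooled frame).
            losses.append(-lp[:, y].max())
    return torch.stack(losses).mean()
```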

Accurate Detection of Wake Word Start and End Using a CNN

no code implementations • 9 Aug 2020 • Christin Jose, Yuriy Mishchenko, Thibaud Senechal, Anish Shah, Alex Escott, Shiv Vitaladevuni

In this paper, we propose two new methods for detecting the endpoints of wake words in neural keyword spotting (KWS), both based on single-stage word-level neural networks.
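As a generic illustration only (not either of the paper's two proposed methods), the sketch below reads a wake word span out of frame-level posteriors by thresholding around the peak; the `trigger` and `boundary` values are invented for this example.

```python
import numpy as np

def detect_ww_span(posteriors, trigger=0.85, boundary=0.5):
    """Hypothetical endpoint sketch.
    posteriors: (frames,) wake word posterior per frame.
    Returns (start, end) frame indices, or None if no trigger.
    """
    peak = int(np.argmax(posteriors))
    if posteriors[peak] < trigger:
        return None  # utterance never crosses the detection threshold
    # Walk outward from the peak while the posterior stays high.
    start = peak
    while start > 0 and posteriors[start - 1] >= boundary:
        start -= 1
    end = peak
    while end < len(posteriors) - 1 and posteriors[end + 1] >= boundary:
        end += 1
    return start, end
```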

On Front-end Gain Invariant Modeling for Wake Word Spotting

no code implementations • 13 Oct 2020 • Yixin Gao, Noah D. Stein, Chieh-Chi Kao, Yunliang Cai, Ming Sun, Tao Zhang, Shiv Vitaladevuni

Since the wake word (WW) model is trained on audio processed by the audio front-end (AFE), its performance is sensitive to AFE variations, such as gain changes.
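For background on the gain sensitivity mentioned above, here is a sketch of random-gain augmentation, one common way to expose a model to gain variation during training; the dB range is an assumption, and the paper's own gain-invariant modeling approach may differ from this.

```python
import numpy as np

def random_gain(waveform, low_db=-6.0, high_db=6.0, rng=None):
    """Apply a random gain in dB to a float waveform (illustrative range)."""
    rng = rng or np.random.default_rng()
    gain_db = rng.uniform(low_db, high_db)
    # Convert dB to a linear amplitude factor and scale the waveform.
    return waveform * (10.0 ** (gain_db / 20.0))
```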

Towards Data-efficient Modeling for Wake Word Spotting

no code implementations • 13 Oct 2020 • Yixin Gao, Yuriy Mishchenko, Anish Shah, Spyros Matsoukas, Shiv Vitaladevuni

Wake word (WW) spotting is challenging in far-field conditions, not only because of interference in signal transmission but also because of the complexity of acoustic environments.

Data Augmentation
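Given the Data Augmentation tag, the snippet below shows one standard way to simulate far-field acoustic conditions: mixing a noise clip into speech at a target signal-to-noise ratio. It is a generic sketch, not the paper's recipe; it assumes float waveforms and a noise clip at least as long as the speech.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db, rng=None):
    """Mix noise into speech at a target SNR in dB; generic sketch."""
    rng = rng or np.random.default_rng()
    # Pick a random noise segment as long as the speech
    # (assumes len(noise) >= len(speech)).
    offset = rng.integers(0, len(noise) - len(speech) + 1)
    segment = noise[offset:offset + len(speech)]
    speech_pow = np.mean(speech ** 2) + 1e-12
    noise_pow = np.mean(segment ** 2) + 1e-12
    # Scale the noise so that speech_pow / noise_pow hits the target SNR.
    scale = np.sqrt(speech_pow / (noise_pow * 10.0 ** (snr_db / 10.0)))
    return speech + scale * segment
```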

Sub 8-Bit Quantization of Streaming Keyword Spotting Models for Embedded Chipsets

no code implementations • 13 Jul 2022 • Lu Zeng, Sree Hari Krishnan Parthasarathi, Yuzong Liu, Alex Escott, Santosh Kumar Cheekatmalla, Nikko Strom, Shiv Vitaladevuni

We organize our results in two embedded chipset settings: (a) with the commodity ARM NEON instruction set and 8-bit containers, we present accuracy, CPU, and memory results using sub 8-bit weights (4-, 5-, and 8-bit) and 8-bit quantization for the rest of the network; (b) with off-the-shelf neural network accelerators, we present accuracy results for a range of weight bit widths (1- and 5-bit) and project the reduction in memory utilization.

Keyword Spotting • Quantization
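To make the 8-bit-container setting concrete, here is a NumPy sketch of symmetric 4-bit weight quantization with two signed 4-bit codes packed per 8-bit container; the scheme and function names are illustrative assumptions, not the paper's quantizer.

```python
import numpy as np

def quantize_4bit(weights):
    """Quantize a float tensor to signed 4-bit codes packed in uint8."""
    scale = np.max(np.abs(weights)) / 7.0 + 1e-12  # signed 4-bit range [-8, 7]
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    flat = q.reshape(-1)
    if flat.size % 2:
        flat = np.append(flat, np.int8(0))  # pad to an even count
    lo = flat[0::2].astype(np.uint8) & 0x0F
    hi = flat[1::2].astype(np.uint8) & 0x0F
    return (lo | (hi << 4)).astype(np.uint8), scale  # two codes per byte

def dequantize_4bit(packed, scale, n):
    """Recover n float weights from packed 4-bit codes."""
    nibbles = np.stack([packed & 0x0F, packed >> 4], axis=1).reshape(-1)[:n]
    q = nibbles.astype(np.int8)
    q[q > 7] -= 16  # sign-extend 4-bit two's complement
    return q.astype(np.float32) * scale
```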
