no code implementations • 13 Jul 2022 • Sree Hari Krishnan Parthasarathi, Lu Zeng, Christin Jose, Joseph Wang
To train effectively with a mix of human and teacher labeled data, we develop a teacher labeling strategy based on confidence heuristics to reduce the entropy of the label distribution produced by the teacher model; the data is then sampled to match the marginal label distribution.
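The abstract does not spell out the exact heuristics, but one common reading is: keep only teacher predictions above a confidence threshold, collapse them to hard labels (driving their entropy to zero), then resample per class so the kept data matches a target label marginal. A minimal sketch under those assumptions (all names and thresholds here are hypothetical, not from the paper):

```python
import random
from collections import defaultdict

def filter_and_match(teacher_data, threshold, target_marginal, n_samples, seed=0):
    """Keep high-confidence teacher labels, then resample to a target marginal.

    teacher_data: list of (example, [p_class0, p_class1, ...]) pairs.
    threshold: minimum top-class probability to accept a teacher label.
    target_marginal: dict label -> desired fraction (e.g. estimated from
        the human-labeled portion of the data).
    """
    # Confidence heuristic: accept only predictions the teacher is sure
    # about, and collapse them to hard labels (entropy drops to zero).
    by_label = defaultdict(list)
    for example, probs in teacher_data:
        label = max(range(len(probs)), key=lambda i: probs[i])
        if probs[label] >= threshold:
            by_label[label].append(example)

    # Draw per-label counts so the result matches the target marginal.
    rng = random.Random(seed)
    sampled = []
    for label, frac in target_marginal.items():
        pool = by_label.get(label, [])
        k = min(len(pool), round(frac * n_samples))
        sampled.extend((ex, label) for ex in rng.sample(pool, k))
    return sampled
```

The two knobs interact: a higher threshold lowers the entropy of the kept labels but shrinks the pools available for marginal matching.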
no code implementations • 15 Jun 2022 • Christin Jose, Joseph Wang, Grant P. Strimel, Mohammad Omar Khursheed, Yuriy Mishchenko, Brian Kulis
We also show that when our approach is used in conjunction with a max-pooling loss, it reduces false accepts by 25% relative at a fixed latency when compared to a cross-entropy loss.
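For context, a max-pooling loss applies the cross-entropy criterion only at the frame where the keyword posterior peaks, letting the network choose where in the utterance the keyword occurs rather than supervising every frame. A NumPy sketch of the general idea (a simplified two-class illustration, not the exact recipe from this paper):

```python
import numpy as np

def max_pooling_loss(logits, is_keyword):
    """Utterance-level max-pooling loss over frame scores.

    logits: (T, 2) frame-level scores, column 1 = keyword class.
    is_keyword: True if the utterance contains the wakeword.
    """
    # Frame-level posteriors via a numerically stable softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Select the single frame where the keyword posterior peaks.
    t = int(np.argmax(probs[:, 1]))
    if is_keyword:
        # Positives: cross entropy only at the peak frame.
        return -np.log(probs[t, 1])
    # Negatives: penalize the most confident false frame.
    return -np.log(probs[t, 0])
```

Compared with per-frame cross entropy, this avoids forcing the model to match a hand-aligned keyword segment, which is one reason it can trade off accuracy against detection latency differently.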
no code implementations • 29 Sep 2021 • Mohammad Omar Khursheed, Christin Jose, Rajath Kumar, GengShen Fu, Brian Kulis, Santosh Kumar Cheekatmalla
In this work, we propose Tiny-CRNN (Tiny Convolutional Recurrent Neural Network) models applied to the problem of wakeword detection, and augment them with scaled dot product attention.
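The scaled dot product attention used to augment these models is the standard formulation, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, which can be sketched in a few lines of NumPy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted sum of values
```

In a small-footprint CRNN, such a layer can pool the recurrent outputs over time into a fixed-size summary before the final classifier, at a modest parameter cost.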
no code implementations • 25 Nov 2020 • Mohammad Omar Khursheed, Christin Jose, Rajath Kumar, GengShen Fu, Brian Kulis, Santosh Kumar Cheekatmalla
In this work, we propose small footprint Convolutional Recurrent Neural Network models applied to the problem of wakeword detection and augment them with scaled dot product attention.
no code implementations • 9 Aug 2020 • Christin Jose, Yuriy Mishchenko, Thibaud Senechal, Anish Shah, Alex Escott, Shiv Vitaladevuni
In this paper, we propose two new methods for detecting the endpoints of wake words in neural KWS that use single-stage word-level neural networks.
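As a point of reference for what endpoint detection means here: a naive baseline reads the endpoint off the per-frame wakeword posterior trajectory with hysteresis thresholds. The sketch below is that baseline for illustration only; the paper's contribution is to predict endpoints directly with single-stage word-level networks instead (the thresholds are hypothetical):

```python
def estimate_endpoint(posteriors, trigger=0.8, release=0.3):
    """Estimate the wakeword end frame from per-frame posteriors.

    A simple hysteresis baseline, not the paper's method: the endpoint is
    the first frame after a detection where the posterior decays below
    the release threshold.
    """
    fired = False
    for t, p in enumerate(posteriors):
        if p >= trigger:
            fired = True          # detector has triggered
        elif fired and p < release:
            return t              # posterior decayed: call this the end
    return len(posteriors) - 1 if fired else None
```

Such threshold-based endpointing is sensitive to posterior smoothing and threshold choice, which motivates learning the endpoints directly.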