Search Results for author: Yuzong Liu

Found 14 papers, 3 papers with code

On-Device Constrained Self-Supervised Speech Representation Learning for Keyword Spotting via Knowledge Distillation

no code implementations • 6 Jul 2023 • Gene-Ping Yang, Yue Gu, Qingming Tang, Dongsu Du, Yuzong Liu

Our approach uses a teacher-student framework to transfer knowledge from a larger, more complex model to a smaller, lightweight one, with dual-view cross-correlation distillation and the teacher's codebook serving as the learning objectives.

Keyword Spotting • Knowledge Distillation • +1
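
The "dual-view cross-correlation distillation" objective is only named above. As a rough intuition aid, here is a minimal PyTorch sketch of a cross-correlation distillation loss in the Barlow Twins style between student and teacher embeddings; the function name, batch/embedding sizes, and off-diagonal weight are illustrative assumptions, not the paper's implementation.

```python
import torch

def cross_correlation_distillation(student, teacher, off_diag_weight=0.005):
    """Barlow Twins-style loss: pull the (dim x dim) cross-correlation
    between student and teacher embeddings toward the identity matrix."""
    s = (student - student.mean(0)) / (student.std(0) + 1e-6)  # standardize
    t = (teacher - teacher.mean(0)) / (teacher.std(0) + 1e-6)  # per feature
    c = (s.T @ t) / s.size(0)                  # cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + off_diag_weight * off_diag

student = torch.randn(32, 128, requires_grad=True)  # small student output
teacher = torch.randn(32, 128)                      # frozen teacher output
loss = cross_correlation_distillation(student, teacher.detach())
loss.backward()
```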

Small-footprint slimmable networks for keyword spotting

no code implementations • 21 Apr 2023 • Zuhaib Akhtar, Mohammad Omar Khursheed, Dongsu Du, Yuzong Liu

In this work, we present Slimmable Neural Networks applied to the problem of small-footprint keyword spotting.

Small-Footprint Keyword Spotting
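
For readers unfamiliar with slimmable networks, the sketch below shows the core trick: one shared weight matrix served at several widths by slicing, so a single model can run at several footprints. This illustrates the general technique, not the paper's model; the layer sizes are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Linear):
    """A linear layer whose active output width is switched at run time
    by slicing one shared weight matrix (the core slimmable-network idea)."""
    def __init__(self, in_features, out_features):
        super().__init__(in_features, out_features)
        self.width_mult = 1.0                 # fraction of the width in use

    def forward(self, x):
        out = max(1, int(self.out_features * self.width_mult))
        return F.linear(x, self.weight[:out], self.bias[:out])

layer = SlimmableLinear(40, 64)
for w in (0.25, 0.5, 1.0):                    # one model, three footprints
    layer.width_mult = w
    print(w, layer(torch.randn(8, 40)).shape)
```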

Self-supervised speech representation learning for keyword-spotting with light-weight transformers

no code implementations • 7 Mar 2023 • Chenyang Gao, Yue Gu, Francesco Caliva, Yuzong Liu

Self-supervised speech representation learning (S3RL) is revolutionizing the way we leverage the ever-growing availability of data.

Keyword Spotting • Representation Learning

Fixed-point quantization aware training for on-device keyword-spotting

no code implementations • 4 Mar 2023 • Sashank Macha, Om Oza, Alex Escott, Francesco Caliva, Robbie Armitano, Santosh Kumar Cheekatmalla, Sree Hari Krishnan Parthasarathi, Yuzong Liu

Furthermore, on an in-house KWS dataset, we show that our 8-bit FXP-QAT models achieve a 4-6% relative improvement in false discovery rate at a fixed false reject rate compared to full-precision FLP models.

Keyword Spotting • Quantization
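
Quantization-aware training typically simulates low-precision arithmetic in the forward pass while keeping full-precision weights for the optimizer. A generic sketch of this fake-quantization pattern with a straight-through estimator follows; it is not the paper's FXP-QAT recipe, and the per-tensor scale choice is an assumption.

```python
import torch

def fake_quantize(x, num_bits=8):
    """Round to a signed fixed-point grid in the forward pass while
    letting gradients pass through unchanged (straight-through estimator)."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = x.detach().abs().max() / qmax        # simple per-tensor scale
    q = torch.clamp(torch.round(x / scale), qmin, qmax) * scale
    return x + (q - x).detach()                  # quantized value, identity grad

w = torch.randn(16, 16, requires_grad=True)
loss = fake_quantize(w, num_bits=8).sum()
loss.backward()               # gradient reaches the full-precision weights
```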

Sub 8-Bit Quantization of Streaming Keyword Spotting Models for Embedded Chipsets

no code implementations • 13 Jul 2022 • Lu Zeng, Sree Hari Krishnan Parthasarathi, Yuzong Liu, Alex Escott, Santosh Kumar Cheekatmalla, Nikko Strom, Shiv Vitaladevuni

We organize our results in two embedded chipset settings: (a) with a commodity ARM NEON instruction set and 8-bit containers, we present accuracy, CPU, and memory results using sub-8-bit weights (4, 5, and 8 bits) and 8-bit quantization of the rest of the network; (b) with off-the-shelf neural network accelerators, we present accuracy results and project the reduction in memory utilization for a range of weight bit widths (1 and 5 bits).

Keyword Spotting • Quantization
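
The 8-bit-container setting rests on packing sub-8-bit weights into byte-sized storage. Below is a hypothetical NumPy sketch that packs two signed 4-bit weights per byte and unpacks them again; the function names and the single per-tensor scale are invented for illustration.

```python
import numpy as np

def pack_4bit(weights, scale):
    """Quantize to signed 4-bit values and pack two per 8-bit container."""
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    nib = q.astype(np.uint8) & 0x0F           # low nibble, two's complement
    if nib.size % 2:
        nib = np.append(nib, np.uint8(0))     # pad to an even count
    return (nib[0::2] << 4) | nib[1::2]

def unpack_4bit(packed, scale, count):
    nib = np.empty(packed.size * 2, dtype=np.uint8)
    nib[0::2], nib[1::2] = packed >> 4, packed & 0x0F
    q = nib.astype(np.int8)
    q = np.where(q > 7, q - 16, q)            # restore the sign bit
    return q[:count].astype(np.float32) * scale

w = np.random.randn(11).astype(np.float32)
scale = float(np.abs(w).max() / 7)
packed = pack_4bit(w, scale)
print(packed.nbytes, "bytes hold", w.size, "4-bit weights")
```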

DeCoAR 2.0: Deep Contextualized Acoustic Representations with Vector Quantization

1 code implementation • 11 Dec 2020 • Shaoshi Ling, Yuzong Liu

In speech representation learning, a large amount of unlabeled data is used in a self-supervised manner to learn a feature representation.

Quantization • Representation Learning • +2
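
DeCoAR 2.0 inserts a vector quantization layer into the representation learner. As a conceptual sketch only, here is a VQ-VAE-style nearest-neighbour codebook lookup with a straight-through gradient; the codebook size and dimensions are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient,
    in the spirit of a VQ layer (illustrative sketch, not DeCoAR 2.0's code)."""
    def __init__(self, num_codes=320, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                          # z: (batch, time, dim)
        flat = z.reshape(-1, z.size(-1))
        dist = torch.cdist(flat, self.codebook.weight)   # L2 distances
        idx = dist.argmin(dim=-1)                  # closest code per frame
        q = self.codebook(idx).view_as(z)
        return z + (q - z).detach(), idx           # straight-through estimator

vq = VectorQuantizer()
quantized, codes = vq(torch.randn(4, 100, 256))
```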

Streaming Language Identification using Combination of Acoustic Representations and ASR Hypotheses

no code implementations • 1 Jun 2020 • Chander Chandak, Zeynab Raeesy, Ariya Rastrow, Yuzong Liu, Xiangyang Huang, Siyu Wang, Dong Kwon Joo, Roland Maas

A common approach to multilingual speech recognition is to run multiple monolingual ASR systems in parallel and rely on a language identification (LID) component to detect the input language.

Language Identification • speech-recognition • +1
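
Combining acoustic representations with ASR hypotheses suggests a late-fusion decision over languages. The snippet below is a deliberately simplified, hypothetical illustration of such fusion; the score dictionaries, fusion weight, and function name are all invented for the example.

```python
def pick_language(lid_posteriors, asr_scores, alpha=0.7):
    """Late fusion of acoustic LID posteriors with normalized per-language
    ASR hypothesis scores; alpha weights the acoustic evidence."""
    combined = {
        lang: alpha * lid_posteriors[lang] + (1 - alpha) * asr_scores[lang]
        for lang in lid_posteriors
    }
    return max(combined, key=combined.get)

lid = {"en-US": 0.8, "es-US": 0.2}   # from the acoustic LID model
asr = {"en-US": 0.6, "es-US": 0.4}   # from parallel monolingual decoders
print(pick_language(lid, asr))        # -> "en-US"
```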

BERTphone: Phonetically-Aware Encoder Representations for Utterance-Level Speaker and Language Recognition

1 code implementation • 30 Jun 2019 • Shaoshi Ling, Julian Salazar, Yuzong Liu, Katrin Kirchhoff

We introduce BERTphone, a Transformer encoder trained on large speech corpora that outputs phonetically-aware contextual representation vectors that can be used for both speaker and language recognition.

Avg • Representation Learning • +2
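
Using frame-level encoder outputs for utterance-level speaker or language recognition requires pooling the frame vectors into a single embedding. Below is a generic PyTorch sketch of that pooling step around a small Transformer encoder; all sizes are illustrative, and this is not BERTphone's actual architecture or training objective.

```python
import torch
import torch.nn as nn

# A small Transformer encoder whose frame-level outputs are average-pooled
# into one utterance-level embedding (dimensions are illustrative).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=144, nhead=4, batch_first=True),
    num_layers=2,
)
frames = torch.randn(3, 200, 144)        # (batch, time, features)
contextual = encoder(frames)             # phonetically-aware frame vectors
utterance_emb = contextual.mean(dim=1)   # (batch, 144) utterance embedding
print(utterance_emb.shape)
```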

End-to-end Anchored Speech Recognition

no code implementations • 6 Feb 2019 • Yiming Wang, Xing Fan, I-Fan Chen, Yuzong Liu, Tongfei Chen, Björn Hoffmeister

The anchored segment refers to the wake-up word part of an audio stream, which contains valuable speaker information that can be used to suppress interfering speech and background noise.

Multi-Task Learning • speech-recognition • +1
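
One way to picture "anchoring" is as a soft frame-level mask derived from similarity to an embedding of the wake-word segment. The toy sketch below does exactly that; the function, embeddings, and sharpness constant are hypothetical intuition aids, not the end-to-end mechanism the paper trains.

```python
import torch

def anchor_mask(anchor_frames, stream_frames):
    """Soft mask from cosine similarity between each stream frame and the
    mean embedding of the anchored (wake-word) segment."""
    anchor = anchor_frames.mean(dim=0, keepdim=True)            # (1, dim)
    sim = torch.nn.functional.cosine_similarity(stream_frames, anchor, dim=-1)
    return torch.sigmoid(5 * sim)                               # (time,)

anchor = torch.randn(20, 64)    # embeddings of the wake-word segment
stream = torch.randn(300, 64)   # embeddings of the full audio stream
masked = stream * anchor_mask(anchor, stream).unsqueeze(-1)
```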
