Spoken Command Recognition

5 papers with code • 1 benchmark • 0 datasets

Spoken command recognition (SCR) is the task of classifying a short spoken utterance into one of a fixed set of command keywords.

Most implemented papers

ATST: Audio Representation Learning with Teacher-Student Transformer

Audio-WestlakeU/audiossl 26 Apr 2022

Self-supervised learning (SSL) learns from a large amount of unlabeled data and then transfers that knowledge to a specific problem with a limited amount of labeled data.
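Teacher-student SSL recipes such as ATST's typically keep a teacher network whose weights track an exponential moving average (EMA) of the student's. A minimal numpy sketch of that update rule (the parameter dict, names, and momentum value are illustrative, not ATST's actual code):

```python
import numpy as np

def ema_update(teacher, student, momentum=0.996):
    """EMA update: teacher <- m * teacher + (1 - m) * student, per weight tensor.
    Hypothetical sketch; real frameworks iterate over module parameters."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

student = {"w": np.array([1.0, 2.0])}
teacher = {"w": np.array([0.0, 0.0])}
teacher = ema_update(teacher, student, momentum=0.9)
# teacher["w"] is now [0.1, 0.2]: it drifts slowly toward the student
```

The high momentum (close to 1) is what makes the teacher a slowly moving, stable target for the student's predictions.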

Contrastive Learning of General-Purpose Audio Representations

google-research/google-research 21 Oct 2020

We introduce COLA, a self-supervised pre-training approach for learning a general-purpose representation of audio.
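The contrastive objective behind an approach like COLA treats segments from the same audio clip as positives and other clips in the batch as negatives. A minimal numpy sketch of such a batch-wise InfoNCE-style loss, using cosine similarity for illustration (COLA itself learns a bilinear similarity):

```python
import numpy as np

def nce_loss(anchors, positives, temperature=0.1):
    """Batch contrastive loss: row i of `anchors` should match row i of
    `positives`; every other row in the batch acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # similarity of every pair
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))     # matching pairs sit on the diagonal

# Perfectly aligned, mutually orthogonal pairs -> near-zero loss
loss = nce_loss(np.eye(3), np.eye(3))
```

Minimizing this pulls each anchor toward its own positive and away from the rest of the batch, which is what yields a general-purpose embedding without labels.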

SSAST: Self-Supervised Audio Spectrogram Transformer

YuanGongND/ssast 19 Oct 2021

Pure Transformer models tend to require more training data than CNNs, and the success of the AST relies on supervised pretraining that requires a large amount of labeled data and a complex training pipeline, limiting its practical usage.

Neural Model Reprogramming with Similarity Based Mapping for Low-Resource Spoken Command Recognition

dodohow1011/speechadvreprogram 8 Oct 2021

In this study, we propose a novel adversarial reprogramming (AR) approach for low-resource spoken command recognition (SCR), and build an AR-SCR system.
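The reprogramming idea can be sketched as: pad the short target command into a frozen source model's input, add a trainable universal perturbation, and map groups of source labels onto the target commands. A minimal numpy illustration; the linear "source model", the label grouping, and all sizes are hypothetical stand-ins, not the paper's system:

```python
import numpy as np

rng = np.random.default_rng(0)

def reprogram(x, delta, source_len=16000):
    """Zero-pad the command to the source model's input length and add the
    trainable universal perturbation delta (the 'reprogram')."""
    padded = np.zeros(source_len)
    padded[: len(x)] = x
    return padded + delta

def source_model(x, W):
    """Hypothetical frozen source classifier: plain linear logits."""
    return W @ x

source_len, n_source = 16000, 10
W = rng.standard_normal((n_source, source_len)) * 0.01   # frozen weights
delta = np.zeros(source_len)        # trainable in the real AR system
x = rng.standard_normal(8000)       # e.g. a 0.5 s command at 16 kHz

logits = source_model(reprogram(x, delta), W)

# Many-to-one label mapping: each of 3 target commands aggregates the
# logits of its assigned source classes (assignment shown is arbitrary;
# the paper chooses it by similarity).
label_map = {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7, 8, 9]}
target_logits = np.array([logits[idx].mean() for idx in label_map.values()])
pred = int(np.argmax(target_logits))
```

Only `delta` (and the label mapping) adapts to the low-resource task; the source model stays frozen, which is what makes the approach cheap.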

Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on Riemannian Gradient Descent With Illustrations of Speech Processing

uwjunqi/ttn-vqc 11 Mar 2022

This work focuses on designing low-complexity hybrid tensor networks by considering trade-offs between model complexity and practical performance.
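The appeal of tensor-train (TT) layers is the parameter saving: a dense weight matrix is factorized into a chain of small cores. A minimal numpy sketch with illustrative mode shapes and TT-rank (not the paper's configuration), contracting the cores back into the matrix they represent and comparing parameter counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Factor a hypothetical 64x64 dense weight (4096 parameters) over
# input modes (4,4,4) and output modes (4,4,4) with TT-rank 2.
in_modes, out_modes = (4, 4, 4), (4, 4, 4)
ranks = (1, 2, 2, 1)    # boundary TT-ranks are always 1

# TT cores G_k with shape (r_{k-1}, in_k, out_k, r_k)
cores = [rng.standard_normal((ranks[k], in_modes[k], out_modes[k], ranks[k + 1]))
         for k in range(3)]

# Contract the chain of cores into the full matrix it parameterizes:
# rows index (i,j,k), columns index (o,p,q)
full = np.einsum('aiob,bjpc,ckqd->ijkopq',
                 cores[0], cores[1], cores[2]).reshape(64, 64)

n_dense = 64 * 64                    # 4096 parameters
n_tt = sum(G.size for G in cores)    # 32 + 64 + 32 = 128 parameters
```

In practice the full matrix is never materialized; the layer multiplies inputs through the cores directly, so both storage and compute scale with the small core sizes rather than with the dense matrix.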