Spoken Command Recognition
5 papers with code • 1 benchmark • 0 datasets
SSAST: Self-Supervised Audio Spectrogram Transformer
Self-supervised learning (SSL) learns knowledge from a large amount of unlabeled data and then transfers that knowledge to a specific problem with only a limited amount of labeled data.
However, pure Transformer models tend to require more training data than CNNs, and the success of the Audio Spectrogram Transformer (AST) relies on supervised pretraining, which requires a large amount of labeled data and a complex training pipeline, limiting the practical usage of AST.
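A minimal PyTorch sketch of this pretrain-then-fine-tune pattern, assuming a toy encoder and random tensors in place of real spectrograms and the SSAST masking recipe:

```python
import torch
import torch.nn as nn

# Toy encoder standing in for an audio spectrogram transformer.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))

# --- Self-supervised pretraining: reconstruct masked inputs (no labels). ---
decoder = nn.Linear(64, 128)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
unlabeled = torch.randn(256, 128)   # stand-in for unlabeled spectrogram frames
for _ in range(100):
    mask = (torch.rand_like(unlabeled) > 0.3).float()      # hide ~30% of each input
    recon = decoder(encoder(unlabeled * mask))
    loss = ((recon - unlabeled) ** 2 * (1 - mask)).mean()  # score only the masked parts
    opt.zero_grad(); loss.backward(); opt.step()

# --- Fine-tuning: transfer to a task with only a few labeled examples. ---
head = nn.Linear(64, 10)            # e.g., 10 spoken commands
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
labeled_x, labeled_y = torch.randn(32, 128), torch.randint(0, 10, (32,))
for _ in range(50):
    loss = nn.functional.cross_entropy(head(encoder(labeled_x)), labeled_y)
    opt.zero_grad(); loss.backward(); opt.step()
```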
Neural Model Reprogramming with Similarity Based Mapping for Low-Resource Spoken Command Recognition
In this study, we propose a novel adversarial reprogramming (AR) approach for low-resource spoken command recognition (SCR), and build an AR-SCR system.
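A rough sketch of the general adversarial-reprogramming idea with a similarity-based label mapping, under stated assumptions: the frozen source model, probe data, and mapping rule below are illustrative placeholders, not the AR-SCR system from the paper.

```python
import torch
import torch.nn as nn

# Frozen source model pretrained on a large task (a placeholder here).
source_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 100))
for p in source_model.parameters():
    p.requires_grad_(False)

num_target_classes = 10  # e.g., 10 spoken commands

# Trainable reprogramming perturbation added to every input.
delta = nn.Parameter(torch.zeros(128))

def build_label_map(x_per_class):
    """Similarity-based mapping (illustrative): pass a few examples of each
    target class through the frozen model and assign each target class the
    source class it activates most, instead of an arbitrary fixed mapping."""
    # x_per_class: (num_target_classes, n_examples, 128)
    with torch.no_grad():
        mean_logits = source_model(x_per_class).mean(dim=1)  # (C_target, C_source)
    return mean_logits.argmax(dim=1)                         # one source class per target class

probe = torch.randn(num_target_classes, 8, 128)  # stand-in for a few target examples
label_map = build_label_map(probe)

# Train only the input perturbation; the source model never changes.
opt = torch.optim.Adam([delta], lr=1e-2)
x, y = torch.randn(64, 128), torch.randint(0, num_target_classes, (64,))
for _ in range(100):
    logits = source_model(x + delta)              # reprogrammed forward pass
    loss = nn.functional.cross_entropy(logits[:, label_map], y)
    opt.zero_grad(); loss.backward(); opt.step()
```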
Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on Riemannian Gradient Descent With Illustrations of Speech Processing
This work focuses on designing low-complexity hybrid tensor networks by considering the trade-off between model complexity and practical performance.
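A minimal sketch of a tensor-train (TT) linear layer, the building block such low-rank tensor networks rely on; the shapes and rank are arbitrary, and plain gradient-based training would stand in for the paper's Riemannian gradient descent on the TT manifold:

```python
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    """Linear layer whose weight is held in tensor-train (TT) format.

    Factors a (m1*m2) x (n1*n2) weight into two small cores, cutting the
    parameter count from m1*m2*n1*n2 (here 16384) to r*(m1*n1 + m2*n2)
    (here 1024), at the cost of a rank-r approximation.
    """
    def __init__(self, m=(16, 16), n=(8, 8), rank=4):
        super().__init__()
        self.m, self.n = m, n
        # TT cores: G1 has shape (m1, n1, r), G2 has shape (r, m2, n2).
        self.g1 = nn.Parameter(torch.randn(m[0], n[0], rank) * 0.1)
        self.g2 = nn.Parameter(torch.randn(rank, m[1], n[1]) * 0.1)

    def forward(self, x):
        b = x.shape[0]
        x = x.reshape(b, self.m[0], self.m[1])         # split input into two modes
        # Contract the input with each core in turn instead of ever
        # materializing the full (m1*m2) x (n1*n2) weight matrix.
        h = torch.einsum('bij,iar->bjar', x, self.g1)  # contract over mode m1
        y = torch.einsum('bjar,rjc->bac', h, self.g2)  # contract over m2 and rank
        return y.reshape(b, self.n[0] * self.n[1])

layer = TTLinear()
out = layer(torch.randn(32, 256))  # 256 = 16*16 in, 64 = 8*8 out
print(out.shape)                   # torch.Size([32, 64])
```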