Search Results for author: Tzu-Quan Lin

Found 3 papers, 2 papers with code

Compressing Transformer-based self-supervised models for speech processing

1 code implementation • 17 Nov 2022 • Tzu-Quan Lin, Tsung-Huan Yang, Chun-Yao Chang, Kuang-Ming Chen, Tzu-hsun Feng, Hung-Yi Lee, Hao Tang

Despite the success of Transformers in self-supervised learning with applications to various downstream tasks, the computational cost of training and inference remains a major challenge for applying these models to a wide spectrum of devices.

Tags: Knowledge Distillation · Model Compression · +1

MelHuBERT: A simplified HuBERT on Mel spectrograms

1 code implementation • 17 Nov 2022 • Tzu-Quan Lin, Hung-Yi Lee, Hao Tang

Self-supervised models have had great success in learning speech representations that can generalize to various downstream tasks.

Tags: Automatic Speech Recognition · Self-Supervised Learning · +3
