1 code implementation • 14 Dec 2023 • Kangwook Jang, Sungnyun Kim, Hoirin Kim
Despite the great performance of Transformer-based speech self-supervised learning (SSL) models, their large parameter counts and computational cost make them difficult to deploy.
1 code implementation • 19 May 2023 • Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim
Transformer-based speech self-supervised learning (SSL) models, such as HuBERT, show impressive performance in various speech processing tasks.
1 code implementation • 1 Jul 2022 • Yeonghyeon Lee, Kangwook Jang, Jahyun Goo, Youngmoon Jung, Hoirin Kim
Our method reduces the model size to 23.8% and the inference time to 35.9% of HuBERT's.