no code implementations • 16 Jan 2024 • Alexander H. Liu, Sung-Lin Yeh, James Glass
We use linear probes to estimate the mutual information between the target information and learned representations, offering another perspective on how accessible the target information is from speech representations.
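The probing idea above can be sketched as follows. This is a minimal illustration on synthetic data (my own toy setup, not the paper's code): a linear softmax probe is trained on the representations, and its cross-entropy gives the standard lower bound I(Y; Z) ≥ H(Y) − CE(probe).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, for illustration only): 200 frame-level
# representations of dimension 8, each carrying a 4-class target label
# (e.g., a phonetic class) plus Gaussian noise.
n, d, k = 200, 8, 4
y = rng.integers(0, k, size=n)
class_means = rng.normal(size=(k, d))
z = class_means[y] + 0.5 * rng.normal(size=(n, d))

# Train a linear (softmax) probe with plain gradient descent.
W = np.zeros((d, k))
b = np.zeros(k)
onehot = np.eye(k)[y]
for _ in range(500):
    logits = z @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / n          # gradient of mean cross-entropy
    W -= 1.0 * (z.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

# Evaluate the trained probe.
logits = z @ W + b
logits -= logits.max(axis=1, keepdims=True)
p = np.exp(logits)
p /= p.sum(axis=1, keepdims=True)
ce = -np.mean(np.log(p[np.arange(n), y] + 1e-12))   # probe cross-entropy, nats

# Empirical label entropy H(Y), in nats.
counts = np.bincount(y, minlength=k) / n
h_y = -np.sum(counts * np.log(counts + 1e-12))

# Lower bound: I(Y; Z) >= H(Y) - CE(probe).
mi_lower_bound = h_y - ce
print(f"H(Y) = {h_y:.3f} nats, probe CE = {ce:.3f} nats, "
      f"MI lower bound = {mi_lower_bound:.3f} nats")
```

A lower probe cross-entropy on the same labels means a tighter (larger) bound, which is why probe performance is commonly read as a proxy for how accessible the target information is in the representation.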
no code implementations • 29 Oct 2022 • Sung-Lin Yeh, Hao Tang
While discrete latent variable models have had great success in self-supervised learning, most models assume that frames are independent.
1 code implementation • 27 Oct 2022 • Chin-Yun Yu, Sung-Lin Yeh, György Fazekas, Hao Tang
Moreover, by coupling the proposed sampling method with an unconditional DM, i.e., a DM with no auxiliary inputs to its noise predictor, we can generalize it to a wide range of SR setups.
1 code implementation • 29 Mar 2022 • Sung-Lin Yeh, Hao Tang
While several self-supervised approaches for learning discrete speech representation have been proposed, it is unclear how these seemingly similar approaches relate to each other.
4 code implementations • 8 Jun 2021 • Mirco Ravanelli, Titouan Parcollet, Peter Plantinga, Aku Rouhe, Samuele Cornell, Loren Lugosch, Cem Subakan, Nauman Dawalatabad, Abdelwahab Heba, Jianyuan Zhong, Ju-chieh Chou, Sung-Lin Yeh, Szu-Wei Fu, Chien-Feng Liao, Elena Rastorgueva, François Grondin, William Aris, Hwidong Na, Yan Gao, Renato de Mori, Yoshua Bengio
SpeechBrain is an open-source and all-in-one speech toolkit.
1 code implementation • 6 Feb 2020 • Yun-Zhu Song, Hong-Han Shuai, Sung-Lin Yeh, Yi-Lun Wu, Lun-Wei Ku, Wen-Chih Peng
To generate inspired headlines, we propose a novel framework called POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG).
1 code implementation • ICASSP 2019 • Sung-Lin Yeh, Yun-Shao Lin, Chi-Chun Lee
In this work, we propose an interaction-aware attention network (IAAN) that incorporates contextual information into the learned vocal representation through a novel attention mechanism.