Search Results for author: Sung-Lin Yeh

Found 7 papers, 5 papers with code

Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective

no code implementations • 16 Jan 2024 • Alexander H. Liu, Sung-Lin Yeh, James Glass

We use linear probes to estimate the mutual information between the target information and the learned representations, offering another perspective on how accessible the target information is from speech representations.

Representation Learning Self-Supervised Learning +2
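The linear-probing idea above can be sketched concretely: by the standard variational bound, I(Z; Y) ≥ H(Y) − CE, where CE is the cross-entropy of a linear probe predicting a discrete target Y from frozen representations Z. The snippet below is a minimal illustration with synthetic data, not the paper's code; all names and the toy setup are assumptions.

```python
# Hypothetical sketch (not the paper's implementation): lower-bounding the
# mutual information I(Z; Y) between frozen representations Z and a discrete
# target Y with a linear probe, via  I(Z; Y) >= H(Y) - CE(probe)  in nats.
import numpy as np

rng = np.random.default_rng(0)

def train_linear_probe(Z, y, n_classes, lr=0.5, steps=300):
    """Softmax regression trained with full-batch gradient descent."""
    n, d = Z.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(steps):
        logits = Z @ W + b
        logits -= logits.max(axis=1, keepdims=True)
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / n                           # gradient of mean cross-entropy
        W -= lr * Z.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

def cross_entropy_nats(Z, y, W, b):
    """Mean negative log-likelihood of the probe, in nats."""
    logits = Z @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

# Toy data: 500 "representations" whose cluster centers encode a 3-class target.
n_classes, d = 3, 8
y = rng.integers(0, n_classes, size=500)
centers = rng.normal(size=(n_classes, d))
Z = centers[y] + 0.3 * rng.normal(size=(500, d))

W, b = train_linear_probe(Z, y, n_classes)
H_y = np.log(n_classes)                           # entropy of ~uniform labels (nats)
mi_lower_bound = H_y - cross_entropy_nats(Z, y, W, b)
print(f"Probe-based MI lower bound: {mi_lower_bound:.2f} nats")
```

The bound is tight only if the probe family can represent p(y | z); a linear probe therefore measures how *linearly accessible* the target is, which is the sense of "accessibility" the abstract refers to.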

Learning Dependencies of Discrete Speech Representations with Neural Hidden Markov Models

no code implementations • 29 Oct 2022 • Sung-Lin Yeh, Hao Tang

While discrete latent variable models have had great success in self-supervised learning, most models assume that frames are independent.

Self-Supervised Learning

Conditioning and Sampling in Variational Diffusion Models for Speech Super-Resolution

1 code implementation • 27 Oct 2022 • Chin-Yun Yu, Sung-Lin Yeh, György Fazekas, Hao Tang

Moreover, by coupling the proposed sampling method with an unconditional DM, i.e., a DM with no auxiliary inputs to its noise predictor, we can generalize it to a wide range of SR setups.


Autoregressive Co-Training for Learning Discrete Speech Representations

1 code implementation • 29 Mar 2022 • Sung-Lin Yeh, Hao Tang

While several self-supervised approaches for learning discrete speech representation have been proposed, it is unclear how these seemingly similar approaches relate to each other.


Attractive or Faithful? Popularity-Reinforced Learning for Inspired Headline Generation

1 code implementation • 6 Feb 2020 • Yun-Zhu Song, Hong-Han Shuai, Sung-Lin Yeh, Yi-Lun Wu, Lun-Wei Ku, Wen-Chih Peng

To generate inspired headlines, we propose a novel framework called POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG).

Headline Generation Reinforcement Learning (RL) +1

An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs

1 code implementation • ICASSP 2019 • Sung-Lin Yeh, Yun-Shao Lin, Chi-Chun Lee

In this work, we propose an interaction-aware attention network (IAAN) that incorporates contextual information into the learned vocal representation through a novel attention mechanism.

Speech Emotion Recognition
