Search Results for author: Hyeonseung Lee

Found 7 papers, 2 papers with code

EM-Network: Oracle Guided Self-distillation for Sequence Learning

no code implementations · 14 Jun 2023 · Ji Won Yoon, Sunghwan Ahn, Hyeonseung Lee, Minchan Kim, Seok Min Kim, Nam Soo Kim

We introduce EM-Network, a novel self-distillation approach that effectively leverages target information for supervised sequence-to-sequence (seq2seq) learning.

Machine Translation · Speech Recognition +1

Inter-KD: Intermediate Knowledge Distillation for CTC-Based Automatic Speech Recognition

no code implementations · 28 Nov 2022 · Ji Won Yoon, Beom Jun Woo, Sunghwan Ahn, Hyeonseung Lee, Nam Soo Kim

Recently, advances in deep learning have brought considerable improvements to end-to-end speech recognition, simplifying the traditional pipeline while producing promising results.

Automatic Speech Recognition (ASR) +4

Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models

no code implementations · 5 Nov 2021 · Ji Won Yoon, Hyung Yong Kim, Hyeonseung Lee, Sunghwan Ahn, Nam Soo Kim

Extending this supervised scheme further, we introduce a new type of teacher model for connectionist temporal classification (CTC)-based sequence models, namely Oracle Teacher, which leverages both the source inputs and the output labels as its input (a toy sketch of this idea appears right after this entry).

Knowledge Distillation · Machine Translation +5
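
The abstract above describes a teacher that consumes both the source sequence and the ground-truth labels. As a rough illustration of that idea only, and not the paper's actual architecture (no code is listed for it), here is a toy PyTorch module; the class name ToyOracleTeacher and all layer choices are hypothetical.

import torch.nn as nn

class ToyOracleTeacher(nn.Module):
    # Toy teacher that sees both the acoustic features and the target labels,
    # then emits frame-level logits a student could be distilled from.
    def __init__(self, feat_dim, vocab_size, hidden=256):
        super().__init__()
        self.src_proj = nn.Linear(feat_dim, hidden)        # encode source frames
        self.label_emb = nn.Embedding(vocab_size, hidden)  # encode output labels
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)          # per-frame logits

    def forward(self, src, labels):
        # Pool the label embeddings and add the summary to every source frame,
        # so the teacher "peeks" at the targets while predicting frame-wise.
        label_ctx = self.label_emb(labels).mean(dim=1, keepdim=True)
        x = self.src_proj(src) + label_ctx
        out, _ = self.rnn(x)
        return self.head(out)  # (batch, frames, vocab) logits for distillation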

Continuous Monitoring of Blood Pressure with Evidential Regression

no code implementations · 6 Feb 2021 · Hyeongju Kim, Woo Hyun Kang, Hyeonseung Lee, Nam Soo Kim

Photoplethysmogram (PPG) signal-based blood pressure (BP) estimation is a promising candidate for modern BP measurement, as PPG signals can be obtained easily and non-invasively from wearable devices, allowing quick measurements.

Regression

TutorNet: Towards Flexible Knowledge Distillation for End-to-End Speech Recognition

no code implementations · 3 Aug 2020 · Ji Won Yoon, Hyeonseung Lee, Hyung Yong Kim, Won Ik Cho, Nam Soo Kim

To reduce this computational burden, knowledge distillation (KD), a popular model compression method, has been used to transfer knowledge from a deep and complex model (the teacher) to a shallower and simpler model (the student); the standard soft-label loss this refers to is sketched right after this entry.

Knowledge Distillation · Model Compression +3
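
The abstract above refers to transferring knowledge from a teacher model to a student model. As a minimal, generic illustration, here is the standard temperature-scaled soft-label KD loss; this is the textbook recipe, not TutorNet's specific distillation scheme, and the function name kd_loss is hypothetical.

import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then move the student
    # toward the teacher with a KL divergence.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2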

WaveNODE: A Continuous Normalizing Flow for Speech Synthesis

1 code implementation · 8 Jun 2020 · Hyeongju Kim, Hyeonseung Lee, Woo Hyun Kang, Sung Jun Cheon, Byoung Jin Choi, Nam Soo Kim

In recent years, various flow-based generative models have been proposed to generate high-fidelity waveforms in real time.

Speech Synthesis
