Search Results for author: Sunghwan Ahn

Found 3 papers, 0 papers with code

Inter-KD: Intermediate Knowledge Distillation for CTC-Based Automatic Speech Recognition

no code implementations · 28 Nov 2022 · Ji Won Yoon, Beom Jun Woo, Sunghwan Ahn, Hyeonseung Lee, Nam Soo Kim

Recently, advances in deep learning have brought considerable improvements to end-to-end speech recognition, simplifying the traditional pipeline while producing promising results.

Tasks: Automatic Speech Recognition, Data Augmentation, +3

Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models

no code implementations · 5 Nov 2021 · Ji Won Yoon, Hyung Yong Kim, Hyeonseung Lee, Sunghwan Ahn, Nam Soo Kim

Extending this supervised scheme further, we introduce a new type of teacher model for connectionist temporal classification (CTC)-based sequence models, namely Oracle Teacher, which leverages both the source inputs and the output labels as the teacher model's input.

Tasks: Knowledge Distillation, Machine Translation, +5
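The core idea in the abstract above — a teacher that conditions on both the source inputs and the ground-truth labels — can be illustrated with a minimal numpy sketch. All dimensions, function names, and the pooled-label simplification here are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustration only, not taken from the paper):
# T frames, D_FEAT acoustic feature dim, V vocab size, D_EMB label-embedding dim.
T, D_FEAT, V, D_EMB = 6, 4, 5, 3

def oracle_teacher_logits(feats, labels, W_emb, W_out):
    """Sketch of an oracle-style teacher: unlike a standard teacher, it sees
    the acoustic features AND an embedding of the ground-truth label sequence,
    then projects the joint representation to per-frame vocabulary logits."""
    # Pool the label embeddings and broadcast across frames (a simplification).
    label_emb = W_emb[labels].mean(axis=0)                      # (D_EMB,)
    joint = np.concatenate(
        [feats, np.tile(label_emb, (feats.shape[0], 1))], axis=1
    )                                                           # (T, D_FEAT + D_EMB)
    return joint @ W_out                                        # (T, V)

feats = rng.normal(size=(T, D_FEAT))   # acoustic features for one utterance
labels = np.array([1, 3, 2])           # ground-truth label sequence
W_emb = rng.normal(size=(V, D_EMB))    # label embedding table
W_out = rng.normal(size=(D_FEAT + D_EMB, V))

logits = oracle_teacher_logits(feats, labels, W_emb, W_out)
print(logits.shape)  # (6, 5): per-frame logits a student could distill from
```

In a real distillation setup these teacher logits would supervise a student that sees only the acoustic features; the sketch only shows the input-side difference the abstract describes.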
