Search Results for author: Ji Won Yoon

Found 12 papers, 1 paper with code

EM-Network: Oracle Guided Self-distillation for Sequence Learning

no code implementations · 14 Jun 2023 · Ji Won Yoon, Sunghwan Ahn, Hyeonseung Lee, Minchan Kim, Seok Min Kim, Nam Soo Kim

We introduce EM-Network, a novel self-distillation approach that effectively leverages target information for supervised sequence-to-sequence (seq2seq) learning.

Decoder · Machine Translation · +2

Development of deep biological ages aware of morbidity and mortality based on unsupervised and semi-supervised deep learning approaches

no code implementations · 1 Feb 2023 · Seong-Eun Moon, Ji Won Yoon, Shinyoung Joo, Yoohyung Kim, Jae Hyun Bae, Seokho Yoon, Haanju Yoo, Young Min Cho

Methods: This paper proposes a novel deep learning model that learns latent representations of biological aging that reflect subjects' morbidity and mortality.

Age Estimation

Inter-KD: Intermediate Knowledge Distillation for CTC-Based Automatic Speech Recognition

no code implementations · 28 Nov 2022 · Ji Won Yoon, Beom Jun Woo, Sunghwan Ahn, Hyeonseung Lee, Nam Soo Kim

Recently, advances in deep learning have brought considerable improvement to the end-to-end speech recognition field, simplifying the traditional pipeline while producing promising results.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +4

HuBERT-EE: Early Exiting HuBERT for Efficient Speech Recognition

no code implementations · 13 Apr 2022 · Ji Won Yoon, Beom Jun Woo, Nam Soo Kim

Pre-training with self-supervised models, such as Hidden-unit BERT (HuBERT) and wav2vec 2.0, has brought significant improvements in automatic speech recognition (ASR).

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +1
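Early exiting, as a general technique, attaches lightweight prediction heads to intermediate layers and stops inference as soon as an intermediate output looks confident enough. The abstract snippet does not state HuBERT-EE's actual exit criterion, so the following is only a generic sketch of the idea, using an entropy threshold over hypothetical per-layer posteriors:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (natural log) of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def early_exit(layer_probs, threshold=0.3):
    """Return (layer_index, prediction) at the first layer whose output
    distribution is confident enough (low entropy); otherwise fall back
    to the final layer's prediction."""
    for i, p in enumerate(layer_probs):
        if entropy(p) < threshold:
            return i, int(np.argmax(p))
    return len(layer_probs) - 1, int(np.argmax(layer_probs[-1]))

# Hypothetical per-layer posteriors: the model grows confident by layer 1,
# so layers after it never need to run.
probs = [np.array([0.5, 0.5]), np.array([0.95, 0.05]), np.array([0.99, 0.01])]
print(early_exit(probs))  # (1, 0): exits at layer 1 with class 0
```

In a real model the saved computation comes from skipping the remaining Transformer layers once the exit fires; the threshold trades accuracy against latency.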

Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models

no code implementations · 5 Nov 2021 · Ji Won Yoon, Hyung Yong Kim, Hyeonseung Lee, Sunghwan Ahn, Nam Soo Kim

Extending this supervised scheme further, we introduce a new type of teacher model for connectionist temporal classification (CTC)-based sequence models, namely Oracle Teacher, that leverages both the source inputs and the output labels as the teacher model's input.

Knowledge Distillation · Machine Translation · +5

TutorNet: Towards Flexible Knowledge Distillation for End-to-End Speech Recognition

no code implementations · 3 Aug 2020 · Ji Won Yoon, Hyeonseung Lee, Hyung Yong Kim, Won Ik Cho, Nam Soo Kim

To reduce this computational burden, knowledge distillation (KD), which is a popular model compression method, has been used to transfer knowledge from a deep and complex model (teacher) to a shallower and simpler model (student).

Knowledge Distillation · Model Compression · +3
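The teacher-to-student transfer described in the abstract is most commonly realized with Hinton-style soft targets. The sketch below shows that standard temperature-softened KD loss in plain NumPy; it is the generic recipe, not TutorNet's specific formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T**2 as in the classic distillation recipe."""
    p = softmax(teacher_logits, T)           # soft targets from the teacher
    log_q = np.log(softmax(student_logits, T))
    return float((T ** 2) * np.mean(np.sum(p * (np.log(p) - log_q), axis=-1)))

# A student whose logits match the teacher's incurs zero loss.
t = np.array([[2.0, 0.5, -1.0]])
print(kd_loss(t, t))  # 0.0 — identical logits give zero KD loss
```

Raising the temperature T flattens the teacher's distribution so the student also learns from the relative probabilities of the wrong classes, which is where much of the "dark knowledge" lives.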

Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation

no code implementations · 17 May 2020 · Won Ik Cho, Dong-Hyun Kwak, Ji Won Yoon, Nam Soo Kim

We transfer the knowledge from a concrete Transformer-based text LM to an SLU module which can face a data shortage, based on recent cross-modal distillation methodologies.

Computational Efficiency · speech-recognition · +2

Speech Intention Understanding in a Head-final Language: A Disambiguation Utilizing Intonation-dependency

2 code implementations · 10 Nov 2018 · Won Ik Cho, Hyeon Seung Lee, Ji Won Yoon, Seok Min Kim, Nam Soo Kim

This paper proposes a system that identifies the inherent intention of a spoken utterance given its transcript, in some cases using auxiliary acoustic features.


An Efficient Model Selection for Gaussian Mixture Model in a Bayesian Framework

no code implementations · 3 Jul 2013 · Ji Won Yoon

In order to cluster or partition data, we often use Expectation-Maximization (EM) or variational approximation with a Gaussian Mixture Model (GMM), a parametric probability density function represented as a weighted sum of $\hat{K}$ Gaussian component densities.

Clustering · Model Selection

Statistical Denoising for single molecule fluorescence microscopic images

no code implementations · 7 Jun 2013 · Ji Won Yoon

Single molecule fluorescence microscopy is a powerful technique for uncovering detailed information about biological systems, both in vitro and in vivo.


Efficient Estimation of the number of neighbours in Probabilistic K Nearest Neighbour Classification

no code implementations · 5 May 2013 · Ji Won Yoon, Nial Friel

Probabilistic k-nearest neighbour (PKNN) classification has been introduced to improve on the original k-nearest neighbour (KNN) classification algorithm by explicitly modelling uncertainty in the classification of each feature vector.

Classification · Decision Making · +2
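As background, vanilla KNN with vote-based class probabilities can be sketched in a few lines. The neighbour-vote fractions below are only a crude stand-in for the posterior uncertainty that PKNN models explicitly (and PKNN additionally infers k itself, which this sketch does not):

```python
import numpy as np

def knn_predict_proba(X_train, y_train, x, k=3):
    """Vanilla k-NN: return the class labels and the empirical vote
    distribution among the k nearest training points to query x."""
    dist = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nn = np.argsort(dist)[:k]                    # indices of k closest points
    classes = np.unique(y_train)
    votes = np.array([(y_train[nn] == c).sum() for c in classes])
    return classes, votes / k

# Toy data: two points per class, query near the class-0 cluster.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
classes, proba = knn_predict_proba(X, y, np.array([0.05, 0.1]), k=3)
print(classes[np.argmax(proba)], proba)  # 0 [0.66666667 0.33333333]
```

With k=3 the query picks up both class-0 points plus one class-1 point, so the vote split (2/3 vs 1/3) already conveys some classification uncertainty; PKNN formalizes this within a Bayesian model instead of relying on raw vote counts.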
