Search Results for author: Kyungmin Lee

Found 10 papers, 1 paper with code

GCISG: Guided Causal Invariant Learning for Improved Syn-to-real Generalization

no code implementations • 22 Aug 2022 • Gilhyun Nam, Gyeongjae Choi, Kyungmin Lee

In sum, we refer to our method as Guided Causal Invariant Syn-to-real Generalization (GCISG), which effectively improves syn-to-real generalization performance.

Domain Generalization • Image Classification +1

RényiCL: Contrastive Representation Learning with Skew Rényi Divergence

no code implementations • 12 Aug 2022 • Kyungmin Lee, Jinwoo Shin

Here, the quality of the learned representations is sensitive to the choice of data augmentation: as harder data augmentations are applied, the views share more task-relevant information, but also more task-irrelevant information that can hinder the generalization capability of the representation.
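
For context, contrastive methods of this kind score the agreement between two augmented views of each image, which is why the choice of augmentation matters so much. The sketch below is a minimal, generic InfoNCE-style loss in PyTorch, shown for illustration only; it is not the skew Rényi divergence objective the paper proposes.

    # Generic InfoNCE contrastive loss over two augmented views.
    # Illustrative sketch only; NOT the paper's skew Renyi objective.
    import torch
    import torch.nn.functional as F

    def info_nce_loss(z1, z2, temperature=0.5):
        # z1, z2: (N, D) embeddings of two augmentations of the same batch.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature  # (N, N) pairwise similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        # Diagonal entries are positive pairs; off-diagonals act as negatives.
        return F.cross_entropy(logits, labels)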

Contrastive Learning • Data Augmentation +1

Prototypical Contrastive Predictive Coding

no code implementations • ICLR 2022 • Kyungmin Lee

Transferring the representational knowledge of one model to another is a wide-ranging topic in machine learning.

Contrastive Learning • Knowledge Distillation +2

Efficient randomized smoothing by denoising with learned score function

no code implementations • 1 Jan 2021 • Kyungmin Lee, Seyoon Oh

In this work, we present an efficient method for randomized smoothing that does not require any re-training of classifiers.
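
A randomized-smoothing prediction of this "denoise then classify" flavor can be sketched in a few lines. The denoiser and base classifier below are hypothetical stand-ins, and the paper's learned score function and certification procedure are not reproduced here.

    # Majority-vote prediction of a smoothed classifier built from a
    # pre-trained classifier plus a denoiser (both hypothetical modules).
    import torch

    @torch.no_grad()
    def smoothed_predict(x, denoiser, classifier, sigma=0.25, n_samples=100):
        # x: a single input of shape (1, C, H, W).
        votes = []
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)  # Gaussian perturbation
            logits = classifier(denoiser(noisy))     # denoise, then classify
            votes.append(logits.argmax(dim=1))
        votes = torch.cat(votes)
        return votes.mode().values.item()            # most frequent class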

Image Denoising

Applying GPGPU to Recurrent Neural Network Language Model based Fast Network Search in the Real-Time LVCSR

no code implementations • 23 Jul 2020 • Kyungmin Lee, Chiyoun Park, Ilhwan Kim, Namhoon Kim, Jaewon Lee

Recurrent Neural Network Language Models (RNNLMs) have started to be used in various fields of speech recognition due to their outstanding performance.

Speech Recognition

Attention based on-device streaming speech recognition with large speech corpus

no code implementations • 2 Jan 2020 • Kwangyoun Kim, Kyungmin Lee, Dhananjaya Gowda, Junmo Park, Sungsoo Kim, Sichen Jin, Young-Yoon Lee, Jinsu Yeo, Daehyun Kim, Seokyeong Jung, Jungin Lee, Myoungji Han, Chanwoo Kim

In this paper, we present a new on-device automatic speech recognition (ASR) system based on monotonic chunk-wise attention (MoChA) models trained with a large (> 10K hours) corpus.

Automatic Speech Recognition • Data Augmentation +2

End-to-End Training of a Large Vocabulary End-to-End Speech Recognition System

no code implementations • 22 Dec 2019 • Chanwoo Kim, Sungsoo Kim, Kwangyoun Kim, Mehul Kumar, Jiyeon Kim, Kyungmin Lee, Changwoo Han, Abhinav Garg, Eunhyang Kim, Minkyoo Shin, Shatrughan Singh, Larry Heck, Dhananjaya Gowda

Our end-to-end speech recognition system built using this training infrastructure showed a 2.44% WER on the LibriSpeech test-clean set after applying shallow fusion with a Transformer language model (LM).
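
Shallow fusion itself is a simple log-linear interpolation of the ASR model's score with an external LM's score at each beam-search step. The sketch below uses hypothetical per-token scorer dictionaries and an illustrative weight, not values from the paper.

    def shallow_fusion_step(asr_log_probs, lm_log_probs, lm_weight=0.3):
        # Combine per-token log-probabilities from the ASR decoder and an
        # external LM for one beam-search step. Both dicts map token -> log p
        # over the same vocabulary; lm_weight is an illustrative constant.
        return {tok: lp + lm_weight * lm_log_probs[tok]
                for tok, lp in asr_log_probs.items()}

A beam search would then keep the highest-scoring hypothesis extensions under this combined score rather than the ASR score alone.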

Data Augmentation • Speech Recognition +1

Local Spectroscopies Reveal Percolative Metal in Disordered Mott Insulators

no code implementations • 29 Jul 2019 • Joseph C. Szabo, Kyungmin Lee, Vidya Madhavan, Nandini Trivedi

We elucidate the mechanism by which a Mott insulator transforms into a non-Fermi liquid metal upon increasing disorder at half filling.

Strongly Correlated Electrons • Disordered Systems and Neural Networks

Accelerating recurrent neural network language model based online speech recognition system

no code implementations • 30 Jan 2018 • Kyungmin Lee, Chiyoun Park, Namhoon Kim, Jaewon Lee

This paper presents methods to accelerate recurrent neural network-based language models (RNNLMs) for online speech recognition systems.

Speech Recognition
