Search Results for author: Hyounguk Shon

Found 4 papers, 1 paper with code

UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

no code implementations 27 Sep 2022 Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim

Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.
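As a rough illustration of the contrastive pre-training objective this line of work builds on (not UniCLIP's unified framework itself), the PyTorch sketch below computes a symmetric InfoNCE loss over a batch of paired image/text embeddings; the embedding dimension, temperature, and random inputs are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so dot products are cosine similarities
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix: logits[i, j] = sim(image_i, text_j) / T
    logits = image_emb @ text_emb.t() / temperature
    # Matching image/text pairs lie on the diagonal
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings standing in for encoder outputs
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(contrastive_loss(img, txt))
```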

DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning

no code implementations 17 Aug 2022 Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim

We show that this allows us to design a linear model in which a quadratic parameter regularization method serves as the optimal continual learning policy, while at the same time enjoying the high performance of neural networks.

class-incremental learning · Image Classification +1
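A minimal sketch of a generic quadratic parameter regularizer of the kind referenced above (in the spirit of EWC-style penalties), not DLCFT's exact derivation; the toy model, unit importance weights, and penalty strength are assumptions for illustration.

```python
import torch
import torch.nn as nn

def quadratic_penalty(model, anchor_params, importance, strength=1.0):
    """Sum_i importance_i * (theta_i - theta_i*)^2: penalize deviation from
    parameters learned on previous tasks, weighted per parameter."""
    penalty = 0.0
    for name, param in model.named_parameters():
        penalty = penalty + (importance[name] * (param - anchor_params[name]) ** 2).sum()
    return strength * penalty

# Toy usage: anchor at the current weights, unit importance everywhere
model = nn.Linear(4, 2)
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
imp = {n: torch.ones_like(p) for n, p in model.named_parameters()}
# In training on a new task this term would be added to the task loss
reg_loss = quadratic_penalty(model, anchor, imp, strength=100.0)
reg_loss.backward()
```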

Joint Negative and Positive Learning for Noisy Labels

no code implementations CVPR 2021 Youngdong Kim, Juseung Yun, Hyounguk Shon, Junmo Kim

Because directly providing the label to the data (Positive Learning; PL) risks letting CNNs memorize contaminated labels when the data are noisy, the indirect learning approach that uses complementary labels (Negative Learning for Noisy Labels; NLNL) has proven highly effective at preventing overfitting to noisy data, as it reduces the risk of providing a faulty target.
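For context, a hedged sketch of the basic negative-learning loss that NLNL introduces and this work builds on: given a complementary label (a class the sample is known not to belong to), the loss pushes that class's probability down. The batch size, class count, and epsilon below are illustrative assumptions, and this is not the full joint PL/NL objective of the paper.

```python
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, complementary_labels, eps=1e-7):
    """Negative learning: for a complementary label y~ (a class the sample
    does NOT belong to), minimize -log(1 - p_{y~})."""
    probs = F.softmax(logits, dim=-1)
    # Probability assigned to the complementary class for each sample
    p_comp = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + eps).mean()

# Toy usage with random logits and random complementary labels
logits = torch.randn(8, 10, requires_grad=True)
comp_labels = torch.randint(0, 10, (8,))
loss = negative_learning_loss(logits, comp_labels)
loss.backward()
```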

An empirical study of a pruning mechanism

1 code implementation 1 Jan 2021 Minju Jung, Hyounguk Shon, Eojindl Yi, SungHyun Baek, Junmo Kim

For the pruning and retraining phase, whether the pruned-and-retrained network indeed benefits from the pretrained network is examined.

Association · Network Pruning
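As a rough sketch of the prune-and-retrain pipeline such a study examines (not the paper's exact protocol), the snippet below applies magnitude-based unstructured pruning with torch.nn.utils.prune and leaves a placeholder for retraining; the architecture and 50% pruning ratio are assumptions.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Prune 50% of the smallest-magnitude weights in each Linear layer, then
# retrain; whether retraining should start from the pretrained weights or
# from scratch is the kind of question such a pruning study compares.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# ... retrain `model` on the training set here ...

# Make the pruning permanent (remove the reparameterization masks)
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```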
