no code implementations • 27 Sep 2022 • Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim
Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.
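Since the entry centers on contrastive vision-language pre-training, here is a minimal sketch of a symmetric image-text contrastive (InfoNCE-style) objective. The embeddings, batch size, and temperature are illustrative assumptions, not the paper's actual model or code.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Normalize embeddings so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity logits between every image and every text in the batch.
    logits = image_emb @ text_emb.t() / temperature
    # Matching image-text pairs lie on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

# Random embeddings stand in for encoder outputs here.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```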
no code implementations • 17 Aug 2022 • Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim
We show that this allows us to design a linear model in which quadratic parameter regularization is the optimal continual learning policy, while at the same time enjoying the high performance of neural networks.
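To make the quadratic parameter regularization idea concrete, here is a minimal sketch of an EWC-style penalty for continual learning. The per-parameter importance weights, anchor parameters, and the coefficient `lam` are assumptions for illustration, not the paper's derivation.

```python
import torch

def quadratic_penalty(model, anchor_params, importance, lam=1.0):
    """Penalize deviation of the current parameters from those learned on
    previous tasks, weighted by a per-parameter importance estimate."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - anchor_params[name]) ** 2).sum()
    return lam * penalty

# During training on a new task, the total objective would be:
#   loss = task_loss + quadratic_penalty(model, anchor_params, importance)
```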
no code implementations • CVPR 2021 • Youngdong Kim, Juseung Yun, Hyounguk Shon, Junmo Kim
Because directly providing the given label to the data (Positive Learning; PL) risks letting CNNs memorize contaminated labels when the data are noisy, the indirect learning approach that uses complementary labels (Negative Learning for Noisy Labels; NLNL) has proven highly effective at preventing overfitting to noisy data, as it reduces the risk of providing a faulty target.
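As a reference point for the complementary-label idea, here is a minimal sketch of a negative learning loss: the model is told a class the sample does not belong to and is trained to assign it low probability. The function name and the sampling comment are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, complementary_labels, eps=1e-7):
    # Probability the model assigns to the complementary (known-wrong) class.
    probs = F.softmax(logits, dim=-1)
    p_comp = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
    # Push that probability toward zero: minimize -log(1 - p_comp).
    return -torch.log(1.0 - p_comp + eps).mean()

# Complementary labels can be sampled uniformly from the classes other than
# the (possibly noisy) given label.
```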
1 code implementation • 1 Jan 2021 • Minju Jung, Hyounguk Shon, Eojindl Yi, SungHyun Baek, Junmo Kim
For the pruning-and-retraining phase, we examine whether the pruned-and-retrained network indeed benefits from the pretrained network.
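For context on the pruning-and-retraining phase, here is a minimal sketch of global magnitude pruning followed by retraining. The sparsity level, the layers pruned, and the user-supplied `train_fn` loop are assumptions, not the setup used in the paper.

```python
import torch
import torch.nn.utils.prune as prune

def prune_and_retrain(model, train_fn, amount=0.8):
    # Globally prune the smallest-magnitude weights across Linear/Conv2d layers.
    params_to_prune = [
        (m, "weight") for m in model.modules()
        if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))
    ]
    prune.global_unstructured(
        params_to_prune, pruning_method=prune.L1Unstructured, amount=amount
    )
    # Retrain (fine-tune) the pruned network with a user-supplied training loop.
    train_fn(model)
    return model
```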