1 code implementation • 1 Jan 2021 • Minju Jung, Hyounguk Shon, Eojindl Yi, SungHyun Baek, Junmo Kim
For the pruning-and-retraining phase, we examine whether the pruned-and-retrained network indeed benefits from the pretrained network.
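The question can be made concrete with a single prune-and-retrain round. Below is a minimal PyTorch sketch, assuming magnitude (L1) pruning as the criterion; the function name and hyperparameters are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.utils.prune as prune

# One hypothetical prune-and-retrain round: magnitude pruning here stands
# in for whatever pruning criterion the paper actually studies.
def prune_and_retrain(model, train_loader, optimizer, loss_fn, amount=0.2, epochs=1):
    # Remove the smallest-magnitude weights from every conv/linear layer.
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)
    # Retrain: the pruning masks are re-applied on every forward pass.
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model
```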
no code implementations • CVPR 2021 • Youngdong Kim, Juseung Yun, Hyounguk Shon, Junmo Kim
Directly providing the label to the data (Positive Learning; PL) risks allowing CNNs to memorize contaminated labels when the data are noisy. Based on this fact, the indirect learning approach that uses complementary labels (Negative Learning for Noisy Labels; NLNL) has proven highly effective in preventing overfitting to noisy data, as it reduces the risk of providing faulty targets.
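As a point of reference, here is a minimal sketch of a negative-learning step, assuming the original NLNL formulation of minimizing -log(1 - p) on a randomly drawn complementary label; names and constants are illustrative.

```python
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, labels, num_classes):
    # Draw a complementary label uniformly from the classes != given label.
    offsets = torch.randint(1, num_classes, labels.shape, device=labels.device)
    comp_labels = (labels + offsets) % num_classes
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    # Push probability mass away from the complementary class:
    # the model only learns "this input is NOT class comp_label".
    return -torch.log(1.0 - p_comp + 1e-7).mean()
```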
no code implementations • 17 Aug 2022 • Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim
We show that this allows us to design a linear model in which quadratic parameter regularization emerges as the optimal continual learning policy, while at the same time enjoying the high performance of neural networks.
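For context, the quadratic parameter-regularization family the abstract refers to penalizes deviation from the previous task's weights, weighted by a per-parameter importance estimate (e.g. the Fisher information, as in EWC). A minimal sketch, with illustrative names:

```python
import torch

def quadratic_penalty(model, old_params, importance, lam=1.0):
    # old_params / importance: snapshots taken after the previous task,
    # keyed by parameter name (assumed precomputed elsewhere).
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]).pow(2)).sum()
    return 0.5 * lam * penalty

# total_loss = task_loss + quadratic_penalty(model, old_params, importance)
```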
no code implementations • 27 Sep 2022 • Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim
Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.
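The contrastive objective in question is typically the symmetric InfoNCE loss popularized by CLIP; the sketch below shows that generic form, not this paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    image_emb = F.normalize(image_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = image_emb @ text_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched image-text pairs sit on the diagonal; contrast both directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```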
no code implementations • 9 Jun 2023 • Dong-Jae Lee, Jae Young Lee, Hyounguk Shon, Eojindl Yi, Yeong-Hun Park, Sung-Sik Cho, Junmo Kim
While most lightweight monocular depth estimation methods have been developed using convolutional neural networks, Transformers have recently been adopted for monocular depth estimation as well.
no code implementations • ICCV 2023 • Seunghee Koh, Hyounguk Shon, Janghyeon Lee, Hyeong Gwon Hong, Junmo Kim
We measure whether the model successfully unlearns the source task using piggyback learning accuracy (PL accuracy).
no code implementations • 22 Dec 2023 • Chanho Lee, Jinsu Son, Hyounguk Shon, Yunho Jeon, Junmo Kim
Compared to state-of-the-art methods, our proposed method delivers comparable performance on DOTA-v1.0 and outperforms them by 1.5 mAP on DOTA-v1.5, all while significantly reducing the model parameters to 16%.