1 code implementation • 17 Feb 2020 • Janghyeon Lee, Donggyu Joo, Hyeong Gwon Hong, Junmo Kim
We propose a novel continual learning method called Residual Continual Learning (ResCL).
no code implementations • CVPR 2020 • Janghyeon Lee, Hyeong Gwon Hong, Donggyu Joo, Junmo Kim
We propose a quadratic penalty method for continual learning of neural networks that contain batch normalization (BN) layers.
1 code implementation • 23 Apr 2021 • Beomyoung Kim, Janghyeon Lee, Sihaeng Lee, Doyeon Kim, Junmo Kim
We present a novel approach for oriented object detection, named TricubeNet, which localizes oriented objects using visual cues (i.e., heatmaps) instead of oriented box offset regression.
no code implementations • 17 Aug 2022 • Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim
We show that this allows us to design a linear model in which a quadratic parameter regularization method emerges as the optimal continual learning policy, while at the same time enjoying the high performance of neural networks.
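The quadratic parameter regularization referred to here can be sketched as anchoring each parameter to its value after the previous task, weighted by a per-parameter importance estimate. This is a minimal numpy illustration of the general idea, not the papers' actual formulation; the function name, the importance weights, and the scalar `lam` are assumptions.

```python
import numpy as np

def quadratic_penalty_loss(task_loss, params, old_params, importance, lam=1.0):
    # Sketch (assumed form): total loss = current task loss plus a quadratic
    # penalty pulling each parameter toward its post-previous-task value,
    # scaled by an importance weight per parameter.
    penalty = sum(
        np.sum(w * (p - p_old) ** 2)
        for p, p_old, w in zip(params, old_params, importance)
    )
    return task_loss + lam * penalty
```

In practice the importance weights would come from an estimate such as the Fisher information, but any non-negative per-parameter weighting fits this template.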
no code implementations • 27 Sep 2022 • Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim
Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.
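A contrastive vision-language objective of the kind described can be sketched as a symmetric InfoNCE loss over normalized image and text embeddings, where matched pairs sit on the diagonal of the similarity matrix. This is a generic numpy illustration of the standard contrastive setup, not this paper's specific method; the function name and temperature value are assumptions.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Sketch: L2-normalize both modalities, compute pairwise similarities,
    # and apply cross-entropy in both directions with the diagonal
    # (matched image-text pairs) as the targets.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    n = logits.shape[0]

    def xent(l):
        # numerically stable log-softmax; pick out diagonal log-probs
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

The loss is minimized when each image embedding is most similar to its own caption's embedding and dissimilar to all others in the batch.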
no code implementations • ICCV 2023 • Seunghee Koh, Hyounguk Shon, Janghyeon Lee, Hyeong Gwon Hong, Junmo Kim
Whether the model successfully unlearns the source task is measured by piggyback learning accuracy (PL accuracy).