Search Results for author: Hyounguk Shon

Found 7 papers, 1 paper with code

An empirical study of a pruning mechanism

1 code implementation · 1 Jan 2021 · Minju Jung, Hyounguk Shon, Eojindl Yi, SungHyun Baek, Junmo Kim

For the pruning and retraining phase, we examine whether the pruned-and-retrained network indeed benefits from the pretrained network.

Network Pruning
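
As an illustration of the prune-then-retrain pipeline this abstract refers to, here is a minimal PyTorch sketch using magnitude (L1) pruning; the layer sizes, pruning ratio, and training loop are placeholder assumptions, not the paper's actual setup.

```python
# Minimal prune-then-retrain sketch (illustrative only, not the paper's exact mechanism).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
# Assume `model` has already been pretrained on the task.

# Prune 50% of the smallest-magnitude weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Retrain (fine-tune) the pruned network; the pruning masks keep removed weights at zero.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()
for x, y in [(torch.randn(32, 784), torch.randint(0, 10, (32,)))]:  # placeholder batch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```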

Joint Negative and Positive Learning for Noisy Labels

no code implementations · CVPR 2021 · Youngdong Kim, Juseung Yun, Hyounguk Shon, Junmo Kim

Directly providing the label to the data (Positive Learning; PL) risks allowing CNNs to memorize contaminated labels when the data is noisy. Based on this, the indirect learning approach that uses complementary labels (Negative Learning for Noisy Labels; NLNL) has proven highly effective in preventing overfitting to noisy data, as it reduces the risk of providing faulty targets.
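
For context, a minimal sketch of a negative-learning loss over complementary labels, assuming the common form -log(1 - p) on the complementary class; this illustrates the general NL idea the abstract describes, not the paper's exact joint objective.

```python
# Negative-learning loss sketch with complementary labels (assumed generic form).
import torch
import torch.nn.functional as F

def negative_learning_loss(logits, comp_labels, eps=1e-7):
    """logits: (N, C); comp_labels: (N,) classes the samples do NOT belong to."""
    probs = F.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    # Push down the probability assigned to the complementary (wrong) class.
    return -torch.log(1.0 - p_comp + eps).mean()

logits = torch.randn(8, 10, requires_grad=True)
comp_labels = torch.randint(0, 10, (8,))  # randomly sampled complementary labels
loss = negative_learning_loss(logits, comp_labels)
loss.backward()
```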

DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning

no code implementations · 17 Aug 2022 · Hyounguk Shon, Janghyeon Lee, Seung Hwan Kim, Junmo Kim

We show that this allows us to design a linear model in which quadratic parameter regularization is the optimal continual learning policy, while at the same time enjoying the high performance of neural networks.

Class Incremental Learning · Image Classification +1
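
A minimal sketch of what a quadratic parameter regularization penalty looks like in a continual-learning loss, assuming a generic EWC-style form with placeholder importance weights; the precise regularizer DLCFT derives for its linearized model may differ.

```python
# Generic quadratic parameter regularization for continual learning (sketch).
import torch
import torch.nn as nn

def quadratic_reg(model, prev_params, importance, lam=1.0):
    """Penalize squared deviation from parameters learned on previous tasks."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - prev_params[name]) ** 2).sum()
    return lam * penalty

model = nn.Linear(512, 10)  # linear head, as in linearized fine-tuning
prev_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder weights

loss_task = model(torch.randn(4, 512)).sum()  # stand-in for the new-task loss
loss = loss_task + quadratic_reg(model, prev_params, importance, lam=0.1)
loss.backward()
```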

UniCLIP: Unified Framework for Contrastive Language-Image Pre-training

no code implementations · 27 Sep 2022 · Janghyeon Lee, Jongsuk Kim, Hyounguk Shon, Bumsoo Kim, Seung Hwan Kim, Honglak Lee, Junmo Kim

Pre-training vision-language models with contrastive objectives has shown promising results that are both scalable to large uncurated datasets and transferable to many downstream applications.
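
For reference, a minimal sketch of a standard CLIP-style symmetric contrastive objective over matched image-text embeddings; this is only the generic baseline loss, not UniCLIP's unified formulation, and the embedding sizes and temperature are placeholder assumptions.

```python
# Generic symmetric image-text contrastive loss (CLIP-style sketch).
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (N, D) embeddings of matched image-text pairs."""
    image_emb = F.normalize(image_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = image_emb @ text_emb.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(image_emb.size(0))         # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

img = torch.randn(16, 256, requires_grad=True)
txt = torch.randn(16, 256, requires_grad=True)
loss = contrastive_loss(img, txt)
loss.backward()
```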

Lightweight Monocular Depth Estimation via Token-Sharing Transformer

no code implementations · 9 Jun 2023 · Dong-Jae Lee, Jae Young Lee, Hyounguk Shon, Eojindl Yi, Yeong-Hun Park, Sung-Sik Cho, Junmo Kim

While most lightweight monocular depth estimation methods have been developed using convolutional neural networks, Transformers have recently been gradually adopted for monocular depth estimation.

Depth Prediction · Monocular Depth Estimation

FRED: Towards a Full Rotation-Equivariance in Aerial Image Object Detection

no code implementations · 22 Dec 2023 · Chanho Lee, Jinsu Son, Hyounguk Shon, Yunho Jeon, Junmo Kim

Compared to state-of-the-art methods, our proposed method delivers comparable performance on DOTA-v1.0 and outperforms them by 1.5 mAP on DOTA-v1.5, all while significantly reducing the model parameters to 16%.

Data Augmentation · Object +4
