no code implementations • 19 Oct 2024 • Seulbi Lee, Jihyo Kim, Sangheum Hwang
Foundation models trained on internet-scale data have recently emerged with remarkable generalization capabilities, and their growing adoption has led to an expanding range of application domains.
1 code implementation • 25 Feb 2024 • Seungwon Seo, Suho Lee, Sangheum Hwang
By doing so, the queries and channel-mixing multi-layer perceptron (MLP) layers of a target model are fine-tuned on target tasks to learn how to effectively exploit the rich representations of pretrained models.
no code implementations • 5 Jan 2024 • SeokHyun Seo, Jinwoo Hong, JungWoo Chae, Kyungyul Kim, Sangheum Hwang
Through experimental analysis using attention maps in ViT, we observe that the rich representations deteriorate when trained on a small dataset.
no code implementations • 7 Apr 2023 • Jae-Hun Lee, Doyoung Yoon, ByeongMoon Ji, Kyungyul Kim, Sangheum Hwang
Linear probing (LP) (and $k$-NN) on the upstream dataset with labels (e.g., ImageNet) and transfer learning (TL) to various downstream datasets are commonly employed to evaluate the quality of visual representations learned via self-supervised learning (SSL).
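As a rough illustration of the $k$-NN evaluation protocol mentioned above, the sketch below classifies held-out feature vectors by majority vote among their nearest labeled training features; the toy features and labels are hypothetical stand-ins for representations produced by a frozen SSL encoder.

```python
# Minimal k-NN evaluation sketch on frozen features (toy data; in
# practice the features would come from a frozen pretrained encoder).

def knn_predict(train_feats, train_labels, query, k=3):
    """Classify a query vector by majority vote among its k nearest
    training features (squared Euclidean distance)."""
    dists = sorted(
        (sum((q - t) ** 2 for q, t in zip(query, feat)), label)
        for feat, label in zip(train_feats, train_labels)
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy "features": two well-separated clusters standing in for
# representations of two upstream classes.
train_feats = [(0.0, 0.1), (0.1, 0.0), (1.0, 0.9), (0.9, 1.0)]
train_labels = [0, 0, 1, 1]

pred = knn_predict(train_feats, train_labels, (0.05, 0.05), k=3)
```

Accuracy under this protocol depends only on the frozen features, which is why it is a common proxy for representation quality.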
1 code implementation • 3 Apr 2023 • Suho Lee, Seungwon Seo, Jihyo Kim, Yejin Lee, Sangheum Hwang
These limitations include a lack of principled ways to determine optimal hyperparameters, and performance degradation when the unlabeled target data fail to meet certain requirements, such as a closed-set label space and a label distribution identical to that of the source data.
Source-Free Domain Adaptation • Unsupervised Domain Adaptation
no code implementations • 25 Mar 2023 • Jihyo Kim, Jeonghyeon Kim, Sangheum Hwang
Active learning aims to identify the most informative data from an unlabeled data pool that enables a model to reach the desired accuracy rapidly.
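A common way to identify such informative samples is uncertainty sampling, sketched below: rank unlabeled pool samples by the entropy of the model's predictive distribution and query the most uncertain ones. This is a generic baseline for illustration, not necessarily the acquisition function used in the paper; the pool probabilities are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(pool_probs, budget):
    """Return indices of the `budget` pool samples whose predicted
    class distributions have the highest entropy."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:budget]

pool_probs = [
    [0.98, 0.01, 0.01],  # confident prediction -> uninformative
    [0.34, 0.33, 0.33],  # near-uniform -> most informative
    [0.70, 0.20, 0.10],
]
picked = select_most_uncertain(pool_probs, budget=1)
```

Labeling the selected samples and retraining then iterates until the desired accuracy (or the labeling budget) is reached.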
1 code implementation • 1 Dec 2021 • Jihyo Kim, Jiin Koo, Sangheum Hwang
Therefore, we introduce the unknown detection task, an integration of previous individual tasks, for a rigorous examination of the detection capability of deep neural networks on a wide spectrum of unknown samples.
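A minimal detector for such unknown samples is the maximum-softmax-probability (MSP) baseline sketched below: a sample is flagged as unknown when the model's top softmax confidence falls under a threshold. This is the standard baseline, shown only to illustrate the task setup, not the paper's specific method; the threshold and probabilities are hypothetical.

```python
def is_unknown(softmax_probs, threshold=0.5):
    """Flag a sample as 'unknown' when the maximum softmax
    probability falls below a confidence threshold (the common
    MSP baseline for unknown/OOD detection)."""
    return max(softmax_probs) < threshold

# A confident in-distribution-like prediction vs. a diffuse one.
flags = [is_unknown(p) for p in ([0.9, 0.05, 0.05], [0.4, 0.35, 0.25])]
```

Evaluating such a detector across a wide spectrum of unknowns (near- and far-out-of-distribution, corrupted inputs, etc.) is exactly what the integrated unknown detection task is meant to enable.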
1 code implementation • 2 Nov 2021 • Jeongeun Park, Seungyoun Shin, Sangheum Hwang, Sungjoon Choi
Robust learning methods aim to learn a clean target distribution from noisy and corrupted training data where a specific corruption pattern is often assumed a priori.
1 code implementation • ICML 2020 • Jooyoung Moon, Jihyo Kim, Younghak Shin, Sangheum Hwang
Despite the power of deep neural networks for a wide range of tasks, an overconfident prediction issue has limited their practical use in many safety-critical applications.
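To make the overconfidence issue concrete, the sketch below applies temperature scaling, a standard post-hoc calibration technique (named here as a generic illustration, not the paper's method): dividing logits by a temperature T > 1 softens the softmax distribution and lowers the reported confidence. The logits and temperature are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with a temperature; T > 1 softens
    the distribution and reduces peak confidence."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, 0.0]
conf_raw = max(softmax(logits))        # overconfident peak probability
conf_cal = max(softmax(logits, 2.5))   # softened by temperature
```

In safety-critical settings, a confidence estimate that better tracks the true likelihood of correctness is often as important as raw accuracy.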
1 code implementation • ICCV 2021 • Kyungyul Kim, ByeongMoon Ji, Doyoung Yoon, Sangheum Hwang
Hence, it can be interpreted within a framework of knowledge distillation as a student becomes a teacher itself.
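The "student becomes a teacher itself" idea can be sketched as blending the hard one-hot label with the model's own past prediction to form a progressively softened target; this is a minimal sketch under that reading of the abstract, with the mixing weight alpha and all probabilities hypothetical.

```python
def progressive_soft_target(one_hot, past_pred, alpha):
    """Blend the hard label with the model's own earlier prediction,
    so the model serves as its own teacher; alpha controls how much
    self-knowledge is distilled (typically grown over training)."""
    return [(1 - alpha) * h + alpha * p for h, p in zip(one_hot, past_pred)]

# Hard label for class 0, blended with a past prediction of the model.
target = progressive_soft_target([1.0, 0.0, 0.0], [0.6, 0.3, 0.1], alpha=0.5)
```

Training against such softened targets conveys inter-class similarity information that a one-hot label cannot, which is the usual benefit attributed to knowledge distillation.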
Ranked #1 on Multimodal Machine Translation on Multi30K (BLEU (DE-EN) metric)
no code implementations • 22 Jul 2018 • Mitko Veta, Yujing J. Heng, Nikolas Stathonikos, Babak Ehteshami Bejnordi, Francisco Beca, Thomas Wollmann, Karl Rohr, Manan A. Shah, Dayong Wang, Mikael Rousson, Martin Hedlund, David Tellez, Francesco Ciompi, Erwan Zerhouni, David Lanyi, Matheus Viana, Vassili Kovalev, Vitali Liauchuk, Hady Ahmady Phoulady, Talha Qaiser, Simon Graham, Nasir Rajpoot, Erik Sjöblom, Jesper Molin, Kyunghyun Paeng, Sangheum Hwang, Sunggyun Park, Zhipeng Jia, Eric I-Chao Chang, Yan Xu, Andrew H. Beck, Paul J. van Diest, Josien P. W. Pluim
The best performing automatic method for the first task achieved a quadratic-weighted Cohen's kappa score of $\kappa$ = 0.567, 95% CI [0.464, 0.671] between the predicted scores and the ground truth.
2 code implementations • 2 Aug 2017 • Sangheum Hwang, Sunggyun Park
We introduce an accurate lung segmentation model for chest radiographs based on deep convolutional neural networks.
1 code implementation • 21 Dec 2016 • Kyunghyun Paeng, Sangheum Hwang, Sunggyun Park, Minsoo Kim
We present a unified framework to predict tumor proliferation scores from breast histopathology whole slide images.
no code implementations • 4 Nov 2016 • Hyo-Eun Kim, Sangheum Hwang, Kyunghyun Cho
From the base model, we introduce a semantic noise modeling method that enables class-conditional perturbation on the latent space to enhance the representational power of the learned latent features.
no code implementations • 16 Feb 2016 • Hyo-Eun Kim, Sangheum Hwang
The unpooling-deconvolution combination helps to eliminate less discriminative features in the feature extraction stage, since the output features of the deconvolution layer are reconstructed from the most discriminative unpooled features instead of the raw ones.
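The unpooling step described above can be sketched in one dimension: max pooling records the argmax positions ("switches"), and unpooling places the pooled maxima back at those positions while zeroing everything else, so only the most discriminative activations survive. This is a toy 1-D sketch of the generic mechanism, not the paper's network.

```python
def max_pool_with_indices(row, size=2):
    """1-D max pooling that also records argmax positions (the
    'switches' needed later for unpooling)."""
    vals, idxs = [], []
    for start in range(0, len(row), size):
        window = row[start:start + size]
        j = max(range(len(window)), key=lambda k: window[k])
        vals.append(window[j])
        idxs.append(start + j)
    return vals, idxs

def unpool(vals, idxs, length):
    """Place pooled maxima back at their recorded positions and zero
    everything else, keeping only the most discriminative activations."""
    out = [0.0] * length
    for v, i in zip(vals, idxs):
        out[i] = v
    return out

row = [0.2, 0.9, 0.4, 0.1]
vals, idxs = max_pool_with_indices(row)
restored = unpool(vals, idxs, len(row))
```

A subsequent deconvolution (transposed convolution) then reconstructs features from this sparse, discriminative map rather than from the raw activations.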
no code implementations • 4 Feb 2016 • Sangheum Hwang, Hyo-Eun Kim
With the help of transfer learning, which adopts the weight parameters of a pre-trained network, the weakly supervised learning framework for object localization performs well because the pre-trained network already contains well-trained class-specific features.
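A common way such class-specific features are turned into a localization map is the class activation map (CAM) recipe, sketched below as a generic illustration (not necessarily the paper's exact method): the final convolutional feature maps are summed, weighted by the classifier weights of one class, and the peaks of the result indicate where that class appears. The feature maps and weights here are hypothetical.

```python
def class_activation_map(feature_maps, class_weights):
    """Weighted sum of final-conv feature maps using one class's
    classifier weights -- the standard CAM construction for
    weakly supervised localization."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, weight in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * fmap[i][j]
    return cam

feature_maps = [
    [[1.0, 0.0], [0.0, 0.0]],  # channel firing at top-left
    [[0.0, 0.0], [0.0, 1.0]],  # channel firing at bottom-right
]
cam = class_activation_map(feature_maps, class_weights=[0.9, 0.1])
```

Because only image-level labels are needed to train the classifier, the localization signal comes "for free", which is what makes the setting weakly supervised.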