Search Results for author: Sangheum Hwang

Found 15 papers, 8 papers with code

StochCA: A Novel Approach for Exploiting Pretrained Models with Cross-Attention

1 code implementation • 25 Feb 2024 • Seungwon Seo, Suho Lee, Sangheum Hwang

By doing so, the queries and channel-mixing multi-layer perceptron layers of a target model are fine-tuned on target tasks to learn how to effectively exploit the rich representations of pretrained models.

Domain Generalization Transfer Learning
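
A minimal sketch of the idea the StochCA abstract describes: during fine-tuning, the target model's queries occasionally attend to keys and values taken from the corresponding block of a frozen pretrained model. This is an illustrative reconstruction, not the authors' code; the crossover probability `p` and the frozen-reference wiring are assumptions.

```python
import torch
import torch.nn as nn


class StochasticCrossAttention(nn.Module):
    """Sketch: with probability p, attend to a frozen pretrained block's
    keys/values instead of the target model's own (otherwise self-attend)."""

    def __init__(self, dim, num_heads=8, p=0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.p = p  # assumed crossover probability (hyperparameter)

    def forward(self, x_target, x_pretrained):
        # x_target: tokens from the fine-tuned model (always provides queries)
        # x_pretrained: tokens from the frozen pretrained model (may provide K/V)
        if self.training and torch.rand(1).item() < self.p:
            kv = x_pretrained.detach()  # frozen reference, no gradients
        else:
            kv = x_target               # ordinary self-attention
        out, _ = self.attn(query=x_target, key=kv, value=kv)
        return out
```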

GTA: Guided Transfer of Spatial Attention from Object-Centric Representations

no code implementations • 5 Jan 2024 • SeokHyun Seo, Jinwoo Hong, JungWoo Chae, Kyungyul Kim, Sangheum Hwang

Through experimental analysis using attention maps in ViT, we observe that rich representations deteriorate when the model is trained on a small dataset.

Inductive Bias Object Localization +1
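
The title suggests transferring spatial attention from an object-centric (e.g., self-supervised) ViT as guidance during fine-tuning. One plausible reading, offered here only as an assumed reconstruction rather than the paper's exact loss, is an auxiliary term that matches the target model's attention maps to those of a frozen reference:

```python
import torch.nn.functional as F


def attention_guidance_loss(student_attn, teacher_attn):
    """Hypothetical guidance loss: penalize divergence between the target
    model's attention maps and those of a frozen object-centric ViT.

    Both tensors: (batch, heads, tokens, tokens), rows already softmaxed.
    """
    # KL divergence per attention row, averaged over the batch
    return F.kl_div(student_attn.clamp_min(1e-8).log(),
                    teacher_attn, reduction="batchmean")
```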

Rethinking Evaluation Protocols of Visual Representations Learned via Self-supervised Learning

no code implementations • 7 Apr 2023 • Jae-Hun Lee, Doyoung Yoon, ByeongMoon Ji, Kyungyul Kim, Sangheum Hwang

Linear probing (LP) (and $k$-NN) on the upstream dataset with labels (e.g., ImageNet) and transfer learning (TL) to various downstream datasets are commonly employed to evaluate the quality of visual representations learned via self-supervised learning (SSL).

Self-Supervised Learning Transfer Learning
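
The two protocols discussed are straightforward to express on top of frozen features. A minimal sketch with scikit-learn, assuming features from a frozen SSL encoder have already been extracted into arrays:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# X_train, y_train, X_test, y_test: precomputed frozen-encoder features
# and labels (assumed numpy arrays)

def linear_probe_acc(X_train, y_train, X_test, y_test):
    # LP: fit a linear classifier on frozen features, report accuracy
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf.score(X_test, y_test)

def knn_acc(X_train, y_train, X_test, y_test, k=20):
    # k-NN evaluation on the same frozen features
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return knn.score(X_test, y_test)
```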

Few-shot Fine-tuning is All You Need for Source-free Domain Adaptation

1 code implementation • 3 Apr 2023 • Suho Lee, Seungwon Seo, Jihyo Kim, Yejin Lee, Sangheum Hwang

These limitations include the lack of a principled way to determine optimal hyperparameters, and performance degradation when the unlabeled target data fail to meet certain requirements, such as a closed label set and a label distribution identical to that of the source data.

Source-Free Domain Adaptation Unsupervised Domain Adaptation
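
Taken at face value, the title's recipe is plain supervised fine-tuning of the source model on a handful of labeled target examples. A minimal PyTorch sketch, where the model, the few-shot loader, and all hyperparameters are assumptions:

```python
import torch
import torch.nn.functional as F

def few_shot_finetune(source_model, few_shot_loader, epochs=50, lr=1e-4):
    """Fine-tune a pretrained source model on a few labeled target samples."""
    opt = torch.optim.SGD(source_model.parameters(), lr=lr, momentum=0.9)
    source_model.train()
    for _ in range(epochs):
        for x, y in few_shot_loader:  # e.g., a few labeled images per class
            loss = F.cross_entropy(source_model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return source_model
```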

Deep Active Learning with Contrastive Learning Under Realistic Data Pool Assumptions

no code implementations • 25 Mar 2023 • Jihyo Kim, Jeonghyeon Kim, Sangheum Hwang

Active learning aims to identify the most informative data from an unlabeled data pool that enables a model to reach the desired accuracy rapidly.

Active Learning Contrastive Learning
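
A common instantiation of "most informative" is uncertainty sampling. The sketch below scores the unlabeled pool by predictive entropy and returns the top-k indices; it illustrates the generic active-learning query step, not this paper's contrastive method.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def entropy_query(model, pool_loader, k=100):
    """Pick the k pool samples with the highest predictive entropy."""
    model.eval()
    scores = []
    for x, _ in pool_loader:  # unlabeled pool; labels unused
        probs = F.softmax(model(x), dim=1)
        ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        scores.append(ent)
    scores = torch.cat(scores)
    return scores.topk(k).indices  # indices into the unlabeled pool
```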

A Unified Benchmark for the Unknown Detection Capability of Deep Neural Networks

1 code implementation • 1 Dec 2021 • Jihyo Kim, Jiin Koo, Sangheum Hwang

Therefore, we introduce the unknown detection task, an integration of previous individual tasks, for a rigorous examination of the detection capability of deep neural networks on a wide spectrum of unknown samples.

Open Set Learning Out-of-Distribution Detection
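
A standard baseline such a benchmark would include is the maximum softmax probability (MSP) score: inputs whose top softmax probability is low are flagged as unknown. A minimal sketch; the threshold is an assumed, validation-chosen hyperparameter.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Maximum softmax probability: low values suggest an unknown input."""
    return F.softmax(model(x), dim=1).max(dim=1).values

# flag inputs as unknown when MSP falls below a chosen threshold:
# is_unknown = msp_score(model, x) < threshold
```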

Elucidating Robust Learning with Uncertainty-Aware Corruption Pattern Estimation

1 code implementation • 2 Nov 2021 • Jeongeun Park, Seungyoun Shin, Sangheum Hwang, Sungjoon Choi

Robust learning methods aim to learn a clean target distribution from noisy and corrupted training data where a specific corruption pattern is often assumed a priori.
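
A common way to encode an assumed corruption pattern is a label-noise transition matrix used for forward loss correction. The sketch below shows that generic family with a learnable transition matrix; the paper's uncertainty-aware estimation is more elaborate than this illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardCorrection(nn.Module):
    """Generic forward correction: p(noisy y | x) = p(clean y | x) @ T."""

    def __init__(self, num_classes):
        super().__init__()
        # logits of T[i, j] = p(noisy j | clean i); init near identity
        self.T_logits = nn.Parameter(torch.eye(num_classes) * 4.0)

    def forward(self, clean_logits, noisy_targets):
        T = F.softmax(self.T_logits, dim=1)           # rows sum to 1
        clean_probs = F.softmax(clean_logits, dim=1)
        noisy_probs = clean_probs @ T                 # distribution over noisy labels
        return F.nll_loss(noisy_probs.clamp_min(1e-12).log(), noisy_targets)
```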

Confidence-Aware Learning for Deep Neural Networks

1 code implementation • ICML 2020 • Jooyoung Moon, Jihyo Kim, Younghak Shin, Sangheum Hwang

Despite the power of deep neural networks for a wide range of tasks, an overconfident prediction issue has limited their practical use in many safety-critical applications.

Active Learning Out-of-Distribution Detection
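
The paper's remedy (its Correctness Ranking Loss) encourages confidence estimates to be ordered by how often each sample has been classified correctly during training. A simplified pairwise sketch, assuming a running per-sample correctness frequency is tracked elsewhere in the training loop:

```python
import torch
import torch.nn.functional as F

def correctness_ranking_loss(confidences, correctness_freq, margin=0.0):
    """Simplified pairwise ranking: a sample classified correctly more often
    should receive higher confidence (max softmax probability).

    confidences:      (B,) max softmax probability per sample
    correctness_freq: (B,) running fraction of epochs each sample was correct
    """
    # compare each sample with a shifted copy of the batch (one pair each)
    c1, c2 = confidences, confidences.roll(1)
    f1, f2 = correctness_freq, correctness_freq.roll(1)
    target = torch.sign(f1 - f2)  # +1 if the first sample should rank higher
    return F.margin_ranking_loss(c1, c2, target, margin=margin)
```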

Accurate Lung Segmentation via Network-Wise Training of Convolutional Networks

2 code implementations • 2 Aug 2017 • Sangheum Hwang, Sunggyun Park

We introduce an accurate lung segmentation model for chest radiographs based on deep convolutional neural networks.

Segmentation

A Unified Framework for Tumor Proliferation Score Prediction in Breast Histopathology

1 code implementation • 21 Dec 2016 • Kyunghyun Paeng, Sangheum Hwang, Sunggyun Park, Minsoo Kim

We present a unified framework to predict tumor proliferation scores from breast histopathology whole slide images.

Mitosis Detection whole slide images

Semantic Noise Modeling for Better Representation Learning

no code implementations • 4 Nov 2016 • Hyo-Eun Kim, Sangheum Hwang, Kyunghyun Cho

From the base model, we introduce a semantic noise modeling method which enables class-conditional perturbation in latent space to enhance the representational power of the learned latent features.

Representation Learning
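
The abstract's key operation is a class-conditional perturbation applied in latent space. A minimal sketch under assumed details (one learnable noise scale per class; the surrounding encoder/decoder are placeholders), not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class ClassConditionalNoise(nn.Module):
    """Perturb latent features with noise whose scale depends on the label.

    Hypothetical instantiation: one learnable log-scale per class.
    """

    def __init__(self, num_classes, latent_dim):
        super().__init__()
        self.log_scale = nn.Parameter(torch.zeros(num_classes, latent_dim))

    def forward(self, z, y):
        # z: (B, latent_dim) latent features, y: (B,) integer class labels
        if not self.training:
            return z
        scale = self.log_scale[y].exp()
        return z + torch.randn_like(z) * scale
```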

Deconvolutional Feature Stacking for Weakly-Supervised Semantic Segmentation

no code implementations • 16 Feb 2016 • Hyo-Eun Kim, Sangheum Hwang

The unpooling-deconvolution combination helps to eliminate less discriminative features in the feature extraction stage, since output features of the deconvolution layer are reconstructed from the most discriminative unpooled features instead of the raw ones.

Lesion Segmentation Segmentation +2
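
The unpooling-deconvolution combination from the abstract maps directly onto standard PyTorch modules: max-pooling with saved switch indices, unpooling that restores only the most discriminative activations, then a deconvolution (transposed convolution) that reconstructs features from them. Shapes below are toy values chosen for illustration.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)  # keep switch locations
unpool = nn.MaxUnpool2d(2, stride=2)                   # restore only the maxima
deconv = nn.ConvTranspose2d(16, 16, kernel_size=3, padding=1)

x = torch.randn(1, 16, 32, 32)            # toy feature map
pooled, indices = pool(x)
sparse = unpool(pooled, indices)          # non-maximal activations are zeroed
reconstructed = deconv(sparse)            # features rebuilt from unpooled maxima
```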

Self-Transfer Learning for Fully Weakly Supervised Object Localization

no code implementations • 4 Feb 2016 • Sangheum Hwang, Hyo-Eun Kim

With the help of transfer learning which adopts weight parameters of a pre-trained network, the weakly supervised learning framework for object localization performs well because the pre-trained network already has well-trained class-specific features.

Object Transfer Learning +2
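
A common weakly supervised localization setup consistent with the abstract: take a pretrained backbone, add global average pooling plus a linear classifier, and read localization maps from the class-weighted feature maps (the CAM recipe, used here as an illustration rather than the paper's exact architecture; the 2-class head is hypothetical).

```python
import torch
import torchvision

# pretrained backbone supplies the class-specific features the abstract mentions
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
features = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop GAP + fc
classifier = torch.nn.Linear(512, 2)  # hypothetical 2-class target task

def class_activation_map(x, cls):
    fmap = features(x)                           # (B, 512, H, W)
    w = classifier.weight[cls]                   # (512,) weights for class cls
    return torch.einsum("bchw,c->bhw", fmap, w)  # localization heatmap
```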
