Search Results for author: Inkyu Shin

Found 13 papers, 2 papers with code

Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation

no code implementations 16 Dec 2022 Sungsu Hur, Inkyu Shin, KwanYong Park, Sanghyun Woo, In So Kweon

To successfully train our framework, we collect the partial, confident target samples that are classified as known or unknown through our proposed multi-criteria selection.

Universal Domain Adaptation
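The selection criteria themselves are not spelled out in the snippet above. As a rough illustration of how confident target samples might be split into known and unknown sets, here is a minimal sketch assuming softmax confidence and normalized entropy as two hypothetical criteria with made-up thresholds; it is not the paper's actual selection rule:

```python
import torch
import torch.nn.functional as F

def select_confident_targets(logits, conf_thresh=0.9, ent_low=0.3, ent_high=0.7):
    """Split unlabeled target samples into confident-known and confident-unknown
    sets using two illustrative criteria: softmax confidence and normalized
    prediction entropy (all thresholds are hypothetical, not from the paper)."""
    probs = F.softmax(logits, dim=1)                    # (N, num_known_classes)
    conf, pseudo_label = probs.max(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(probs.size(1))))  # scale to [0, 1]

    known_mask = (conf >= conf_thresh) & (entropy <= ent_low)    # confidently a known class
    unknown_mask = (entropy >= ent_high) & ~known_mask           # likely an open-set sample
    return known_mask, unknown_mask, pseudo_label

# toy usage on random logits over 10 known classes
known, unknown, labels = select_confident_targets(torch.randn(32, 10))
print(known.sum().item(), "confident-known,", unknown.sum().item(), "confident-unknown")
```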

CD-TTA: Compound Domain Test-time Adaptation for Semantic Segmentation

no code implementations 16 Dec 2022 Junha Song, KwanYong Park, Inkyu Shin, Sanghyun Woo, In So Kweon

Test-time adaptation (TTA) has attracted significant attention due to its practical properties, which enable the adaptation of a pre-trained model to a new domain using only the target dataset during the inference stage.

Denoising Online Clustering +1
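The CD-TTA method itself (compound target domains, online clustering) is not described in the snippet; the sketch below only illustrates the generic test-time adaptation setting it builds on, assuming entropy minimization over BatchNorm affine parameters in the style of TENT, which is an assumption and not the paper's algorithm:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Illustrative pre-trained model (random weights here); the class count is arbitrary.
model = torchvision.models.resnet18(num_classes=19)
model.requires_grad_(False)

# Adapt only BatchNorm affine parameters, a common TTA choice (an assumption, not CD-TTA).
bn_params = []
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.requires_grad_(True)
        m.bias.requires_grad_(True)
        bn_params += [m.weight, m.bias]
optimizer = torch.optim.SGD(bn_params, lr=1e-3)

def tta_step(target_batch):
    """One online adaptation step on an unlabeled target batch during inference."""
    model.train()                      # BN uses the current target batch statistics
    logits = model(target_batch)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()             # predictions for the current batch

preds = tta_step(torch.randn(4, 3, 224, 224))
```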

MATE: Masked Autoencoders are Online 3D Test-Time Learners

no code implementations 21 Nov 2022 M. Jehanzeb Mirza, Inkyu Shin, Wei Lin, Andreas Schriebl, Kunyang Sun, Jaesung Choe, Horst Possegger, Mateusz Kozinski, In So Kweon, Kuk-Jin Yoon, Horst Bischof

Like existing TTT methods, which focus on classifying 2D images in the presence of distribution shifts at test time, MATE also leverages test data for adaptation.

3D Object Classification Point Cloud Classification
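MATE's actual model is a masked-autoencoder transformer for point clouds; the sketch below is only a toy stand-in that keeps the test-time idea (mask part of the input point cloud, reconstruct it, take one self-supervised gradient step, then classify), with a deliberately tiny encoder/decoder and an assumed Chamfer reconstruction loss:

```python
import torch
import torch.nn as nn

class TinyPointAE(nn.Module):
    """Toy point-cloud autoencoder (not the MATE transformer): a shared MLP encoder
    with max pooling and an MLP decoder that regresses the masked points from the
    global feature."""
    def __init__(self, n_masked=256):
        super().__init__()
        self.n_masked = n_masked
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256))
        self.decoder = nn.Sequential(nn.Linear(256, 256), nn.ReLU(),
                                     nn.Linear(256, n_masked * 3))

    def forward(self, visible_pts):                           # (B, N_visible, 3)
        feat = self.encoder(visible_pts).max(dim=1).values    # (B, 256) global feature
        return self.decoder(feat).view(-1, self.n_masked, 3)  # predicted masked points

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets of shape (B, N, 3) and (B, M, 3)."""
    d = torch.cdist(a, b)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def test_time_step(model, classifier, pts, optimizer):
    """Mask part of the test point cloud, reconstruct it, take one self-supervised
    gradient step, then classify with the adapted encoder."""
    perm = torch.randperm(pts.shape[1])
    masked = pts[:, perm[:model.n_masked]]
    visible = pts[:, perm[model.n_masked:]]
    loss = chamfer(model(visible), masked)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    with torch.no_grad():
        feat = model.encoder(pts).max(dim=1).values
        return classifier(feat)

model = TinyPointAE()
classifier = nn.Linear(256, 40)                               # e.g. 40 shape classes
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
logits = test_time_step(model, classifier, torch.randn(2, 1024, 3), optimizer)
```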

Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging

no code implementations 13 Sep 2022 Joohyung Lee, Jieun Oh, Inkyu Shin, You-sung Kim, Dae Kyung Sohn, Tae-sung Kim, In So Kweon

In this study, we present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer using rectal MR volumes.

Image Classification Medical Image Classification
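The paper's exact network and input resolution are not given in the snippet; a minimal sketch of a volumetric CNN for binary T2-vs-T3 staging, with an assumed architecture and an assumed volume size:

```python
import torch
import torch.nn as nn

class Tiny3DClassifier(nn.Module):
    """Illustrative volumetric CNN (architecture and input size are assumptions, not
    the paper's network): 3D convolutions over an MR volume followed by a binary head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, 2)            # two classes: T2 vs. T3

    def forward(self, volume):                  # (B, 1, D, H, W) single-channel MR volume
        return self.head(self.features(volume).flatten(1))

logits = Tiny3DClassifier()(torch.randn(2, 1, 32, 128, 128))   # hypothetical volume size
```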

UDA-COPE: Unsupervised Domain Adaptation for Category-level Object Pose Estimation

no code implementations CVPR 2022 Taeyeop Lee, Byeong-Uk Lee, Inkyu Shin, Jaesung Choe, Ukcheol Shin, In So Kweon, Kuk-Jin Yoon

Inspired by recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target domain pose labels.

 Ranked #1 on 6D Pose Estimation using RGBD on REAL275 (mAP 10, 2cm metric)

6D Pose Estimation using RGBD Self-Supervised Learning +1
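A minimal sketch of the teacher-student idea mentioned above, assuming an EMA teacher whose pose predictions serve as pseudo labels for the student on unlabeled target data; any pseudo-label filtering and the paper's actual pose representation are omitted, and the regressor below is a hypothetical stand-in:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical pose regressor on precomputed features (stand-in for the real network).
student = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 9))
teacher = copy.deepcopy(student).requires_grad_(False)   # EMA teacher, no gradients
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def ema_update(teacher, student, momentum=0.999):
    """Teacher weights follow an exponential moving average of the student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1 - momentum)

def train_step(target_feats):
    """Student is supervised by the teacher's pose predictions on unlabeled target data."""
    with torch.no_grad():
        pseudo_pose = teacher(target_feats)               # pseudo pose label, no GT needed
    loss = nn.functional.smooth_l1_loss(student(target_feats), pseudo_pose)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    ema_update(teacher, student)
    return loss.item()

print(train_step(torch.randn(8, 128)))
```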

LabOR: Labeling Only if Required for Domain Adaptive Semantic Segmentation

no code implementations ICCV 2021 Inkyu Shin, Dong-Jin Kim, Jae Won Cho, Sanghyun Woo, KwanYong Park, In So Kweon

In order to find the uncertain points, we generate an inconsistency mask using the proposed adaptive pixel selector, and we label these segment-based regions to achieve near-supervised performance with only a small fraction (about 2.2%) of ground truth points, which we call "Segment based Pixel-Labeling (SPL)".

Semantic Segmentation Unsupervised Domain Adaptation
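One simple way to realize an inconsistency mask is to compare the predictions of two classifier heads on a shared backbone; the sketch below assumes exactly that, and does not reproduce the paper's adaptive pixel selector or the grouping into segment-based regions:

```python
import torch
import torch.nn as nn

# Illustrative two-head segmentation model (not the paper's architecture).
backbone = nn.Conv2d(3, 64, 3, padding=1)
head_a = nn.Conv2d(64, 19, 1)             # e.g. 19 Cityscapes classes
head_b = nn.Conv2d(64, 19, 1)

def inconsistency_mask(image):
    """Pixels where the two heads disagree are candidates to ask a human to label."""
    feat = backbone(image)
    pred_a = head_a(feat).argmax(dim=1)    # (B, H, W)
    pred_b = head_b(feat).argmax(dim=1)
    return pred_a != pred_b                # boolean mask of uncertain pixels

mask = inconsistency_mask(torch.randn(1, 3, 64, 64))
print(f"{mask.float().mean().item():.1%} of pixels flagged for labeling")
```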

Unsupervised Domain Adaptation for Video Semantic Segmentation

no code implementations23 Jul 2021 Inkyu Shin, KwanYong Park, Sanghyun Woo, In So Kweon

In this work, we present a new video extension of this task, namely Unsupervised Domain Adaptation for Video Semantic Segmentation.

Semantic Segmentation Unsupervised Domain Adaptation +1

Two-phase Pseudo Label Densification for Self-training based Domain Adaptation

no code implementations ECCV 2020 Inkyu Shin, Sanghyun Woo, Fei Pan, In So Kweon

However, since only the confident predictions are taken as pseudo labels, existing self-training approaches inevitably produce sparse pseudo labels in practice.

Pseudo Label Unsupervised Domain Adaptation
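The sparsity issue comes from standard confidence-thresholded pseudo labeling; the sketch below shows that baseline step (the threshold and ignore index are illustrative), which is what the paper's two-phase densification is designed to fix:

```python
import torch
import torch.nn.functional as F

def sparse_pseudo_labels(logits, threshold=0.9, ignore_index=255):
    """Standard self-training pseudo labels: keep only high-confidence pixels and
    mark the rest as ignore. The result is typically sparse."""
    probs = F.softmax(logits, dim=1)              # (B, C, H, W)
    conf, labels = probs.max(dim=1)               # per-pixel confidence and class
    labels[conf < threshold] = ignore_index       # low-confidence pixels are dropped
    return labels

labels = sparse_pseudo_labels(torch.randn(1, 19, 64, 64))
coverage = (labels != 255).float().mean().item()
print(f"pseudo-label coverage: {coverage:.1%}")
```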

Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation

2 code implementations CVPR 2019 Wonwoong Cho, Sungha Choi, David Keetae Park, Inkyu Shin, Jaegul Choo

However, applying this approach to image translation is computationally intensive and error-prone due to its high time complexity and non-trivial backpropagation.

Image-to-Image Translation Style Transfer +1
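The transformation in question is the whitening-and-coloring transform; the sketch below is the standard eigendecomposition-based WCT applied per channel group, which conveys where the cost comes from, but it is not the paper's learned group-wise approximation (the group count is arbitrary):

```python
import torch

def whitening_coloring(content, style, eps=1e-5):
    """Basic whitening-and-coloring transform on feature maps of shape (C, H, W):
    whiten content features to identity covariance, then color them with the style
    covariance. The eigendecompositions are what make this costly."""
    C, H, W = content.shape
    c = content.reshape(C, -1)
    s = style.reshape(C, -1)
    c = c - c.mean(dim=1, keepdim=True)
    s_mean = s.mean(dim=1, keepdim=True)
    s = s - s_mean

    cov_c = c @ c.T / (H * W - 1) + eps * torch.eye(C)
    cov_s = s @ s.T / (s.shape[1] - 1) + eps * torch.eye(C)
    ec, vc = torch.linalg.eigh(cov_c)          # eigendecomposition of content covariance
    es, vs = torch.linalg.eigh(cov_s)

    whiten = vc @ torch.diag(ec.clamp_min(eps).rsqrt()) @ vc.T
    color = vs @ torch.diag(es.clamp_min(eps).sqrt()) @ vs.T
    out = color @ (whiten @ c) + s_mean
    return out.reshape(C, H, W)

def groupwise_wct(content, style, groups=4):
    """Split channels into groups and apply WCT per group; smaller per-group
    covariances are the simplified version of the group-wise idea."""
    chunks = [whitening_coloring(cc, sc)
              for cc, sc in zip(content.chunk(groups), style.chunk(groups))]
    return torch.cat(chunks)

out = groupwise_wct(torch.randn(64, 32, 32), torch.randn(64, 32, 32))
```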
