no code implementations • 29 Mar 2024 • Sanghyun Woo, KwanYong Park, Inkyu Shin, Myungchul Kim, In So Kweon
Multi-target multi-camera tracking is a crucial task that involves identifying and tracking individuals over time using video streams from multiple cameras.
1 code implementation • 30 Nov 2023 • Ju He, Qihang Yu, Inkyu Shin, Xueqing Deng, Xiaohui Shen, Alan Yuille, Liang-Chieh Chen
To alleviate the issue, we propose to adapt the trajectory attention for both the dense pixel features and object queries, aiming to improve the short-term and long-term tracking results, respectively.
Ranked #1 on Video Panoptic Segmentation on VIPSeg
no code implementations • 10 Apr 2023 • Inkyu Shin, Dahun Kim, Qihang Yu, Jun Xie, Hong-Seok Kim, Bradley Green, In So Kweon, Kuk-Jin Yoon, Liang-Chieh Chen
The meta architecture of the proposed Video-kMaX consists of two components: a within-clip segmenter (for clip-level segmentation) and a cross-clip associater (for association beyond clips).
no code implementations • CVPR 2023 • Taeyeop Lee, Jonathan Tremblay, Valts Blukis, Bowen Wen, Byeong-Uk Lee, Inkyu Shin, Stan Birchfield, In So Kweon, Kuk-Jin Yoon
Unlike previous unsupervised domain adaptation methods for category-level object pose estimation, our approach processes the test data in a sequential, online manner, and it does not require access to the source domain at runtime.
no code implementations • 17 Mar 2023 • Daehan Kim, Minseok Seo, KwanYong Park, Inkyu Shin, Sanghyun Woo, In So Kweon, Dong-Geol Choi
Specifically, we achieve domain mixup in two steps: cut and paste.
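A cut-and-paste mixup of this kind can be sketched in a few lines of numpy; this is a minimal illustration of the two-step idea, not the paper's implementation (the function name and box format are assumptions):

```python
import numpy as np

def cut_and_paste(src_img, tgt_img, box):
    """Two-step domain mixup: cut a region from the target-domain image
    and paste it onto the source-domain image at the same location."""
    y0, y1, x0, x1 = box
    mixed = src_img.copy()
    mixed[y0:y1, x0:x1] = tgt_img[y0:y1, x0:x1]  # cut from target, paste onto source
    return mixed

# toy example: 8x8 single-channel "images" from two domains
src = np.zeros((8, 8), dtype=np.float32)
tgt = np.ones((8, 8), dtype=np.float32)
mixed = cut_and_paste(src, tgt, box=(2, 6, 2, 6))
```

The mixed image then carries pixels from both domains, which softens the domain boundary during training.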
no code implementations • 16 Dec 2022 • Junha Song, KwanYong Park, Inkyu Shin, Sanghyun Woo, Chaoning Zhang, In So Kweon
In addition, to prevent overfitting of the TTA model, we devise a novel regularization that modulates the adaptation rate using the domain similarity between the source and the current target domain.
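One plausible reading of such rate modulation is to scale the learning rate by a feature-space similarity between domains; the sketch below uses cosine similarity of mean features (the function name, the mapping to [0, 1], and the "more similar → smaller updates" direction are all illustrative assumptions, not the paper's formula):

```python
import numpy as np

def modulated_lr(base_lr, src_feats, tgt_feats):
    """Scale the adaptation rate by the cosine similarity between mean
    source and mean target features: when the current target looks like
    the source, adapt less (less risk of overfitting the TTA model)."""
    mu_s = src_feats.mean(axis=0)
    mu_t = tgt_feats.mean(axis=0)
    cos = mu_s @ mu_t / (np.linalg.norm(mu_s) * np.linalg.norm(mu_t) + 1e-8)
    sim = (cos + 1.0) / 2.0       # map cosine from [-1, 1] to [0, 1]
    return base_lr * (1.0 - sim)  # high similarity -> low adaptation rate
```

With identical source and target features the adaptation rate collapses toward zero, while dissimilar domains keep a larger update step.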
no code implementations • 16 Dec 2022 • Sungsu Hur, Inkyu Shin, KwanYong Park, Sanghyun Woo, In So Kweon
To successfully train our framework, we collect the partial, confident target samples that are classified as known or unknown through our proposed multi-criteria selection.
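As a toy illustration of selecting only confident known/unknown samples, one simple criterion is the maximum softmax probability: near-one means confidently known, near-uniform means confidently unknown, and everything in between is discarded. This single-criterion sketch is an assumption for illustration; the paper's actual multi-criteria selection is more involved:

```python
import numpy as np

def select_confident(probs, known_thr=0.9, unknown_thr=0.2):
    """Partial sample selection: keep a sample as confidently 'known' if
    its max class probability is high, or as confidently 'unknown' if its
    prediction is near uniform (low max probability); drop the rest."""
    max_p = probs.max(axis=-1)
    known = max_p >= known_thr
    unknown = max_p <= unknown_thr
    return known, unknown
```

Only the samples flagged by either mask would then be used for training, which is why the selected target set is partial.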
1 code implementation • ICCV 2023 • M. Jehanzeb Mirza, Inkyu Shin, Wei Lin, Andreas Schriebl, Kunyang Sun, Jaesung Choe, Horst Possegger, Mateusz Kozinski, In So Kweon, Kuk-Jin Yoon, Horst Bischof
Our MATE is the first Test-Time-Training (TTT) method designed for 3D data, which makes deep networks trained for point cloud classification robust to distribution shifts occurring in test data.
1 code implementation • 13 Sep 2022 • Joohyung Lee, Jieun Oh, Inkyu Shin, You-sung Kim, Dae Kyung Sohn, Tae-sung Kim, In So Kweon
In this study, we present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
no code implementations • CVPR 2022 • Inkyu Shin, Yi-Hsuan Tsai, Bingbing Zhuang, Samuel Schulter, Buyu Liu, Sparsh Garg, In So Kweon, Kuk-Jin Yoon
In this paper, we propose and explore a new multi-modal extension of test-time adaptation for 3D semantic segmentation.
no code implementations • CVPR 2022 • Taeyeop Lee, Byeong-Uk Lee, Inkyu Shin, Jaesung Choe, Ukcheol Shin, In So Kweon, Kuk-Jin Yoon
Inspired by recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target domain pose labels.
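The teacher half of such a teacher-student scheme is commonly maintained as an exponential moving average (EMA) of the student's weights, so that it produces stable pseudo targets. A minimal sketch of that update, with the momentum value chosen only for illustration:

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.999):
    """Update the teacher's weights as an exponential moving average of
    the student's; the slowly moving teacher supplies stable pseudo
    targets for self-supervised training without target pose labels."""
    return {k: momentum * teacher_params[k] + (1.0 - momentum) * student_params[k]
            for k in teacher_params}

teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
teacher = ema_update(teacher, student, momentum=0.9)
```

After one update the teacher has moved only a small step toward the student, which is exactly the stabilizing behavior the scheme relies on.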
Ranked #5 on 6D Pose Estimation using RGBD on REAL275
no code implementations • NeurIPS 2020 • KwanYong Park, Sanghyun Woo, Inkyu Shin, In So Kweon
The scheme first clusters compound target data based on style, discovering multiple latent domains (discover).
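The "discover" step can be illustrated with per-channel image statistics as a style proxy and a tiny k-means over them; both the style feature and the clustering routine below are generic stand-ins, not the paper's method:

```python
import numpy as np

def style_feature(img):
    """Per-channel mean and std, a common proxy for image 'style'."""
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def kmeans(feats, k, iters=20, seed=0):
    """Minimal k-means to discover latent sub-domains from style features."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(axis=0)
    return labels

# toy compound target: two dark images and two bright images
imgs = [np.full((4, 4, 3), v) for v in (0.0, 0.1, 0.9, 1.0)]
feats = np.stack([style_feature(im) for im in imgs])
labels = kmeans(feats, k=2)
```

On this toy data the two dark images fall into one latent domain and the two bright images into the other.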
no code implementations • ICCV 2021 • Inkyu Shin, Dong-Jin Kim, Jae Won Cho, Sanghyun Woo, KwanYong Park, In So Kweon
In order to find the uncertain points, we generate an inconsistency mask using the proposed adaptive pixel selector, and we label these segment-based regions to achieve near-supervised performance with only a small fraction (about 2.2%) of ground-truth points, which we call "Segment based Pixel-Labeling (SPL)".
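At its simplest, an inconsistency mask marks the pixels where two predictions disagree; the sketch below uses disagreement between two per-pixel class distributions (the two-prediction setup is an illustrative assumption, not the paper's adaptive pixel selector):

```python
import numpy as np

def inconsistency_mask(probs_a, probs_b):
    """Mark locations where two model predictions disagree on the class;
    these uncertain regions become candidates for human point labeling."""
    return probs_a.argmax(axis=-1) != probs_b.argmax(axis=-1)
```

Annotation effort is then spent only where the mask is true, which is how a small labeled fraction can recover most of the supervised performance.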
no code implementations • 23 Jul 2021 • Inkyu Shin, KwanYong Park, Sanghyun Woo, In So Kweon
In this work, we present a new video extension of this task, namely Unsupervised Domain Adaptation for Video Semantic Segmentation.
no code implementations • 1 Jan 2021 • Junsoo Lee, Hojoon Lee, Inkyu Shin, Jaekyoung Bae, In So Kweon, Jaegul Choo
Learning visual representations using large-scale unlabelled images is a holy grail for most computer vision tasks.
no code implementations • ECCV 2020 • Inkyu Shin, Sanghyun Woo, Fei Pan, In So Kweon
However, since only the confident predictions are taken as pseudo labels, existing self-training approaches inevitably produce sparse pseudo labels in practice.
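The sparsity described here follows directly from confidence thresholding: low-confidence pixels receive an ignore label and contribute nothing to training. A minimal sketch (threshold and ignore index are conventional choices, not taken from the paper):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9, ignore_index=255):
    """Keep only confident predictions as pseudo labels; everything else
    is ignored, which is why self-training labels end up sparse."""
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = ignore_index
    return labels
```

Raising the threshold makes the pseudo labels cleaner but sparser, which is the trade-off this line of work tries to escape.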
1 code implementation • CVPR 2020 • Fei Pan, Inkyu Shin, Francois Rameau, Seokju Lee, In So Kweon
Finally, to decrease the intra-domain gap, we propose to employ a self-supervised adaptation technique from the easy to the hard split.
Ranked #2 on Domain Adaptation on Synscapes-to-Cityscapes
2 code implementations • CVPR 2019 • Wonwoong Cho, Sungha Choi, David Keetae Park, Inkyu Shin, Jaegul Choo
However, applying this approach to image translation is computationally intensive and error-prone due to its high time complexity and non-trivial backpropagation.