1 code implementation • 21 Sep 2024 • EungGu Kang, Byeonghun Lee, Sunghoon Im, Kyong Hwan Jin
However, existing MFSR methods suffer from misalignment between the reference and source frames due to the limitations of DCN, such as small receptive fields and a predefined number of kernels.
Ranked #1 on Burst Image Super-Resolution on BurstSR
no code implementations • 16 Aug 2024 • Jihun Park, Jongmin Gim, Kyoungmin Lee, Seunghun Lee, Sunghoon Im
It ensures a seamless and harmonious style transfer across object regions.
1 code implementation • 10 Jul 2024 • Jaeyeul Kim, Jungwan Woo, Ukcheol Shin, Jean Oh, Sunghoon Im
In addition, Flow4D further improves performance by using five frames to take advantage of richer temporal information.
1 code implementation • 3 Jul 2024 • Seunghun Lee, Jiwan Seo, Kiljoon Han, Minwoo Choi, Sunghoon Im
In this paper, we introduce Context-Aware Video Instance Segmentation (CAVIS), a novel framework designed to enhance instance association by integrating contextual information adjacent to each object.
Ranked #1 on Video Instance Segmentation on OVIS validation (using extra training data)
1 code implementation • CVPR 2024 • Woo Kyoung Han, Sunghoon Im, Jaedeok Kim, Kyong Hwan Jin
We propose a practical approach to JPEG image decoding, utilizing a local implicit neural representation with continuous cosine formulation.
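As a rough illustration of the idea behind a continuous cosine formulation (not the paper's actual model), the inverse DCT underlying JPEG can be evaluated at arbitrary continuous coordinates rather than only at the 8x8 pixel grid; the function name here is illustrative:

```python
import numpy as np

def idct2_continuous(coeffs, x, y):
    """Evaluate the 2D inverse DCT of an 8x8 JPEG coefficient block
    at continuous coordinates (x, y) in [0, 8)."""
    N = 8
    u = np.arange(N)
    cu = np.where(u == 0, 1.0 / np.sqrt(2.0), 1.0)  # DC normalization factor
    # Separable cosine bases evaluated at continuous positions
    bx = cu * np.cos((2 * x + 1) * u * np.pi / (2 * N))
    by = cu * np.cos((2 * y + 1) * u * np.pi / (2 * N))
    return 0.25 * by @ coeffs @ bx  # 2/N scaling per axis gives 1/4

# Sanity check: a block with only the DC coefficient set decodes to a
# constant value everywhere, including at off-grid positions.
dc_only = np.zeros((8, 8))
dc_only[0, 0] = 16.0
print(idct2_continuous(dc_only, 0.0, 0.0))  # 2.0
print(idct2_continuous(dc_only, 3.7, 5.2))  # 2.0
```

Because the basis is evaluated analytically, pixels between integer sample positions are decoded directly, which is the property a local implicit representation exploits for arbitrary-scale decoding.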
no code implementations • 6 Mar 2024 • Wonhyeok Choi, Mingyu Shin, Hyukzae Lee, Jaehoon Cho, Jaehyeon Park, Sunghoon Im
Real-time processing is crucial in autonomous driving systems due to the imperative of instantaneous decision-making and rapid response.
no code implementations • NeurIPS 2023 • Wonhyeok Choi, Mingyu Shin, Sunghoon Im
Moreover, we introduce an auxiliary head for object-wise depth estimation, which enhances depth quality while maintaining the inference time.
no code implementations • 19 Dec 2023 • Jaeyeul Kim, Jungwan Woo, Jeonghoon Kim, Sunghoon Im
The DDFE module is meticulously designed to extract density-specific features within a single source domain, facilitating the recognition of objects sharing similar density characteristics across different LiDAR sensors.
no code implementations • 9 Oct 2023 • Sungho Moon, Jinwoo Bae, Sunghoon Im
In this paper, we conduct extensive experiments to analyze the factors that cause performance degradation.
1 code implementation • 4 Sep 2023 • Minsu Kim, Jaewon Lee, Byeonghun Lee, Sunghoon Im, Kyong Hwan Jin
Existing frameworks for image stitching often produce visually plausible stitching results.
no code implementations • CVPR 2023 • Wonhyeok Choi, Sunghoon Im
In this paper, we present a new MTL framework that searches for structures optimized for multiple tasks with diverse graph topologies and shares features among tasks.
no code implementations • 15 Feb 2023 • Hojin Kim, Seunghun Lee, Sunghoon Im
In this paper, we present offline-to-online knowledge distillation (OOKD) for video instance segmentation (VIS), which transfers a wealth of video knowledge from an offline model to an online model for consistent prediction.
1 code implementation • 9 Jan 2023 • Jinwoo Bae, Kyumin Hwang, Sunghoon Im
In this paper, we deeply investigate various backbone networks (e.g., CNN and Transformer models) toward the generalization of monocular depth estimation.
1 code implementation • 23 May 2022 • Jinwoo Bae, Sungho Moon, Sunghoon Im
In this paper, we investigate the backbone networks (e.g., CNNs, Transformers, and CNN-Transformer hybrid models) toward the generalization of monocular depth estimation.
no code implementations • CVPR 2022 • Seunghun Lee, Wonhyeok Choi, Changjae Kim, Minwoo Choi, Sunghoon Im
In this paper, we present a direct adaptation strategy (ADAS), which aims to directly adapt a single model to multiple target domains in a semantic segmentation task without pretrained domain-specific models.
Ranked #3 on Domain Adaptation on GTAV to Cityscapes+Mapillary
no code implementations • 25 Nov 2021 • Minjun Kang, Jaesung Choe, Hyowon Ha, Hae-Gon Jeon, Sunghoon Im, In So Kweon, Kuk-Jin Yoon
Many mobile manufacturers have recently adopted Dual-Pixel (DP) sensors in their flagship models for faster auto-focus and aesthetically pleasing image capture.
1 code implementation • 1 Nov 2021 • Dahoon Park, Kon-Woo Kwon, Sunghoon Im, Jaeha Kung
Many prior works on adversarial weight attack require not only the weight parameters but also the training or test dataset when searching for vulnerable bits to attack.
no code implementations • ICCV 2021 • Jaesung Choe, Sunghoon Im, Francois Rameau, Minjun Kang, In So Kweon
To reconstruct a 3D scene from a set of calibrated views, traditional multi-view stereo techniques rely on two distinct stages: local depth map computation and global depth map fusion.
1 code implementation • CVPR 2021 • Seunghun Lee, Sunghyun Cho, Sunghoon Im
Our model encodes individual representations of content (scene structure) and style (artistic appearance) from both source and target images.
Ranked #1 on Domain Adaptation on MNIST-to-MNIST-M
1 code implementation • 4 Feb 2021 • Seokju Lee, Sunghoon Im, Stephen Lin, In So Kweon
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Ranked #3 on Monocular Depth Estimation on Cityscapes
1 code implementation • 19 Dec 2019 • Seokju Lee, Sunghoon Im, Stephen Lin, In So Kweon
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
no code implementations • 16 Sep 2019 • Seokju Lee, Sunghoon Im, Stephen Lin, In So Kweon
Based on rigid projective geometry, the estimated stereo depth is used to guide the camera motion estimation, and the depth and camera motion are used to guide the residual flow estimation.
1 code implementation • ICLR 2019 • Sunghoon Im, Hae-Gon Jeon, Stephen Lin, In So Kweon
The cost volume is constructed using a differentiable warping process that allows for end-to-end training of the network.
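A plane-sweep cost volume of this kind can be sketched roughly as follows: warp the source features onto the reference view under each depth hypothesis, then stack reference and warped features per hypothesis. This toy version uses nearest-neighbor sampling; a trainable network would use differentiable bilinear sampling so gradients flow through the warp:

```python
import numpy as np

def plane_sweep_cost_volume(ref_feat, src_feat, K, R, t, depths):
    """Build a cost volume by warping source features onto the reference
    view for each fronto-parallel depth hypothesis."""
    C, H, W = ref_feat.shape
    Kinv = np.linalg.inv(K)
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    volume = np.zeros((len(depths), 2 * C, H, W))
    for i, d in enumerate(depths):
        # Homography induced by the plane at depth d (normal = optical axis)
        Hmat = K @ (R + np.outer(t, [0, 0, 1]) / d) @ Kinv
        q = Hmat @ pix
        q = q[:2] / q[2]
        xq = np.clip(np.round(q[0]).astype(int), 0, W - 1).reshape(H, W)
        yq = np.clip(np.round(q[1]).astype(int), 0, H - 1).reshape(H, W)
        warped = src_feat[:, yq, xq]          # sample source at warped coords
        volume[i] = np.concatenate([ref_feat, warped], axis=0)
    return volume  # shape: (D, 2C, H, W)

feat = np.random.rand(4, 16, 16)
vol = plane_sweep_cost_volume(feat, feat, np.eye(3),
                              np.eye(3), np.zeros(3), [1.0, 2.0, 4.0])
print(vol.shape)  # (3, 8, 16, 16)
```

With an identity pose the warp is the identity, so each depth slice simply duplicates the features; with real motion, the slice whose depth matches the true scene depth aligns best, which is the signal the network regresses depth from.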
no code implementations • CVPR 2018 • Sunghoon Im, Hae-Gon Jeon, In So Kweon
As demand for advanced photographic applications on hand-held devices grows, such devices require the capture of high-quality depth.
no code implementations • CVPR 2017 • Jaeheung Surh, Hae-Gon Jeon, Yunwon Park, Sunghoon Im, Hyowon Ha, In So Kweon
With the result from the FM, the role of a DfF pipeline is to determine and recalculate unreliable measurements while enhancing those that are reliable.
1 code implementation • CVPR 2016 • Hyowon Ha, Sunghoon Im, Jaesik Park, Hae-Gon Jeon, In So Kweon
We propose a novel approach that generates a high-quality depth map from a set of images captured with a small viewpoint variation, namely a small motion clip.
no code implementations • CVPR 2016 • Hae-Gon Jeon, Joon-Young Lee, Sunghoon Im, Hyowon Ha, In So Kweon
Consumer devices with stereo cameras have become popular because of their low-cost depth sensing capability.
no code implementations • ICCV 2015 • Sunghoon Im, Hyowon Ha, Gyeongmin Choe, Hae-Gon Jeon, Kyungdon Joo, In So Kweon
To address these problems, we introduce a novel 3D reconstruction method for narrow-baseline image sequences that effectively handles the rolling shutter effects present in most commercial digital cameras.