1 code implementation • 21 Oct 2024 • Hyun-Kurl Jang, Jihun Kim, Hyeokjun Kweon, Kuk-Jin Yoon
Semantic Scene Completion (SSC) aims to perform geometric completion and semantic segmentation simultaneously.
Ranked #1 on 3D Semantic Scene Completion on SemanticKITTI
1 code implementation • 1 Oct 2024 • Youngho Yoon, Hyun-Kurl Jang, Kuk-Jin Yoon
The construction of radiance fields on-the-fly in G-NeRF simplifies the NVS process, making it well-suited for real-world applications.
1 code implementation • 27 Aug 2024 • Taewoo Kim, Jaeseok Jeong, Hoonhee Cho, Yuhwan Jeong, Kuk-Jin Yoon
In low-light conditions, capturing videos with frame-based cameras often requires long exposure times, resulting in motion blur and reduced visibility.
1 code implementation • 27 Aug 2024 • Taewoo Kim, Hoonhee Cho, Kuk-Jin Yoon
To address this limitation, we aim to solve the video deblurring task by leveraging an event camera with micro-second temporal resolution.
no code implementations • 30 Jul 2024 • Hunmin Yang, Jongoh Jeong, Kuk-Jin Yoon
Recent vision-language foundation models, such as CLIP, have demonstrated superior capabilities in learning representations that are transferable across a diverse range of downstream tasks and domains.
no code implementations • 30 Jul 2024 • Hunmin Yang, Jongoh Jeong, Kuk-Jin Yoon
Deep neural networks are known to be vulnerable to security risks due to the inherent transferable nature of adversarial examples.
1 code implementation • 15 Jul 2024 • Yuhwan Jeong, Hoonhee Cho, Kuk-Jin Yoon
To overcome this limitation, we aim to alleviate the data imbalance by translating annotated daytime data into night events.
1 code implementation • 15 Jul 2024 • Hoonhee Cho, Jae-Young Kang, Kuk-Jin Yoon
To fully utilize the temporally dense and continuous nature of event cameras, we propose a novel temporal event stereo, a framework that continuously uses information from previous time steps.
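A minimal sketch of the idea of reusing information from previous time steps in event-based stereo, under assumed shapes and a toy cost volume; this is not the authors' architecture, only an illustration of carrying a temporal state across the event stream.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentEventStereo(nn.Module):
    """Illustrative sketch: stereo matching that reuses a hidden state
    propagated from previous time steps (not the paper's exact model)."""
    def __init__(self, feat_dim=32, max_disp=48):
        super().__init__()
        self.encoder = nn.Conv2d(5, feat_dim, 3, padding=1)  # 5-bin event voxel input (assumption)
        self.gru = nn.GRUCell(feat_dim, feat_dim)             # carries temporal context per pixel
        self.max_disp = max_disp

    def forward(self, ev_left, ev_right, hidden=None):
        fl, fr = self.encoder(ev_left), self.encoder(ev_right)          # B,C,H,W
        B, C, H, W = fl.shape
        if hidden is None:
            hidden = torch.zeros(B * H * W, C, device=fl.device)
        # fuse current left features with the propagated temporal state
        fl_flat = fl.permute(0, 2, 3, 1).reshape(-1, C)
        hidden = self.gru(fl_flat, hidden)
        fl = hidden.view(B, H, W, C).permute(0, 3, 1, 2)
        # plain correlation cost volume over candidate disparities
        costs = []
        for d in range(self.max_disp):
            fr_shift = F.pad(fr, (d, 0))[:, :, :, :W]
            costs.append((fl * fr_shift).mean(1))
        cost = torch.stack(costs, 1)                                     # B,D,H,W
        disp = torch.argmax(cost, dim=1).float()                         # soft-argmax omitted for brevity
        return disp, hidden

# usage: iterate over the event stream, reusing `hidden` across time steps
model = RecurrentEventStereo()
hidden = None
for t in range(3):
    ev_l, ev_r = torch.randn(1, 5, 32, 64), torch.randn(1, 5, 32, 64)
    disp, hidden = model(ev_l, ev_r, hidden)
```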
1 code implementation • 15 Jul 2024 • Hoonhee Cho, Sung-Hoon Yoon, Hyeokjun Kweon, Kuk-Jin Yoon
Event cameras excel in capturing high-contrast scenes and dynamic objects, offering a significant advantage over traditional frame-based cameras.
no code implementations • CVPR 2024 • Wooseong Jeong, Kuk-Jin Yoon
The goal of multi-task learning is to learn diverse tasks within a single unified network.
no code implementations • 4 Jun 2024 • Inkyu Shin, Qihang Yu, Xiaohui Shen, In So Kweon, Kuk-Jin Yoon, Liang-Chieh Chen
In the second stage, we leverage the reconstruction ability developed in the first stage to impose the temporal constraints on the video diffusion model.
1 code implementation • CVPR 2024 • Jaewoo Jeong, Daehee Park, Kuk-Jin Yoon
Our model effectively handles the multi-modality of human motion and the complexity of long-term multi-agent interactions, improving performance in complex environments.
no code implementations • 29 Mar 2024 • Byeongin Joung, Byeong-Uk Lee, Jaesung Choe, Ukcheol Shin, Minjun Kang, Taeyeop Lee, In So Kweon, Kuk-Jin Yoon
This paper proposes an algorithm for synthesizing novel views under few-shot setup.
1 code implementation • CVPR 2024 • Daehee Park, Jaeseok Jeong, Sung-Hoon Yoon, Jaewoo Jeong, Kuk-Jin Yoon
Our method surpasses the performance of existing state-of-the-art online learning methods in terms of both prediction accuracy and computational efficiency.
1 code implementation • CVPR 2024 • Hyeokjun Kweon, Jihun Kim, Kuk-Jin Yoon
It leverages a foundational image model as an artificial oracle within the active learning context, eliminating the need for manual annotation by a human oracle.
1 code implementation • CVPR 2024 • Hoonhee Cho, Taewoo Kim, Yuhwan Jeong, Kuk-Jin Yoon
In this paper, we propose a test-time adaptation method for event-based VFI to address the gap between the source and target domains.
1 code implementation • CVPR 2024 • Hyeokjun Kweon, Kuk-Jin Yoon
SSC performs prototype-based contrasting using SAM's automatic segmentation results.
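A minimal sketch of prototype-based contrasting driven by SAM's automatic masks, assuming prototypes are mean features inside each segment and an InfoNCE-style cross-entropy pulls each pixel toward its own segment's prototype; the loss form and shapes are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def prototype_contrast_loss(feats, sam_masks, temperature=0.1):
    """feats: (C, H, W) pixel features; sam_masks: (K, H, W) boolean masks from
    SAM's automatic segmentation. Sketch only, not the paper's exact loss."""
    C, H, W = feats.shape
    feats = F.normalize(feats.view(C, -1), dim=0)               # C, HW
    masks = sam_masks.view(sam_masks.shape[0], -1).float()      # K, HW
    # prototype = mean feature inside each SAM segment
    protos = masks @ feats.t() / masks.sum(1, keepdim=True).clamp(min=1)
    protos = F.normalize(protos, dim=1)                          # K, C
    logits = (protos @ feats) / temperature                      # K, HW pixel-to-prototype similarity
    target = masks.argmax(0)                                     # each pixel's own segment index
    valid = masks.sum(0) > 0                                     # ignore pixels outside all masks
    return F.cross_entropy(logits.t()[valid], target[valid])

feats = torch.randn(64, 32, 32, requires_grad=True)
masks = torch.zeros(3, 32, 32, dtype=torch.bool)
masks[0, :16], masks[1, 16:, :16], masks[2, 16:, 16:] = True, True, True
loss = prototype_contrast_loss(feats, masks)
loss.backward()
```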
no code implementations • CVPR 2024 • Taewoo Kim, Hoonhee Cho, Kuk-Jin Yoon
Despite notable progress in video deblurring, it remains a challenging problem because motion information is lost over the duration of the exposure time.
1 code implementation • CVPR 2024 • Yujeong Chae, Hyeonseong Kim, Kuk-Jin Yoon
Detecting objects in 3D under various (normal and adverse) weather conditions is essential for safe autonomous driving systems.
1 code implementation • CVPR 2024 • Sung-Hoon Yoon, Hoyong Kwon, Hyeonseong Kim, Kuk-Jin Yoon
This work proposes a novel WSSS framework with Class Token Infusion (CTI).
Weakly-Supervised Semantic Segmentation
1 code implementation • 26 Dec 2023 • Daehee Park, Jaewoo Jeong, Kuk-Jin Yoon
To address this limitation, we propose a method based on continuous and stochastic representations of Neural Stochastic Differential Equations (NSDE) for alleviating discrepancies due to data acquisition strategy.
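A hedged sketch of how a Neural Stochastic Differential Equation can produce continuous, stochastic trajectory rollouts: Euler-Maruyama integration with small drift and diffusion networks. The architecture, step size, and horizon are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class NeuralSDE(nn.Module):
    """Sketch of an NSDE for trajectory rollout: dx = f(x, t) dt + g(x, t) dW
    (drift/diffusion nets are illustrative stand-ins)."""
    def __init__(self, state_dim=2, hidden=64):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(state_dim + 1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, state_dim))
        self.diffusion = nn.Sequential(nn.Linear(state_dim + 1, hidden), nn.Tanh(),
                                       nn.Linear(hidden, state_dim), nn.Softplus())

    def rollout(self, x0, horizon=12, dt=0.1, n_samples=5):
        """Euler-Maruyama integration; returns (n_samples, B, horizon, state_dim)."""
        trajs = []
        for _ in range(n_samples):
            x, traj = x0.clone(), []
            for k in range(horizon):
                t = torch.full((x.shape[0], 1), k * dt)
                inp = torch.cat([x, t], dim=-1)
                noise = torch.randn_like(x) * (dt ** 0.5)        # Brownian increment
                x = x + self.drift(inp) * dt + self.diffusion(inp) * noise
                traj.append(x)
            trajs.append(torch.stack(traj, dim=1))
        return torch.stack(trajs)

sde = NeuralSDE()
futures = sde.rollout(torch.zeros(1, 2))   # stochastic samples give multiple possible futures
```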
1 code implementation • ICCV 2023 • Hoonhee Cho, Hyeonseong Kim, Yujeong Chae, Kuk-Jin Yoon
To this end, we propose a joint formulation of object recognition and image reconstruction in a complementary manner.
1 code implementation • 4 Aug 2023 • Hwan-Soo Choi, Jongoh Jeong, Young Hoo Cho, Kuk-Jin Yoon, Jong-Hwan Kim
Sensor fusion approaches for intelligent self-driving agents remain key to driving scene understanding, given the visual global contexts acquired from input sensors.
no code implementations • 24 May 2023 • Daehee Park, Hobin Ryu, Yunseo Yang, Jegyeong Cho, Jiwon Kim, Kuk-Jin Yoon
We also model the interaction using a probabilistic distribution, which allows for multiple possible future interactions.
Ranked #4 on Trajectory Prediction on nuScenes
no code implementations • 10 Apr 2023 • Inkyu Shin, Dahun Kim, Qihang Yu, Jun Xie, Hong-Seok Kim, Bradley Green, In So Kweon, Kuk-Jin Yoon, Liang-Chieh Chen
The meta architecture of the proposed Video-kMaX consists of two components: a within-clip segmenter (for clip-level segmentation) and a cross-clip associater (for association beyond clips).
no code implementations • CVPR 2023 • Taeyeop Lee, Jonathan Tremblay, Valts Blukis, Bowen Wen, Byeong-Uk Lee, Inkyu Shin, Stan Birchfield, In So Kweon, Kuk-Jin Yoon
Unlike previous unsupervised domain adaptation methods for category-level object pose estimation, our approach processes the test data in a sequential, online manner, and it does not require access to the source domain at runtime.
1 code implementation • CVPR 2023 • Taewoo Kim, Yujeong Chae, Hyun-Kurl Jang, Kuk-Jin Yoon
Video Frame Interpolation (VFI) aims to generate intermediate video frames between consecutive input frames.
no code implementations • CVPR 2023 • Hoonhee Cho, Jegyeong Cho, Kuk-Jin Yoon
To tackle this issue, we propose a novel unsupervised domain Adaptive Dense Event Stereo (ADES), which resolves gaps between the different domains and input modalities.
no code implementations • CVPR 2023 • Youngho Yoon, Kuk-Jin Yoon
We perform multi-view image super-resolution (MVSR) on train-view images during the radiance fields optimization process.
1 code implementation • CVPR 2023 • Hyeonseong Kim, Yoonsu Kang, Changgyoon Oh, Kuk-Jin Yoon
In this paper, we propose a single domain generalization method for LiDAR semantic segmentation (DGLSS) that aims to ensure good performance not only in the source domain but also in the unseen domain by learning only on the source domain.
1 code implementation • CVPR 2023 • Hyeokjun Kweon, Sung-Hoon Yoon, Kuk-Jin Yoon
To bring this idea into WSSS, we simultaneously train two models: a classifier generating CAMs that decompose an image into segments and a reconstructor that measures the inferability between the segments.
Ranked #14 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
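A minimal sketch of the classifier/reconstructor interplay described above, under explicit assumptions: a class-agnostic CAM decomposes the image into foreground and background segments, and a reconstructor tries to infer one segment from the other; the reconstruction error is used as an inferability signal. The networks, the class-agnostic pooling, and the loss direction are illustrative, not the paper's exact scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(16, 20, 1))              # 20-class CAM head (illustrative)
reconstructor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 3, 3, padding=1))

img = torch.randn(2, 3, 64, 64)
cam = torch.sigmoid(classifier(img)).max(1, keepdim=True).values   # class-agnostic foreground map
fg, bg = img * cam, img * (1 - cam)                                 # CAM decomposes the image into segments
recon_fg = reconstructor(bg)                                        # try to infer foreground from background
inferability = F.mse_loss(recon_fg, fg)                             # reconstruction error as the inferability signal
inferability.backward()                                             # gradients reach both classifier and reconstructor
```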
no code implementations • ICCV 2023 • Hoonhee Cho, Yuhwan Jeong, Taewoo Kim, Kuk-Jin Yoon
Motion deblurring from a blurred image is a challenging computer vision problem because frame-based cameras lose information during the blurring process.
no code implementations • ICCV 2023 • Jihun Kim, Hyeokjun Kweon, Yunseo Yang, Kuk-Jin Yoon
Our main idea is to generate multiple incomplete point clouds of various poses and integrate them into a complete point cloud.
no code implementations • 21 Oct 2022 • Valts Blukis, Taeyeop Lee, Jonathan Tremblay, Bowen Wen, In So Kweon, Kuk-Jin Yoon, Dieter Fox, Stan Birchfield
At test-time, we build the representation from a single RGB input image observing the scene from only one viewpoint.
no code implementations • 18 May 2022 • Kyeongseob Song, Kuk-Jin Yoon
Monocular depth estimation has been extensively explored based on deep learning, yet its accuracy and generalization ability still lag far behind those of stereo-based methods.
no code implementations • CVPR 2022 • Inkyu Shin, Yi-Hsuan Tsai, Bingbing Zhuang, Samuel Schulter, Buyu Liu, Sparsh Garg, In So Kweon, Kuk-Jin Yoon
In this paper, we propose and explore a new multi-modal extension of test-time adaptation for 3D semantic segmentation.
no code implementations • 4 Apr 2022 • Pranjay Shyam, Sandeep Singh Sengar, Kuk-Jin Yoon, Kyung-Soo Kim
The limited dynamic range of commercial compact camera sensors results in an inaccurate representation of scenes with varying illumination conditions, adversely affecting image quality and subsequently limiting the performance of underlying image processing algorithms.
no code implementations • 26 Feb 2022 • Pranjay Shyam, Antyanta Bangunharcana, Kuk-Jin Yoon, Kyung-Soo Kim
This framework allows us to achieve a domain generalized semantic segmentation algorithm with consistent performance without prior information of the target domain while relying on a single source.
1 code implementation • CVPR 2022 • Yeongwoo Nam, Mohammad Mostafavi, Kuk-Jin Yoon, Jonghyun Choi
To alleviate the event missing or overriding issue, we propose to learn to concentrate on the dense events to produce a compact event representation with high details for depth estimation.
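A hedged sketch of one way to concentrate on dense events when building a compact representation for depth: bin events into a temporal voxel grid, predict per-bin attention so densely populated bins contribute more, and compress the result. The module and channel sizes are assumptions, not the paper's representation.

```python
import torch
import torch.nn as nn

class EventConcentration(nn.Module):
    """Illustrative: attention-weighted compression of a binned event stream."""
    def __init__(self, bins=10, out_ch=4):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(bins, bins, 3, padding=1), nn.Sigmoid())
        self.compress = nn.Conv2d(bins, out_ch, 1)

    def forward(self, voxel):                       # voxel: B, bins, H, W
        weighted = voxel * self.attn(voxel)         # emphasize dense, informative bins
        return self.compress(weighted)              # compact representation for a depth head

voxel = torch.rand(1, 10, 60, 80)                   # stand-in for a binned event stream
compact = EventConcentration()(voxel)
```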
no code implementations • CVPR 2022 • Pranjay Shyam, Kyung-Soo Kim, Kuk-Jin Yoon
Visual degradations caused by motion blur, raindrop, rain, snow, illumination, and fog deteriorate image quality and, subsequently, the performance of perception algorithms deployed in outdoor conditions.
no code implementations • CVPR 2022 • Youngho Yoon, Inchul Chung, Lin Wang, Kuk-Jin Yoon
In this paper, we propose SphereSR, a novel framework to generate a continuous spherical image representation from an LR 360° image, with the goal of predicting the RGB values at given spherical coordinates for super-resolution with an arbitrary 360° image projection.
no code implementations • 13 Dec 2021 • Taewoo Kim, Jeongmin Lee, Lin Wang, Kuk-Jin Yoon
To this end, we first derive a new formulation for event-guided motion deblurring by considering the exposure and readout time in the video frame acquisition process.
no code implementations • 13 Dec 2021 • Youngho Yoon, Inchul Chung, Lin Wang, Kuk-Jin Yoon
We then propose a spherical local implicit image function (SLIIF) to predict RGB values at the spherical coordinates.
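A minimal sketch of a local implicit image function queried at spherical coordinates: sample a local feature from the LR feature map via an equirectangular mapping, concatenate the (theta, phi) query, and decode RGB with an MLP. The sampling scheme and MLP are assumptions, not the exact SLIIF design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphericalImplicitFn(nn.Module):
    """Illustrative implicit function: (local feature, theta, phi) -> RGB."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_ch + 2, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, feat_map, theta, phi):
        """feat_map: B,C,H,W LR features; theta in [-pi/2, pi/2], phi in [-pi, pi]."""
        B = feat_map.shape[0]
        # map spherical coordinates to the equirectangular grid of the LR feature map
        u = phi / torch.pi                            # x in [-1, 1]
        v = theta / (torch.pi / 2)                    # y in [-1, 1]
        grid = torch.stack([u, v], dim=-1).view(1, 1, -1, 2).expand(B, -1, -1, -1)
        local = F.grid_sample(feat_map, grid, align_corners=True)   # B,C,1,N
        local = local.squeeze(2).permute(0, 2, 1)                    # B,N,C
        coords = torch.stack([theta, phi], dim=-1).unsqueeze(0).expand(B, -1, -1)
        return torch.sigmoid(self.mlp(torch.cat([local, coords], dim=-1)))  # B,N,3 RGB

feat = torch.randn(1, 32, 64, 128)
theta = torch.rand(100) * torch.pi - torch.pi / 2
phi = torch.rand(100) * 2 * torch.pi - torch.pi
rgb = SphericalImplicitFn()(feat, theta, phi)        # query arbitrary spherical positions
```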
no code implementations • 12 Dec 2021 • Changgyoon Oh, Wonjune Cho, Daehee Park, Yujeong Chae, Lin Wang, Kuk-Jin Yoon
Providing omnidirectional depth along with RGB information is important for numerous applications, e.g., VR/AR.
no code implementations • 12 Dec 2021 • Hyeokjun Kweon, Hyeonseong Kim, Yoonsu Kang, Youngho Yoon, Wooseong Jeong, Kuk-Jin Yoon
In this paper, instead of relying on the homography-based warp, we propose a novel deep image stitching framework exploiting the pixel-wise warp field to handle the large-parallax problem.
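A minimal sketch of stitching with a pixel-wise warp field rather than a single homography: a network predicts a dense displacement for the target image, which is then resampled onto the reference view and blended. The tiny flow network and naive blending are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpFieldStitcher(nn.Module):
    """Illustrative: per-pixel warp field instead of a global homography."""
    def __init__(self):
        super().__init__()
        self.flow_net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(32, 2, 3, padding=1))   # per-pixel (dx, dy)

    def forward(self, ref, tgt):
        B, _, H, W = ref.shape
        flow = self.flow_net(torch.cat([ref, tgt], dim=1))              # B,2,H,W warp field
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1).to(ref)
        grid = base + flow.permute(0, 2, 3, 1)                          # displace every pixel independently
        warped = F.grid_sample(tgt, grid, align_corners=True)
        return 0.5 * (ref + warped)                                     # naive blend of the aligned pair

ref, tgt = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
panorama = WarpFieldStitcher()(ref, tgt)
```

Because every pixel gets its own displacement, locally varying parallax can be absorbed where a single homography cannot.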
no code implementations • 10 Dec 2021 • Sung-Hoon Yoon, Hyeokjun Kweon, Jaeseok Jeong, Hyeonseong Kim, Shinjeong Kim, Kuk-Jin Yoon
In our framework, with the help of the proposed Regional Contrastive Module (RCM) and Multi-scale Attentive Module (MAM), MainNet is trained by self-supervision from the SupportNet.
Weakly-Supervised Semantic Segmentation
no code implementations • 25 Nov 2021 • Minjun Kang, Jaesung Choe, Hyowon Ha, Hae-Gon Jeon, Sunghoon Im, In So Kweon, Kuk-Jin Yoon
Many mobile manufacturers have recently adopted Dual-Pixel (DP) sensors in their flagship models for faster auto-focus and aesthetic image captures.
1 code implementation • CVPR 2021 • Lin Wang, Yujeong Chae, Sung-Hoon Yoon, Tae-Kyun Kim, Kuk-Jin Yoon
To enable KD across the unpaired modalities, we first propose a bidirectional modality reconstruction (BMR) module to bridge both modalities and simultaneously exploit them to distill knowledge via the crafted pairs, causing no extra computation in the inference.
Ranked #7 on Event-based Object Segmentation on MVSEC-SEG
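A hedged sketch of the bidirectional-reconstruction idea for unpaired cross-modal distillation described above: two light translators map image features to event features and back, and the crafted (translated) pairs carry the distillation signal without any extra inference-time cost. The layer sizes, losses, and pairing are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

img_encoder = nn.Conv2d(3, 32, 3, padding=1)     # teacher branch on images (stand-in)
ev_encoder = nn.Conv2d(5, 32, 3, padding=1)       # student branch on event voxels
img2ev = nn.Conv2d(32, 32, 1)                     # modality translators bridging the two feature spaces
ev2img = nn.Conv2d(32, 32, 1)

img, ev = torch.rand(2, 3, 64, 64), torch.rand(2, 5, 64, 64)   # unpaired samples
f_img, f_ev = img_encoder(img), ev_encoder(ev)

# bidirectional reconstruction ties the two modalities together
recon_loss = F.l1_loss(ev2img(img2ev(f_img)), f_img) + F.l1_loss(img2ev(ev2img(f_ev)), f_ev)
# distill teacher knowledge to the student through the crafted pairs
kd_loss = F.l1_loss(f_ev, img2ev(f_img).detach())
(recon_loss + kd_loss).backward()                  # only ev_encoder is needed at inference time
```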
no code implementations • CVPR 2022 • Taeyeop Lee, Byeong-Uk Lee, Inkyu Shin, Jaesung Choe, Ukcheol Shin, In So Kweon, Kuk-Jin Yoon
Inspired by recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target domain pose labels.
Ranked #5 on 6D Pose Estimation using RGBD on REAL275
1 code implementation • 20 Oct 2021 • Lin Wang, Kuk-Jin Yoon
High dynamic range (HDR) imaging is a technique that allows an extensive dynamic range of exposures, which is important in image processing, computer graphics, and computer vision.
1 code implementation • 28 Sep 2021 • Yujeong Chae, Lin Wang, Kuk-Jin Yoon
Importantly, to find the part whose edge structure is most similar to the target, we propose to correlate the embedded events at two timestamps to compute the target edge similarity.
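A minimal sketch of correlating event embeddings from two timestamps, under assumed shapes and a stand-in encoder: a template embedding around the target at t0 is cross-correlated against the search-region embedding at t1, and the peak of the response map marks the most similar edge structure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Conv2d(5, 16, 3, padding=1)           # shared event embedding (illustrative)

template_ev = torch.rand(1, 5, 16, 16)              # events around the target at t0
search_ev = torch.rand(1, 5, 64, 64)                # events in the search region at t1

z = F.normalize(encoder(template_ev), dim=1)        # 1,16,16,16
x = F.normalize(encoder(search_ev), dim=1)          # 1,16,64,64
similarity = F.conv2d(x, z)                          # cross-correlation response map: 1,1,49,49
peak = similarity.flatten().argmax()                 # highest response -> most similar edge structure
```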
1 code implementation • ICCV 2021 • Lin Wang, Yujeong Chae, Kuk-Jin Yoon
Event cameras are novel sensors that perceive the per-pixel intensity changes and output asynchronous event streams with high dynamic range and less motion blur.
Ranked #4 on Event-based Object Segmentation on MVSEC-SEG
no code implementations • 10 Mar 2021 • Pranjay Shyam, Sandeep Singh Sengar, Kuk-Jin Yoon, Kyung-Soo Kim
However, as these techniques destroy the spatial relationship with neighboring regions, performance can deteriorate when they are used to train algorithms designed for low-level vision tasks (low-light image enhancement, image dehazing, deblurring, etc.).
no code implementations • 9 Jan 2021 • Pranjay Shyam, Kuk-Jin Yoon, Kyung-Soo Kim
However, owing to the fixed receptive field of convolutional kernels and the non-uniform haze distribution, ensuring consistency between regions is difficult.
1 code implementation • ICCV 2021 • Hyeokjun Kweon, Sung-Hoon Yoon, Hyeonseong Kim, Daehee Park, Kuk-Jin Yoon
In this paper, we review the potential of the pre-trained classifier which is trained on the raw images.
Ranked #31 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
no code implementations • ICCV 2021 • Haoang Li, Kai Chen, Pyojin Kim, Kuk-Jin Yoon, Zhe Liu, Kyungdon Joo, Yun-hui Liu
Based on this map, we can detect all the VPs.
no code implementations • ICCV 2021 • Mohammad Mostafavi, Kuk-Jin Yoon, Jonghyun Choi
Event cameras report scene movements as an asynchronous stream of data called events.
2 code implementations • 13 Apr 2020 • Lin Wang, Kuk-Jin Yoon
To achieve faster speeds and to handle the problems caused by the lack of data, knowledge distillation (KD) has been proposed to transfer information learned from one model to another.
1 code implementation • CVPR 2020 • Lin Wang, Tae-Kyun Kim, Kuk-Jin Yoon
While each phase is mainly for one of the three tasks, the networks in earlier phases are fine-tuned by respective loss functions in an end-to-end manner.
no code implementations • 6 Jan 2020 • Lin Wang, Wonjune Cho, Kuk-Jin Yoon
However, most previous works have focused on image classification tasks, and adversarial perturbations have never been studied for image-to-image (Im2Im) translation tasks, which have shown great success in handling paired and/or unpaired mapping problems in autonomous driving and robotics.
1 code implementation • CVPR 2020 • S. Mohammad Mostafavi I., Jonghyun Choi, Kuk-Jin Yoon
An event camera detects per-pixel intensity differences and produces an asynchronous event stream with low latency, high dynamic range, and low power consumption.
no code implementations • 14 Apr 2019 • Chang-Ryeol Lee, Ju Hong Yoon, Min-Gyu Park, Kuk-Jin Yoon
The rolling shutter camera has received great attention due to its low-cost imaging capability; however, estimating the relative pose between rolling shutter cameras remains a difficult problem owing to their line-by-line image capture.
2 code implementations • 20 Nov 2018 • Yeonkun Lee, Jaeseok Jeong, Jongseob Yun, Wonjune Cho, Kuk-Jin Yoon
This method minimizes the variance of the spatial resolving power on the sphere surface, and includes new convolution and pooling methods for the proposed representation.
no code implementations • CVPR 2019 • S. Mohammad Mostafavi I., Lin Wang, Yo-Sung Ho, Kuk-Jin Yoon
Event cameras have a lot of advantages over traditional cameras, such as low latency, high temporal resolution, and high dynamic range.
no code implementations • 7 Dec 2017 • Han-Mu Park, Kuk-Jin Yoon
We describe each pair of graphs by combining multiple attributes, then jointly match them in a unified framework.
no code implementations • 1 Dec 2017 • Yeong-Jun Cho, Kuk-Jin Yoon
The proposed distance-based topology can be applied adaptively to each person according to their speed, and handles the diverse transition times of people between non-overlapping cameras.
no code implementations • 1 Dec 2017 • Chang-Ryeol Lee, Kuk-Jin Yoon
Relative pose estimation is a fundamental problem in computer vision and it has been studied for conventional global shutter cameras for decades.
no code implementations • 3 Oct 2017 • Yeong-Jun Cho, Su-A Kim, Jae-Han Park, Kyuewang Lee, Kuk-Jin Yoon
Person re-identification is the task of recognizing or identifying a person across multiple views in multi-camera networks.
no code implementations • ICCV 2017 • Yeong Won Kim, Chang-Ryeol Lee, Dae-Yong Cho, Yong Hoon Kwon, Hyeok-Jae Choi, Kuk-Jin Yoon
Finally, the temporal consistency for image projection is enforced for producing temporally stable normal-view videos.
no code implementations • 17 May 2017 • Yeong-Jun Cho, Kuk-Jin Yoon
Person re-identification is the problem of recognizing people across different images or videos with non-overlapping views.
no code implementations • ICCV 2017 • Jeong-Kyun Lee, Jae-Won Yea, Min-Gyu Park, Kuk-Jin Yoon
In this paper, we propose a novel method to jointly solve scene layout estimation and global registration problems for accurate indoor 3D reconstruction.
no code implementations • 24 Apr 2017 • Chang-Ryeol Lee, Kuk-Jin Yoon
However, the MVO still has trouble handling the RS distortion when the camera motion changes abruptly (e.g., vibration of mobile cameras instantaneously causes extremely fast motion).
no code implementations • 24 Apr 2017 • Han-Mu Park, Kuk-Jin Yoon
In this work, we propose a novel multi-attributed graph matching algorithm based on the multi-layer graph factorization.
no code implementations • 24 Apr 2017 • Yeong-Jun Cho, Jae-Han Park, Su-A Kim, Kyuewang Lee, Kuk-Jin Yoon
Person re-identification in large-scale multi-camera networks is a challenging task because of the spatio-temporal uncertainty and high complexity due to large numbers of cameras and people.
no code implementations • CVPR 2016 • Ju Hong Yoon, Chang-Ryeol Lee, Ming-Hsuan Yang, Kuk-Jin Yoon
In addition, to further improve the robustness of data association against mis-detections and clutters, a novel event aggregation approach is developed to integrate structural constraints in assignment costs for online MOT.
Ranked #29 on Multiple Object Tracking on KITTI Test (Online Methods) (MOTA metric)
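A hedged sketch of data association with structural constraints folded into the assignment cost, solved as an optimal one-to-one assignment; the specific cost terms and weights are illustrative, not the paper's event-aggregation formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_pos, det_pos, track_app, det_app, w_struct=0.5):
    """track_pos/det_pos: (N,2)/(M,2) positions; *_app: (N,D)/(M,D) appearance."""
    app_cost = np.linalg.norm(track_app[:, None] - det_app[None], axis=-1)     # N,M appearance distance
    # structural term: how much each pairing distorts distances to the other tracks
    track_d = np.linalg.norm(track_pos[:, None] - track_pos[None], axis=-1)    # N,N
    det_d = np.linalg.norm(det_pos[:, None] - det_pos[None], axis=-1)          # M,M
    struct_cost = np.abs(track_d.mean(1)[:, None] - det_d.mean(1)[None])       # N,M
    cost = app_cost + w_struct * struct_cost
    rows, cols = linear_sum_assignment(cost)       # optimal assignment, robust to a few bad appearance scores
    return list(zip(rows, cols))

tracks, dets = np.random.rand(3, 2), np.random.rand(4, 2)
matches = associate(tracks, dets, np.random.rand(3, 8), np.random.rand(4, 8))
```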
no code implementations • CVPR 2016 • Yeong-Jun Cho, Kuk-Jin Yoon
Person re-identification is the problem of recognizing people across images or videos from non-overlapping views.
no code implementations • CVPR 2015 • Min-Gyu Park, Kuk-Jin Yoon
We propose a new approach to associate supervised learning-based confidence prediction with the stereo matching problem.
no code implementations • CVPR 2015 • Jeong-Kyun Lee, Kuk-Jin Yoon
The proposed method does not require the Manhattan world assumption, and can perform a highly accurate estimation of camera orientation in real time.
no code implementations • CVPR 2014 • Seung-Hwan Bae, Kuk-Jin Yoon
We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence.
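A minimal sketch of a tracklet confidence that combines detectability (fraction of frames with a matched detection) and continuity (penalizing gaps); the exact weighting is an assumption, not the paper's formula.

```python
import math

def tracklet_confidence(n_detected, length, longest_gap, beta=0.2):
    # detectability: how often the tracklet was actually supported by a detection
    detectability = n_detected / max(length, 1)
    # continuity: long interruptions reduce confidence exponentially
    continuity = math.exp(-beta * longest_gap)
    return detectability * continuity

print(tracklet_confidence(n_detected=18, length=20, longest_gap=2))  # ~0.60
```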