1 code implementation • 9 Jun 2022 • Miran Heo, Sukjun Hwang, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim
Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens.
Ranked #1 on Video Instance Segmentation on OVIS validation
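The entry above describes distilling an image detector's object-specific contexts into object tokens. A minimal sketch under assumed shapes and module names (not the paper's actual architecture): per-frame object embeddings produced by a detector are flattened into tokens, and a small set of learnable video-level queries attends over them with a standard transformer decoder.

```python
import torch
import torch.nn as nn

class ObjectTokenDecoder(nn.Module):
    """Hypothetical sketch: decode video-level queries from per-frame object tokens."""
    def __init__(self, dim=256, num_queries=20, num_layers=3):
        super().__init__()
        self.video_queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, frame_object_tokens):
        # frame_object_tokens: [B, T, N, C], N object embeddings per frame,
        # e.g. pooled from an image object detector's per-instance features.
        B, T, N, C = frame_object_tokens.shape
        memory = frame_object_tokens.reshape(B, T * N, C)    # all object tokens
        queries = self.video_queries.weight.unsqueeze(0).expand(B, -1, -1)
        return self.decoder(queries, memory)                 # [B, num_queries, C]

# usage with random features standing in for detector outputs
tokens = torch.randn(2, 5, 10, 256)       # 2 clips, 5 frames, 10 objects each
video_level = ObjectTokenDecoder()(tokens)
print(video_level.shape)                  # torch.Size([2, 20, 256])
```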
1 code implementation • CVPR 2022 • Sukjun Hwang, Miran Heo, Seoung Wug Oh, Seon Joo Kim
The set classifier can be plugged into existing object trackers and significantly improves the performance of long-tailed object tracking.
no code implementations • CVPR 2022 • Hyolim Kang, Jinwoo Kim, Taehyun Kim, Seon Joo Kim
Generic Event Boundary Detection (GEBD) is a newly suggested video understanding task that aims to find one-level-deeper semantic boundaries of events.
1 code implementation • CVPR 2022 • Su Ho Han, Sukjun Hwang, Seoung Wug Oh, Yeonchool Park, Hyunwoo Kim, Min-Jung Kim, Seon Joo Kim
We also introduce cooperatively operating modules that aggregate information from available frames, in order to enrich the features for all subtasks in VIS.
no code implementations • 29 Nov 2021 • Hyolim Kang, Jinwoo Kim, Taehyun Kim, Seon Joo Kim
Generic Event Boundary Detection (GEBD) is a newly suggested video understanding task that aims to find one-level-deeper semantic boundaries of events.
1 code implementation • 22 Jun 2021 • Hyolim Kang, Jinwoo Kim, KyungMin Kim, Taehyun Kim, Seon Joo Kim
Generic Event Boundary Detection (GEBD) is a newly introduced task that aims to detect "general" event boundaries that correspond to natural human perception.
1 code implementation • CVPR 2021 • Younghyun Jo, Seon Joo Kim
We train a deep SR network with a small receptive field and transfer the output values of the learned deep model to the LUT.
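The entry above describes transferring a trained SR network with a tiny receptive field into a look-up table. Below is a toy sketch of that transfer, assuming a hypothetical 2x2-receptive-field network, coarse uniform input sampling, and nearest-level lookup; the actual method uses finer sampling with interpolated lookup and a rotational ensemble.

```python
import itertools
import torch
import torch.nn as nn

SCALE, LEVELS = 2, 9          # toy settings: x2 SR, 9 uniform input levels
STEP = 256 // (LEVELS - 1)

# hypothetical tiny SR net with a 2x2 receptive field (stands in for the trained model)
net = nn.Sequential(nn.Conv2d(1, 32, 2), nn.ReLU(),
                    nn.Conv2d(32, SCALE * SCALE, 1))

@torch.no_grad()
def build_lut(net):
    """Run every quantized 2x2 input patch through the net and cache the outputs."""
    lut = torch.zeros(LEVELS, LEVELS, LEVELS, LEVELS, SCALE * SCALE)
    for idx in itertools.product(range(LEVELS), repeat=4):
        patch = torch.tensor(idx, dtype=torch.float32).view(1, 1, 2, 2) * STEP / 255.0
        lut[idx] = net(patch).flatten()
    return lut                # 9^4 = 6561 entries in this toy setting

lut = build_lut(net)

@torch.no_grad()
def lut_forward(lr, lut):
    """Nearest-level lookup (the real method interpolates between sampled levels)."""
    q = torch.clamp((lr * 255 / STEP).round().long(), 0, LEVELS - 1)   # [H, W]
    H, W = q.shape
    out = torch.zeros(H * SCALE, W * SCALE)                 # borders left empty here
    for y in range(H - 1):
        for x in range(W - 1):
            vals = lut[q[y, x], q[y, x + 1], q[y + 1, x], q[y + 1, x + 1]]
            out[y*SCALE:(y+1)*SCALE, x*SCALE:(x+1)*SCALE] = vals.view(SCALE, SCALE)
    return out

hr = lut_forward(torch.rand(16, 16), lut)
print(hr.shape)               # torch.Size([32, 32])
```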
1 code implementation • CVPR 2021 • Younghyun Jo, Seoung Wug Oh, Peter Vajda, Seon Joo Kim
Due to the one-to-many nature of the super-resolution (SR) problem, a single low-resolution (LR) image can be mapped to many high-resolution (HR) images.
1 code implementation • NeurIPS 2021 • Sukjun Hwang, Miran Heo, Seoung Wug Oh, Seon Joo Kim
We propose a novel end-to-end solution for video instance segmentation (VIS) based on transformers.
Ranked #10 on Video Instance Segmentation on YouTube-VIS validation
no code implementations • CVPR 2021 • Gunhee Nam, Miran Heo, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim
Since existing datasets are not suitable for validating our method, we build a new polygonal point set tracking dataset and demonstrate the superior performance of our method over baselines and existing contour-based VOS methods.
no code implementations • 16 Apr 2021 • Young Hwi Kim, Seonghyeon Nam, Seon Joo Kim
Many video understanding tasks work in the offline setting, assuming that the input video is available from start to end.
no code implementations • ICCV 2021 • Hyolim Kang, KyungMin Kim, Yumin Ko, Seon Joo Kim
Temporal action localization has been one of the most popular tasks in video understanding, due to the importance of detecting action instances in videos.
1 code implementation • ICCV 2021 • Dongyoung Kim, Jinwoo Kim, Seonghyeon Nam, Dongwoo Lee, Yeonkyung Lee, Nahyup Kang, Hyong-Euk Lee, ByungIn Yoo, Jae-Joon Han, Seon Joo Kim
Images in our dataset are mostly captured with illuminants existing in the scene, and the ground-truth illumination is computed by taking the difference between images captured under different illumination combinations.
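As a rough illustration of the ground-truth computation mentioned above, the sketch below estimates the chromaticity of an added light source from a linear image pair captured with and without it; function and variable names are illustrative, not the paper's pipeline.

```python
import numpy as np

def illuminant_from_pair(img_ambient, img_ambient_plus_light, mask=None):
    """Estimate the chromaticity of the added light from a linear image pair.

    The difference image isolates the contribution of the extra illuminant,
    so its average color (optionally over a mask, e.g. a gray surface)
    gives the illuminant's RGB chromaticity.
    """
    diff = np.clip(img_ambient_plus_light.astype(np.float64)
                   - img_ambient.astype(np.float64), 0, None)
    if mask is not None:
        diff = diff[mask]
    rgb = diff.reshape(-1, 3).mean(axis=0)
    return rgb / (rgb.sum() + 1e-12)      # normalized chromaticity

# toy example: a scene lit by an extra reddish light
ambient = np.random.rand(8, 8, 3) * 0.3
extra = ambient + np.array([0.4, 0.2, 0.1])          # added illumination
print(illuminant_from_pair(ambient, extra))           # ~[0.57, 0.29, 0.14]
```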
no code implementations • 3 Dec 2020 • Sukjun Hwang, Seoung Wug Oh, Seon Joo Kim
Panoptic segmentation, a novel task that unifies instance segmentation and semantic segmentation, has recently attracted a lot of attention.
no code implementations • ECCV 2020 • Subin Jeon, Seonghyeon Nam, Seoung Wug Oh, Seon Joo Kim
To reduce the training-testing discrepancy of self-supervised learning, we additionally introduce a novel cross-identity training scheme.
no code implementations • 3 May 2020 • Kai Zhang, Shuhang Gu, Radu Timofte, Taizhang Shang, Qiuju Dai, Shengchen Zhu, Tong Yang, Yandong Guo, Younghyun Jo, Sejong Yang, Seon Joo Kim, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Jing Liu, Kwangjin Yoon, Taegyun Jeon, Kazutoshi Akita, Takeru Ooba, Norimichi Ukita, Zhipeng Luo, Yuehan Yao, Zhenyu Xu, Dongliang He, Wenhao Wu, Yukang Ding, Chao Li, Fu Li, Shilei Wen, Jianwei Li, Fuzhi Yang, Huan Yang, Jianlong Fu, Byung-Hoon Kim, JaeHyun Baek, Jong Chul Ye, Yuchen Fan, Thomas S. Huang, Junyeop Lee, Bokyeung Lee, Jungki Min, Gwantae Kim, Kanghyu Lee, Jaihyun Park, Mykola Mykhailych, Haoyu Zhong, Yukai Shi, Xiaojun Yang, Zhijing Yang, Liang Lin, Tongtong Zhao, Jinjia Peng, Huibing Wang, Zhi Jin, Jiahao Wu, Yifu Chen, Chenming Shang, Huanrong Zhang, Jeongki Min, Hrishikesh P. S, Densen Puthussery, Jiji C. V
This paper reviews the NTIRE 2020 challenge on perceptual extreme super-resolution with focus on proposed solutions and results.
1 code implementation • ECCV 2020 • Jaeyeon Kang, Younghyun Jo, Seoung Wug Oh, Peter Vajda, Seon Joo Kim
Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems, and their performance has recently been improving with the incorporation of deep learning.
no code implementations • 20 Mar 2020 • Gunhee Nam, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim
We propose a novel memory-based tracker via part-level dense memory and voting-based retrieval, called DMV.
no code implementations • 20 Mar 2020 • Younghyun Jo, Jaeyeon Kang, Seoung Wug Oh, Seonghyeon Nam, Peter Vajda, Seon Joo Kim
Our framework is similar to GANs in that we iteratively train two networks - a generator and a loss network.
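The entry above only says that a generator and a loss network are trained in alternation. The following is a generic, hedged sketch of such an alternating loop with placeholder architectures and losses; it is not the paper's training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# placeholder networks: a generator and a learned loss (critic-like) network
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
loss_net = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
l_opt = torch.optim.Adam(loss_net.parameters(), lr=1e-4)

def training_step(inp, target):
    # 1) update the loss network to score real images high and generated ones low
    fake = generator(inp).detach()
    l_loss = F.relu(1 - loss_net(target)).mean() + F.relu(1 + loss_net(fake)).mean()
    l_opt.zero_grad(); l_loss.backward(); l_opt.step()

    # 2) update the generator against the current loss network plus a pixel loss
    fake = generator(inp)
    g_loss = (-loss_net(fake)).mean() + F.l1_loss(fake, target)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return l_loss.item(), g_loss.item()

x = torch.rand(2, 3, 32, 32)   # toy batch; real data would be degraded/clean pairs
print(training_step(x, x))
```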
1 code implementation • NeurIPS 2019 • Yunji Kim, Seonghyeon Nam, In Cho, Seon Joo Kim
To generate future frames, we first detect keypoints of a moving object and predict future motion as a sequence of keypoints.
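As a sketch of predicting future motion as a sequence of keypoints, the snippet below autoregressively forecasts keypoint coordinates with an LSTM; the keypoint detector and the frame generator from the paper are out of scope, and the shapes and module names are assumptions.

```python
import torch
import torch.nn as nn

class KeypointForecaster(nn.Module):
    """Hypothetical sketch: autoregressively predict future keypoint coordinates."""
    def __init__(self, num_kp=10, hidden=128):
        super().__init__()
        self.num_kp = num_kp
        self.rnn = nn.LSTM(input_size=num_kp * 2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_kp * 2)

    def forward(self, past_kp, horizon=5):
        # past_kp: [B, T, num_kp, 2] keypoints detected on observed frames
        B, T, K, _ = past_kp.shape
        seq = past_kp.reshape(B, T, K * 2)
        out, state = self.rnn(seq)
        kp = self.head(out[:, -1])                    # first future step
        future = [kp]
        for _ in range(horizon - 1):                  # feed predictions back in
            out, state = self.rnn(kp.unsqueeze(1), state)
            kp = self.head(out[:, -1])
            future.append(kp)
        return torch.stack(future, dim=1).reshape(B, horizon, K, 2)

past = torch.rand(2, 8, 10, 2)                        # 8 observed frames
print(KeypointForecaster()(past).shape)               # torch.Size([2, 5, 10, 2])
```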
1 code implementation • ICCV 2019 • Sungho Lee, Seoung Wug Oh, DaeYeun Won, Seon Joo Kim
We propose a novel DNN-based framework called the Copy-and-Paste Networks for video inpainting that takes advantage of additional information in other frames of the video.
Ranked #4 on Video Inpainting on YouTube-VOS 2018 val
1 code implementation • ICCV 2019 • Seoung Wug Oh, Sungho Lee, Joon-Young Lee, Seon Joo Kim
Given a set of reference images and a target image with holes, our network fills the hole by referring to the contents of the reference images.
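The entry above describes filling a hole by referring to reference images. A simplified, single-step attention sketch of that idea (not the paper's progressive onion-peel scheme) is shown below; encoders are stubbed with random feature maps.

```python
import torch
import torch.nn.functional as F

def reference_fill(target_feat, target_hole, ref_feats):
    """Fill hole locations of the target feature map by attending to reference features.

    target_feat: [C, H, W]    features of the frame with holes
    target_hole: [H, W]       boolean mask, True where content is missing
    ref_feats:   [R, C, H, W] features of R reference frames (assumed valid there)
    """
    C, H, W = target_feat.shape
    q = target_feat.reshape(C, -1)[:, target_hole.flatten()]        # [C, Nq] hole queries
    kv = ref_feats.reshape(-1, C, H * W).permute(1, 0, 2).reshape(C, -1)  # [C, R*H*W]
    attn = F.softmax(q.t() @ kv / C ** 0.5, dim=-1)                 # [Nq, R*H*W]
    filled = target_feat.clone().reshape(C, -1)
    filled[:, target_hole.flatten()] = (attn @ kv.t()).t()          # copy attended content
    return filled.reshape(C, H, W)

feat = torch.randn(64, 16, 16)
hole = torch.zeros(16, 16, dtype=torch.bool); hole[4:10, 4:10] = True
refs = torch.randn(3, 64, 16, 16)
print(reference_fill(feat, hole, refs).shape)    # torch.Size([64, 16, 16])
```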
1 code implementation • CVPR 2019 • Seoung Wug Oh, Joon-Young Lee, Ning Xu, Seon Joo Kim
We propose a new multi-round training scheme for interactive video object segmentation so that the networks learn to understand the user's intention and to update incorrect estimations during training.
Ranked #6 on Interactive Video Object Segmentation on DAVIS 2017 (AUC-J metric)
3 code implementations • ICCV 2019 • Seoung Wug Oh, Joon-Young Lee, Ning Xu, Seon Joo Kim
In our framework, the past frames with object masks form an external memory, and the current frame as the query is segmented using the mask information in the memory.
Ranked #4 on Interactive Video Object Segmentation on DAVIS 2017 (using extra training data)
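The entry above summarizes the space-time memory read: past frames with their masks are encoded as memory keys and values, and the query frame retrieves them by attention. A minimal sketch with stubbed encoders and assumed channel sizes:

```python
import torch
import torch.nn.functional as F

def memory_read(query_key, mem_keys, mem_values):
    """Space-time memory read via dot-product attention (shapes are illustrative).

    query_key:  [Ck, H, W]      key features of the current frame
    mem_keys:   [T, Ck, H, W]   key features of past frames (encoded with their masks)
    mem_values: [T, Cv, H, W]   value features of past frames
    """
    Ck, H, W = query_key.shape
    T, Cv = mem_values.shape[0], mem_values.shape[1]
    q = query_key.reshape(Ck, H * W)                       # [Ck, HW]
    k = mem_keys.reshape(T, Ck, H * W).permute(1, 0, 2).reshape(Ck, T * H * W)
    v = mem_values.reshape(T, Cv, H * W).permute(1, 0, 2).reshape(Cv, T * H * W)
    attn = F.softmax(q.t() @ k / Ck ** 0.5, dim=-1)        # [HW, T*HW]
    read = (v @ attn.t()).reshape(Cv, H, W)                # retrieved memory values
    return torch.cat([read, query_key], dim=0)             # concat for the decoder (simplified)

qk = torch.randn(128, 24, 24)
mk = torch.randn(4, 128, 24, 24)        # 4 memory frames
mv = torch.randn(4, 512, 24, 24)
print(memory_read(qk, mk, mv).shape)    # torch.Size([640, 24, 24])
```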
no code implementations • CVPR 2019 • Seonghyeon Nam, Chongyang Ma, Menglei Chai, William Brendel, Ning Xu, Seon Joo Kim
Time-lapse videos usually contain visually appealing content but are often difficult and costly to create.
no code implementations • NeurIPS 2018 • Seonghyeon Nam, Yunji Kim, Seon Joo Kim
Our task aims to semantically modify visual attributes of an object in an image according to the text describing the new visual appearance.
no code implementations • ECCV 2018 • Minho Shim, Young Hwi Kim, Kyung-Min Kim, Seon Joo Kim
A major obstacle in teaching machines to understand videos is the lack of training data, as creating temporal annotations for long videos requires a huge amount of human effort.
1 code implementation • CVPR 2018 • Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, Seon Joo Kim
We propose a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, which are computed depending on the local spatio-temporal neighborhood of each pixel to avoid explicit motion compensation.
Ranked #3 on Video Super-Resolution on Vid4 - 4x upscaling
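To illustrate the dynamic upsampling filters mentioned above, the sketch below applies predicted per-pixel filters to each low-resolution neighborhood and adds a residual image; the network that predicts the filters and residual is replaced with random tensors, and a single channel is used for simplicity.

```python
import torch
import torch.nn.functional as F

def dynamic_upsample(lr, filters, residual, scale=4, k=5):
    """Apply per-pixel dynamic upsampling filters, then add a residual image.

    lr:       [B, 1, H, W]                low-res input channel
    filters:  [B, scale*scale, k*k, H, W] softmax-normalized filters per LR pixel
    residual: [B, 1, scale*H, scale*W]    predicted high-frequency residual
    """
    B, _, H, W = lr.shape
    patches = F.unfold(lr, k, padding=k // 2).view(B, 1, k * k, H, W)
    # weighted sum of each k x k neighborhood with its own filter, one per sub-pixel
    hr = (filters * patches).sum(dim=2)                    # [B, scale*scale, H, W]
    hr = F.pixel_shuffle(hr, scale)                        # [B, 1, scale*H, scale*W]
    return hr + residual

B, H, W, scale, k = 2, 16, 16, 4, 5
lr = torch.rand(B, 1, H, W)
filters = torch.softmax(torch.randn(B, scale * scale, k * k, H, W), dim=2)
residual = torch.randn(B, 1, scale * H, scale * W) * 0.01
print(dynamic_upsample(lr, filters, residual, scale, k).shape)  # [2, 1, 64, 64]
```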
2 code implementations • CVPR 2018 • Seoung Wug Oh, Joon-Young Lee, Kalyan Sunkavalli, Seon Joo Kim
We validate our method on four benchmark sets that cover single and multiple object segmentation.
2 code implementations • CVPR 2018 • Changha Shin, Hae-Gon Jeon, Youngjin Yoon, In So Kweon, Seon Joo Kim
Light field cameras capture both the spatial and the angular properties of light rays in space.
no code implementations • ICCV 2017 • Seonghyeon Nam, Seon Joo Kim
Often called radiometric calibration, the process of recovering RAW images from processed images (JPEG format in the sRGB color space) is essential for many computer vision tasks that rely on physically accurate radiance values.
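The paper models the full in-camera processing pipeline; the sketch below shows only the textbook core of going from sRGB back toward RAW-like values (inverse sRGB gamma plus an assumed inverse color correction matrix), to illustrate what such a recovery entails.

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the standard sRGB tone curve (values in [0, 1])."""
    low = srgb <= 0.04045
    return np.where(low, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)

def approximate_raw(srgb, ccm):
    """Very rough RAW recovery: undo gamma, then undo an assumed 3x3 color matrix.

    A real camera pipeline also includes white balance, tone mapping, and gamut
    clipping, which the paper's method models; this is only the textbook core.
    """
    linear = srgb_to_linear(srgb)
    return np.clip(linear.reshape(-1, 3) @ np.linalg.inv(ccm).T, 0, 1).reshape(srgb.shape)

# hypothetical color correction matrix for illustration
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])
jpeg_like = np.random.rand(4, 4, 3)
print(approximate_raw(jpeg_like, ccm).shape)   # (4, 4, 3)
```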
no code implementations • 26 Jun 2017 • Seonghyeon Nam, Seon Joo Kim
Spatially varying photo adjustment methods have also been studied by exploiting high-level features and semantic label maps.
1 code implementation • 22 May 2017 • Hye-Rin Kim, Yeong-Seok Kim, Seon Joo Kim, In-Kwon Lee
In this paper, we focus on two high level features, the object and the background, and assume that the semantic information of images is a good cue for predicting emotion.
no code implementations • 29 Aug 2016 • Seoung Wug Oh, Seon Joo Kim
Computational color constancy refers to the problem of computing the illuminant color so that the images of a scene under varying illumination can be normalized to an image under the canonical illumination.
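As a baseline illustration of the normalization described above (not the paper's method), the sketch below estimates the illuminant with the gray-world assumption and applies a von Kries diagonal correction.

```python
import numpy as np

def gray_world_correct(img):
    """Estimate the illuminant with the gray-world assumption and normalize.

    img: linear RGB image as float array in [0, 1], shape [H, W, 3].
    Returns the image as it would appear under a canonical (neutral) illuminant.
    """
    illuminant = img.reshape(-1, 3).mean(axis=0)       # per-channel average
    illuminant /= illuminant.max() + 1e-12             # avoid amplifying everything
    corrected = img / (illuminant + 1e-12)             # von Kries diagonal correction
    return np.clip(corrected, 0, 1), illuminant

# toy scene under a warm (reddish) illuminant
scene = np.random.rand(16, 16, 3) * np.array([1.0, 0.8, 0.6])
balanced, est = gray_world_correct(scene)
print(est)         # roughly proportional to [1.0, 0.8, 0.6]
```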
no code implementations • CVPR 2016 • Seonghyeon Nam, Youngbae Hwang, Yasuyuki Matsushita, Seon Joo Kim
Modelling and analyzing noise in images is a fundamental task in many computer vision systems.
no code implementations • CVPR 2016 • Seoung Wug Oh, Michael S. Brown, Marc Pollefeys, Seon Joo Kim
In particular, due to the differences in spectral sensitivities of the cameras, different cameras yield different RGB measurements for the same spectral signal.
no code implementations • ICCV 2015 • Hae-Gon Jeon, Joon-Young Lee, Yudeog Han, Seon Joo Kim, In So Kweon
In this paper, we present a novel multi-image motion deblurring method utilizing the coded exposure technique.
1 code implementation • Pacific Graphics 2014 • Rang Nguyen, Seon Joo Kim, Michael S. Brown
Our method is unique in its consideration of the scene illumination and the constraint that the mapped image must be within the color gamut of the target image.
no code implementations • CVPR 2014 • Youngbae Hwang, Joon-Young Lee, In So Kweon, Seon Joo Kim
This paper introduces a new method for color transfer, the process of transferring the color of an image to match that of another image of the same scene.
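For context on the task (not the paper's method), a classic global baseline for color transfer simply matches per-channel statistics of the source image to the reference, as sketched below in RGB for simplicity.

```python
import numpy as np

def match_color_statistics(source, target):
    """Shift and scale each channel of `source` to match `target`'s mean and std.

    This is a classic global baseline (not the paper's method), applied in RGB
    for simplicity; Reinhard-style transfer works in a decorrelated color space.
    """
    src = source.reshape(-1, 3).astype(np.float64)
    tgt = target.reshape(-1, 3).astype(np.float64)
    out = (src - src.mean(0)) / (src.std(0) + 1e-12) * tgt.std(0) + tgt.mean(0)
    return np.clip(out, 0, 1).reshape(source.shape)

src = np.random.rand(32, 32, 3) * 0.5          # dull source image
ref = np.random.rand(32, 32, 3)                # reference with richer colors
print(match_color_statistics(src, ref).mean(0).mean(0))  # close to ref channel means
```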