1 code implementation • ECCV 2020 • Jae-Han Lee, Chang-Su Kim
To address these issues, we propose a loss rebalancing algorithm that adaptively initializes and rebalances the weight of each loss function in the course of training.
no code implementations • ECCV 2020 • Han-Ul Kim, Young Jun Koh, Chang-Su Kim
In particular, we propose a two-stage training scheme based on generative adversarial networks for unpaired learning.
1 code implementation • ECCV 2020 • Han-Ul Kim, Young Jun Koh, Chang-Su Kim
First, we represent various users' preferences for enhancement as feature vectors in an embedding space, called preference vectors.
1 code implementation • ECCV 2020 • Junheum Park, Keunsoo Ko, Chul Lee, Chang-Su Kim
We propose a novel deep-learning-based video interpolation algorithm built on bilateral motion estimation.
Ranked #21 on Video Frame Interpolation on MSU Video Frame Interpolation (VMAF metric)
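The bilateral-motion idea can be illustrated with a toy 1-D sketch (not the paper's network): the motion field is anchored at the intermediate frame, and each intermediate pixel samples the two input frames at symmetric displacements ±v/2 before blending. The function name and the nearest-neighbor sampling are assumptions made purely for illustration.

```python
import numpy as np

# Toy 1-D illustration of bilateral motion for frame interpolation:
# the motion field lives at the intermediate frame, and each pixel
# samples both inputs at symmetric displacements +-v/2.
def interpolate_bilateral(frame0, frame1, motion):
    n = len(frame0)
    out = np.empty(n)
    for x in range(n):
        v = motion[x]                                   # bilateral motion at x
        x0 = int(np.clip(round(x - v / 2), 0, n - 1))   # sample in frame 0
        x1 = int(np.clip(round(x + v / 2), 0, n - 1))   # sample in frame 1
        out[x] = 0.5 * (frame0[x0] + frame1[x1])        # blend both samples
    return out

# A pulse shifting right by 2 pixels per frame: the intermediate shift is 1.
frame0 = np.array([0., 0., 1., 0., 0., 0.])
frame1 = np.array([0., 0., 0., 0., 1., 0.])
mid = interpolate_bilateral(frame0, frame1, motion=np.full(6, 2.0))
```

With a correct motion field, the pulse lands halfway between its positions in the two input frames.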
1 code implementation • ICCV 2023 • Dongkwon Jin, Dahyun Kim, Chang-Su Kim
A novel algorithm to detect road lanes in videos, called the recursive video lane detector (RVLD), is proposed in this paper; it propagates the state of the current frame recursively to the next frame.
1 code implementation • CVPR 2023 • Junheum Park, Jintae Kim, Chang-Su Kim
First, in global motion estimation, we predict symmetric bilateral motion fields at a coarse scale.
Ranked #4 on Video Frame Interpolation on X4K1000FPS
no code implementations • 20 Mar 2023 • Jinyoung Jun, Jae-Han Lee, Chang-Su Kim
A typical monocular depth estimator is trained for a single camera, so its performance drops severely on images taken with different cameras.
1 code implementation • CVPR 2023 • Seungmin Jeon, Kwang Pyo Choi, Youngo Park, Chang-Su Kim
Trit-plane coding enables deep progressive image compression, but it cannot use autoregressive context models.
1 code implementation • ICCV 2023 • Keunsoo Ko, Chang-Su Kim
Through several masked self-attention and mask update (MSAU) layers, we predict initial inpainting results.
1 code implementation • 24 Aug 2022 • Wonhui Park, Dongkwon Jin, Chang-Su Kim
Eigencontours are the first data-driven contour descriptors based on singular value decomposition.
1 code implementation • 23 Aug 2022 • Jinyoung Jun, Jae-Han Lee, Chul Lee, Chang-Su Kim
We propose a novel algorithm for monocular depth estimation that decomposes a metric depth map into a normalized depth map and scale features.
Ranked #35 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)
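A minimal sketch of the decomposition idea, with min-max normalization standing in for the paper's learned normalization (the function names and scale representation are assumptions): a metric depth map is split into a scale-invariant normalized map plus per-image scale statistics, from which the metric map can be recovered.

```python
import numpy as np

# Split a metric depth map into a normalized (scale-invariant) map
# and per-image scale features; the two recombine to the metric map.
def decompose(depth):
    d_min, d_max = depth.min(), depth.max()
    normalized = (depth - d_min) / (d_max - d_min)   # values in [0, 1]
    return normalized, (d_min, d_max)                # map + scale features

def recompose(normalized, scale):
    d_min, d_max = scale
    return normalized * (d_max - d_min) + d_min

rng = np.random.default_rng(1)
depth = rng.uniform(0.5, 10.0, size=(4, 4))          # toy metric depth map
norm_map, scale = decompose(depth)
restored = recompose(norm_map, scale)
```

The normalized map can be learned from relative-depth data, while only the low-dimensional scale features depend on the camera.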
1 code implementation • ECCV 2020 • Dongkwon Jin, Jun-Tae Lee, Chang-Su Kim
A novel algorithm to detect semantic lines is proposed in this paper.
Ranked #2 on Line Detection on SEL
2 code implementations • CVPR 2022 • Wonhui Park, Dongkwon Jin, Chang-Su Kim
First, we construct a contour matrix containing all object boundaries in a training set.
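The construction can be sketched with a plain SVD on random stand-in data (the contour matrix shape, point count, and rank below are assumptions): the leading left singular vectors act as data-driven "eigencontours," and each boundary is then described by a handful of coefficients.

```python
import numpy as np

# Toy stand-in data: each contour is flattened into a column of 2K coordinates.
rng = np.random.default_rng(0)
K, num_contours, M = 32, 100, 8                  # points per contour, contours, rank

A = rng.standard_normal((2 * K, num_contours))   # contour matrix (one boundary per column)

# SVD of the contour matrix: the leading left singular vectors
# ("eigencontours") span a low-dimensional contour space.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
U_M = U[:, :M]                                   # first M eigencontours

# A contour is then described by M coefficients instead of 2K coordinates.
coeffs = U_M.T @ A[:, 0]                         # project the first contour
recon = U_M @ coeffs                             # reconstruct from M coefficients
err = np.linalg.norm(A[:, 0] - recon) / np.linalg.norm(A[:, 0])
```

On real boundary data the contour matrix is highly redundant, so a small M already gives low reconstruction error.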
2 code implementations • CVPR 2022 • Dongkwon Jin, Wonhui Park, Seong-Gyun Jeong, Heeyeon Kwon, Chang-Su Kim
Second, we generate a set of lane candidates by clustering the training lanes in the eigenlane space.
Ranked #28 on Lane Detection on TuSimple
no code implementations • 25 Mar 2022 • Seungmin Jeon, Jae-Han Lee, Chang-Su Kim
DPICT is the first learning-based image codec supporting fine granular scalability.
1 code implementation • CVPR 2022 • Nyeong-Ho Shin, Seon-Ho Lee, Chang-Su Kim
A novel ordinal regression algorithm, called moving window regression (MWR), is proposed in this paper.
Ranked #1 on Age Estimation on MORPH album2 (Caucasian)
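One way to picture the moving-window search, as a hedged toy (the oracle "regressor" below reads the true rank purely to show the loop structure; the actual method learns a relative-position regressor on image pairs): the rank estimate is refined by regressing the target's relative position between two references that bracket the current estimate, shrinking the window each round.

```python
import numpy as np

# Toy moving-window regression loop: refine a rank estimate by regressing
# the relative position rho in [-1, 1] within a shrinking reference window.
def mwr_search(true_rank, lo, hi, rounds=6):
    estimate = (lo + hi) / 2
    width = (hi - lo) / 2
    for _ in range(rounds):
        y_lo = max(lo, estimate - width)             # lower reference rank
        y_hi = min(hi, estimate + width)             # upper reference rank
        # Relative position within the window (oracle stands in for a network).
        rho = np.clip(2 * (true_rank - y_lo) / (y_hi - y_lo) - 1, -1, 1)
        estimate = (y_lo + y_hi) / 2 + rho * (y_hi - y_lo) / 2
        width /= 2                                   # shrink the window
    return estimate

age = mwr_search(true_rank=37, lo=0, hi=100)
```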
1 code implementation • CVPR 2022 • Jae-Han Lee, Seungmin Jeon, Kwang Pyo Choi, Youngo Park, Chang-Su Kim
We propose the deep progressive image compression using trit-planes (DPICT) algorithm, which is the first learning-based codec supporting fine granular scalability (FGS).
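The trit-plane idea can be sketched for a toy quantized latent (function names and the plane layout are assumptions): base-3 digits are transmitted most-significant plane first, so any prefix of the planes yields a coarser reconstruction, which is what makes fine granular scalability possible.

```python
import numpy as np

# Decompose quantized latent values into base-3 digit planes, MSB first.
def to_trit_planes(latent, num_planes):
    planes = []
    for p in range(num_planes - 1, -1, -1):
        planes.append((latent // 3 ** p) % 3)
    return planes

# Reconstruct from any prefix of planes; unreceived trits are centered.
def from_trit_planes(planes, num_planes):
    value = np.zeros_like(planes[0])
    for i, plane in enumerate(planes):
        value += plane * 3 ** (num_planes - 1 - i)
    received = len(planes)
    value += (3 ** (num_planes - received) - 1) // 2   # midpoint of the unknown rest
    return value

latent = np.array([0, 7, 13, 26])                # values in [0, 3**3)
planes = to_trit_planes(latent, num_planes=3)
exact = from_trit_planes(planes, 3)              # all planes -> exact value
coarse = from_trit_planes(planes[:2], 3)         # prefix -> approximation
```

Truncating the bitstream after any plane still decodes, just at lower fidelity; each missing trit adds at most a bounded quantization error.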
1 code implementation • 13 Sep 2021 • Keunsoo Ko, Chang-Su Kim
A CNN-based interactive contrast enhancement algorithm, called IceNet, is proposed in this work, which enables a user to adjust image contrast easily according to his or her preference.
1 code implementation • ICCV 2021 • Junheum Park, Chul Lee, Chang-Su Kim
First, we predict symmetric bilateral motion fields to interpolate an anchor frame.
Ranked #8 on Video Frame Interpolation on Vimeo90K
1 code implementation • CVPR 2021 • Yuk Heo, Yeong Jun Koh, Chang-Su Kim
We propose a novel guided interactive segmentation (GIS) algorithm for video objects to improve the segmentation accuracy and reduce the interaction time.
Ranked #2 on Interactive Video Object Segmentation on DAVIS 2017 (using extra training data)
1 code implementation • CVPR 2021 • Dongkwon Jin, Wonhui Park, Seong-Gyun Jeong, Chang-Su Kim
A novel algorithm to detect an optimal set of semantic lines is proposed in this work.
Ranked #1 on Line Detection on SEL
1 code implementation • ICCV 2021 • Jae-Han Lee, Chul Lee, Chang-Su Kim
We propose a novel loss weighting algorithm, called loss scale balancing (LSB), for multi-task learning (MTL) of pixelwise vision tasks.
1 code implementation • IEEE International Conference on Computer Vision 2021 • HanUl Kim, Su-Min Choi, Chang-Su Kim, Yeong Jun Koh
Recently, encoder-decoder and intensity transformation approaches have led to impressive progress in image enhancement.
no code implementations • ICLR 2021 • Seon-Ho Lee, Chang-Su Kim
We propose the deep repulsive clustering (DRC) algorithm of ordered data for effective order learning.
Ranked #3 on Age Estimation on MORPH album2 (Caucasian)
1 code implementation • 17 Jul 2020 • Junheum Park, Keunsoo Ko, Chul Lee, Chang-Su Kim
We propose a novel deep-learning-based video interpolation algorithm built on bilateral motion estimation.
Ranked #3 on Video Frame Interpolation on Middlebury
4 code implementations • ECCV 2020 • Yuk Heo, Yeong Jun Koh, Chang-Su Kim
The global transfer module conveys the segmentation information in an annotated frame to a target frame, while the local transfer module propagates the segmentation information in a temporally adjacent frame to the target frame.
Ranked #3 on Interactive Video Object Segmentation on DAVIS 2017 (using extra training data)
1 code implementation • ICLR 2020 • Kyungsun Lim, Nyeong-Ho Shin, Young-Yoon Lee, Chang-Su Kim
We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes.
Ranked #8 on Age Estimation on MORPH album2 (Caucasian)
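A hedged sketch of rank estimation by pairwise ordering in the spirit of order learning (the scalar-score comparator and the voting rule below are toy assumptions; the actual method learns a ternary comparator network on images): a comparator labels the instance as greater than, similar to, or less than each reference, and the candidate rank most consistent with those labels wins.

```python
# Ternary comparator: 'greater', 'similar', or 'less' within threshold tau.
def compare(a, b, tau=0.5):
    if a > b + tau:
        return 1
    if a < b - tau:
        return -1
    return 0

def estimate_rank(instance, references):
    # references: list of (score, rank) pairs; pick the candidate rank whose
    # orderings against the references agree most with the instance's.
    ranks = sorted({r for _, r in references})

    def consistency(cand):
        return sum(compare(instance, s) == compare(cand, r)
                   for s, r in references)

    return max(ranks, key=consistency)

refs = [(1.0, 1), (2.0, 2), (3.0, 3), (4.0, 4), (5.0, 5)]
rank = estimate_rank(3.1, refs)
```

Learning "greater/similar/less" between pairs is often easier than regressing an absolute rank, which is the motivation the paper gives for order learning.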
no code implementations • ECCV 2018 • Yeong Jun Koh, Young-Yoon Lee, Chang-Su Kim
A novel algorithm to segment out objects in a video sequence is proposed in this work.
no code implementations • ECCV 2018 • Minhyeok Heo, Jae-Han Lee, Kyung-Rae Kim, Han-Ul Kim, Chang-Su Kim
We propose a monocular depth estimation algorithm, which extracts a depth map from a single image, based on whole strip masking (WSM) and reliability-based refinement.
1 code implementation • CVPR 2018 • Jae-Han Lee, Minhyeok Heo, Kyung-Rae Kim, Chang-Su Kim
We propose a deep learning algorithm for single-image depth estimation based on the Fourier frequency domain analysis.
1 code implementation • ICCV 2017 • Jun-Tae Lee, Han-Ul Kim, Chul Lee, Chang-Su Kim
Then, we develop the line pooling layer to extract a feature vector for each candidate line from the feature maps.
Ranked #3 on Line Detection on SEL
no code implementations • ICCV 2017 • Yeong Jun Koh, Chang-Su Kim
A novel online algorithm to segment multiple objects in a video sequence is proposed in this work.
no code implementations • ICCV 2017 • Se-Ho Lee, Won-Dong Jang, Chang-Su Kim
A temporal superpixel algorithm based on proximity-weighted patch matching (TS-PPM) is proposed in this work.
no code implementations • CVPR 2017 • Yeong Jun Koh, Chang-Su Kim
A novel algorithm to segment a primary object in a video sequence is proposed in this work.
no code implementations • CVPR 2017 • Won-Dong Jang, Chang-Su Kim
A semi-supervised online video object segmentation algorithm, which accepts user annotations about a target object at the first frame, is proposed in this work.
Ranked #69 on Semi-Supervised Video Object Segmentation on DAVIS 2016
no code implementations • CVPR 2017 • Se-Ho Lee, Won-Dong Jang, Chang-Su Kim
We initialize superpixel labels in each frame by transferring those in the previous frame and refine the labels to make superpixels temporally consistent as well as compatible with object contours.
no code implementations • CVPR 2016 • Won-Dong Jang, Chulwoo Lee, Chang-Su Kim
Then, we minimize a hybrid of the three energies to separate a primary object from its background.
no code implementations • CVPR 2016 • Yeong Jun Koh, Won-Dong Jang, Chang-Su Kim
By superposing the foreground and background features, we build the object recurrence model, the background model, and the primary object model.
no code implementations • ICCV 2015 • Han-Ul Kim, Dae-Youn Lee, Jae-Young Sim, Chang-Su Kim
The patch weights represent the importance of each patch in the description of foreground information, and are used to construct an object descriptor, called spatially ordered and weighted patch (SOWP) descriptor.
no code implementations • CVPR 2015 • Dae-Youn Lee, Jae-Young Sim, Chang-Su Kim
The notion of multihypothesis trajectory analysis (MTA) for robust visual tracking is proposed in this work.
no code implementations • CVPR 2015 • Chulwoo Lee, Won-Dong Jang, Jae-Young Sim, Chang-Su Kim
A graph-based system to simulate the movements and interactions of multiple random walkers (MRW) is proposed in this work.
no code implementations • CVPR 2014 • Dae-Youn Lee, Jae-Young Sim, Chang-Su Kim
A novel visual tracking algorithm using patch-based appearance models is proposed in this paper.