1 code implementation • CVPR 2024 • Changjin Kim, Tae Hyun Kim, Sungyong Baik
Removing noise from images, a.k.a. image denoising, can be a very challenging task, since the type and amount of noise can vary greatly from image to image due to many factors, including the camera model and capturing environment.
2 code implementations • 4 Dec 2024 • Eun Woo Im, Junsung Shin, Sungyong Baik, Tae Hyun Kim
To account for such uncertainties and factors involved in haze degradation, we introduce a variational Bayesian framework for single image dehazing.
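For background, single-image dehazing is commonly posed on top of the standard atmospheric scattering model; a minimal sketch of that generic degradation model (not this paper's specific variational Bayesian formulation) is:

I(x) = J(x)·t(x) + A·(1 − t(x)),  with  t(x) = exp(−β·d(x)),

where I is the observed hazy image, J the haze-free scene radiance, A the global atmospheric light, t the transmission, β the scattering coefficient, and d the scene depth.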
1 code implementation • CVPR 2024 • Muhammad Kashif Ali, Eun Woo Im, DongJin Kim, Tae Hyun Kim
Video stabilization is a longstanding computer vision problem; in particular, pixel-level synthesis solutions, which synthesize full frames, add to the complexity of this task.
no code implementations • 20 Nov 2023 • Young Jae Oh, Jihun Kim, Tae Hyun Kim
Single image super-resolution (SISR) has experienced significant advancements, primarily driven by deep convolutional networks.
1 code implementation • 31 Jan 2023 • Seunghwan Lee, Tae Hyun Kim
Although several real-world noisy datasets have been presented, the number of training datasets (i.e., pairs of clean and real noisy images) is limited, and acquiring more real-noise datasets is laborious and expensive.
no code implementations • ICCV 2023 • Eunhye Lee, Jinsu Yoo, Yunjeong Yang, Sungyong Baik, Tae Hyun Kim
Recent learning-based video inpainting approaches have achieved considerable progress.
no code implementations • ICCV 2023 • Muhammad Kashif Ali, DongJin Kim, Tae Hyun Kim
In many video restoration/translation tasks, image processing operations are naïvely extended to the video domain by processing each frame independently, disregarding the temporal connection of the video frames.
1 code implementation • 15 Mar 2022 • Jinsu Yoo, TaeHoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim
Recent transformer-based super-resolution (SR) methods have achieved promising results against conventional CNN-based methods.
no code implementations • 25 Oct 2021 • Eunhye Lee, Jeongmu Kim, Jisu Kim, Tae Hyun Kim
Recent image inpainting methods have shown promising results due to the power of deep learning, which can explore external information available from the large training dataset.
1 code implementation • 18 Mar 2021 • Jinsu Yoo, Tae Hyun Kim
Recent single-image super-resolution (SISR) networks, which can adapt their network parameters to specific input images, have shown promising results by exploiting the information available within the input data as well as large external datasets.
no code implementations • 16 Feb 2021 • Eunhye Lee, Jeongmu Kim, Jisu Kim, Tae Hyun Kim
Recent image inpainting methods show promising results due to the power of deep learning, which can explore external information available from a large training dataset.
no code implementations • 22 Jan 2021 • Seobin Park, Tae Hyun Kim
We propose a new approach for the image super-resolution (SR) task that progressively restores a high-resolution (HR) image from an input low-resolution (LR) image on the basis of a neural ordinary differential equation.
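As a rough illustration of this idea (not the paper's architecture), SR can be viewed as integrating an image-valued ODE whose right-hand side is a small CNN; the following minimal sketch uses a fixed-step Euler solver, and the network size, step count, and bicubic initialization are all illustrative assumptions.

```python
# Illustrative sketch only: progressive SR viewed as integrating an
# image-valued ODE dz/dt = f(z) with a fixed-step Euler solver.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DerivativeNet(nn.Module):
    """Small CNN predicting dz/dt for the current HR estimate z."""
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, z):
        return self.body(z)

def super_resolve(lr, f, scale=4, steps=8):
    # Start from a bicubic upsample of the LR input, then take Euler steps
    # z <- z + (1/steps) * f(z) to progressively refine the HR estimate.
    z = F.interpolate(lr, scale_factor=scale, mode="bicubic",
                      align_corners=False)
    dt = 1.0 / steps
    for _ in range(steps):
        z = z + dt * f(z)
    return z

lr = torch.rand(1, 3, 32, 32)            # toy LR input
sr = super_resolve(lr, DerivativeNet())  # -> shape (1, 3, 128, 128)
```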
1 code implementation • 19 Nov 2020 • Muhammad Kashif Ali, Sangjoon Yu, Tae Hyun Kim
Despite the advances in the field of generative models in computer vision, video stabilization still lacks a purely regressive deep-learning-based formulation.
1 code implementation • CVPR 2020 • Myungsub Choi, Janghoon Choi, Sungyong Baik, Tae Hyun Kim, Kyoung Mu Lee
Finally, we show that our meta-learning framework can be easily employed to any video frame interpolation network and can consistently improve its performance on multiple benchmark datasets.
no code implementations • 9 Mar 2020 • Seunghwan Lee, Dongkyu Lee, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
However, these methods have limitations in using internal information available in a given test image.
no code implementations • CVPR 2021 • Seunghwan Lee, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
We analyze the restoration performance of the fine-tuned video denoising networks with the proposed self-supervision-based learning algorithm, and demonstrate that the FCN can utilize recurring patches without requiring accurate registration among adjacent frames.
no code implementations • 9 Jan 2020 • Seunghwan Lee, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
Under certain statistical assumptions on the noise, recent self-supervised approaches for denoising have been introduced to learn network parameters without true clean images, and these methods can restore an image by exploiting information available from the given input (i.e., internal statistics) at test time.
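To make the idea concrete, here is a minimal sketch of one common self-supervised strategy of this kind (masked-pixel, "blind-spot"-style training on the noisy input itself); it only illustrates learning without clean targets and is not necessarily the exact scheme used in this work.

```python
# Illustrative blind-spot-style self-supervised denoising step: the network
# learns to predict hidden pixels of the noisy image from their surroundings,
# so no clean target is required.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallDenoiser(nn.Module):
    def __init__(self, ch=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def self_supervised_step(model, noisy, opt, mask_ratio=0.02):
    # Randomly pick pixels to hide and replace them with a local average,
    # so the network cannot trivially copy their noisy values.
    mask = (torch.rand_like(noisy[:, :1]) < mask_ratio).float()
    blurred = F.avg_pool2d(noisy, kernel_size=3, stride=1, padding=1)
    masked_input = noisy * (1 - mask) + blurred * mask
    pred = model(masked_input)
    loss = (((pred - noisy) ** 2) * mask).sum() / mask.sum().clamp(min=1)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

model = SmallDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
noisy = torch.rand(1, 3, 64, 64)          # the given noisy test image
for _ in range(10):                       # fit to its internal statistics
    self_supervised_step(model, noisy, opt)
```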
1 code implementation • ECCV 2020 • Seobin Park, Jinsu Yoo, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
In the training stage, we train the network via meta-learning; thus, the network can quickly adapt to any input image at test time.
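A minimal sketch of how such test-time adaptation could look after meta-training, assuming MAML-style inner gradient steps on a self-supervised loss built by re-downscaling the test LR image; the model, loss, and step counts are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: fine-tune a copy of the meta-trained model on the test
# LR image itself (downscale LR and learn to recover it), then run inference.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    """Toy x2 SR network standing in for the meta-trained model."""
    def __init__(self, scale=2, ch=32):
        super().__init__()
        self.scale = scale
        self.conv1 = nn.Conv2d(3, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        return x + self.conv2(F.relu(self.conv1(x)))

def adapt_and_infer(meta_model, lr_img, inner_steps=5, inner_lr=1e-4):
    model = copy.deepcopy(meta_model)            # keep meta weights intact
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    lr_child = F.interpolate(lr_img, scale_factor=1.0 / model.scale,
                             mode="bicubic", align_corners=False)
    for _ in range(inner_steps):                 # inner-loop adaptation
        opt.zero_grad()
        loss = F.l1_loss(model(lr_child), lr_img)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model(lr_img)                     # SR with adapted weights

lr_img = torch.rand(1, 3, 48, 48)
sr = adapt_and_infer(TinySR(), lr_img)           # -> shape (1, 3, 96, 96)
```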
no code implementations • 31 Mar 2019 • Jonathan Samuel Lumentut, Tae Hyun Kim, Ravi Ramamoorthi, In Kyu Park
Restoring a sharp light field image from its blurry input has become essential due to the increasing popularity of parallax-based image processing.
no code implementations • ECCV 2018 • Tae Hyun Kim, Mehdi S. M. Sajjadi, Michael Hirsch, Bernhard Schölkopf
State-of-the-art video restoration methods integrate optical flow estimation networks to utilize temporal information.
no code implementations • ICCV 2017 • Tae Hyun Kim, Kyoung Mu Lee, Bernhard Schölkopf, Michael Hirsch
We show the superiority of the proposed method in an extensive experimental evaluation.
1 code implementation • CVPR 2017 • Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee
To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions, such as that the blur kernel is partially uniform or locally linear.
Ranked #18 on Deblurring on RealBlur-R (trained on GoPro), SSIM (sRGB) metric.
no code implementations • 29 Nov 2016 • Byeongjoo Ahn, Tae Hyun Kim, Wonsik Kim, Kyoung Mu Lee
We also provide a novel analysis of the blur kernel at object boundaries, which shows distinctive characteristics of the blur kernel that cannot be captured by conventional blur models.
no code implementations • 14 Mar 2016 • Tae Hyun Kim, Seungjun Nah, Kyoung Mu Lee
We infer bidirectional optical flows to handle motion blurs, and also estimate Gaussian blur maps to remove optical blur from defocus in our new blur model.
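In generic terms, such a model composes motion blur accumulated along the flow with spatially varying defocus blur; a rough, illustrative formulation (the notation is mine, not the paper's exact model) is:

B(x) ≈ ( g_{σ(x)} ∗ [ (1/τ) ∫_{−τ/2}^{τ/2} L(x + t·u(x)) dt ] )(x),

where L is the latent sharp frame, u(x) the (bidirectional) optical flow at pixel x, τ the exposure time, and g_{σ(x)} a Gaussian kernel whose width comes from a spatially varying defocus blur map σ(x).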
no code implementations • CVPR 2015 • Tae Hyun Kim, Kyoung Mu Lee
Unlike other methods, we propose a video deblurring method that deals with the general blurs inherent in dynamic scenes.
no code implementations • CVPR 2014 • Tae Hyun Kim, Kyoung Mu Lee
Thus, we propose a new energy model that simultaneously estimates the motion flow and the latent image based on a robust total variation (TV)-L1 model.
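As a generic illustration of such a joint energy (my notation, not the paper's exact objective), a TV-L1-style model couples a blur-aware data term with total-variation priors on both the latent image and the flow:

E(L, u) = Σ_x | (k_{u(x)} ∗ L)(x) − B(x) | + λ·|∇L(x)| + μ·|∇u(x)|,

where B is the observed blurry frame, L the latent sharp frame, u the motion flow, k_{u(x)} a blur kernel determined by the local motion, and λ, μ regularization weights.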