Search Results for author: JoonKyu Park

Found 8 papers, 3 papers with code

Rethinking RGB Color Representation for Image Restoration Models

no code implementations 5 Feb 2024 Jaerin Lee, JoonKyu Park, Sungyong Baik, Kyoung Mu Lee

Image restoration models are typically trained with a pixel-wise distance loss defined over the RGB color representation space, which is well known to be a source of blurry and unrealistic textures in the restored images.

Image Restoration
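The abstract above refers to a pixel-wise distance loss over RGB values. As a minimal illustrative sketch (not code from the paper), such a loss averages per-pixel, per-channel absolute differences between the restored and ground-truth images:

```python
import numpy as np

def pixelwise_l1_rgb(pred, target):
    """Mean absolute per-pixel difference over (H, W, 3) RGB arrays.

    Illustrative only: this is the kind of pixel-wise RGB distance loss
    the paper argues leads to blurry, unrealistic textures.
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return np.abs(pred - target).mean()

# Toy example: an all-zeros prediction against an all-ones target.
print(pixelwise_l1_rgb(np.zeros((4, 4, 3)), np.ones((4, 4, 3))))  # 1.0
```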

Recovering 3D Hand Mesh Sequence from a Single Blurry Image: A New Dataset and Temporal Unfolding

1 code implementation CVPR 2023 Yeonguk Oh, JoonKyu Park, Jaeha Kim, Gyeongsik Moon, Kyoung Mu Lee

In addition to the new dataset, we propose BlurHandNet, a baseline network for accurate 3D hand mesh recovery from a blurry hand image.

Content-Aware Local GAN for Photo-Realistic Super-Resolution

no code implementations ICCV 2023 JoonKyu Park, Sanghyun Son, Kyoung Mu Lee

Recently, GANs have successfully contributed to making single-image super-resolution (SISR) methods produce more realistic images.

Image Super-Resolution

Pay Attention to Hidden States for Video Deblurring: Ping-Pong Recurrent Neural Networks and Selective Non-Local Attention

no code implementations 30 Mar 2022 JoonKyu Park, Seungjun Nah, Kyoung Mu Lee

When motion blur is strong, however, hidden states struggle to deliver proper information due to the displacement between different frames.


HandOccNet: Occlusion-Robust 3D Hand Mesh Estimation Network

no code implementations CVPR 2022 JoonKyu Park, Yeonguk Oh, Gyeongsik Moon, Hongsuk Choi, Kyoung Mu Lee

However, we argue that occluded regions have strong correlations with hands, so they can provide highly beneficial information for complete 3D hand mesh estimation.

hand-object pose

Recurrence-in-Recurrence Networks for Video Deblurring

no code implementations 12 Mar 2022 JoonKyu Park, Seungjun Nah, Kyoung Mu Lee

State-of-the-art video deblurring methods often adopt recurrent neural networks to model the temporal dependency between the frames.

