Search Results for author: Seungjun Nah

Found 18 papers, 6 papers with code

Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models

no code implementations • ICCV 2023 • Songwei Ge, Seungjun Nah, Guilin Liu, Tyler Poon, Andrew Tao, Bryan Catanzaro, David Jacobs, Jia-Bin Huang, Ming-Yu Liu, Yogesh Balaji

Despite tremendous progress in generating high-quality images using diffusion models, synthesizing a sequence of animated frames that are both photorealistic and temporally coherent is still in its infancy.

Image Generation · Text-to-Video Generation +1

CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution

1 code implementation • 21 Jul 2022 • Cheeun Hong, Sungyong Baik, Heewon Kim, Seungjun Nah, Kyoung Mu Lee

In this work, to achieve high average bit-reduction with less accuracy loss, we propose a novel Content-Aware Dynamic Quantization (CADyQ) method for SR networks that allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.

Image Super-Resolution · Quantization
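
The CADyQ summary above describes allocating bit-widths to local regions and layers based on image content. Below is a minimal Python sketch of that idea, assuming a simple gradient-magnitude policy and uniform fake quantization; the `pick_bits` heuristic, candidate bit-widths, and patch size are illustrative stand-ins for the learned bit selector in the paper.

```python
import torch

def fake_quantize(x, bits):
    """Uniform fake quantization: map to 2**bits levels, then dequantize."""
    qmax = 2 ** bits - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    return torch.round((x - x_min) / scale).clamp(0, qmax) * scale + x_min

def pick_bits(patch, candidates=(4, 6, 8)):
    """Choose a bit-width by local gradient magnitude (illustrative stand-in
    for the learned content-aware bit selector)."""
    gx = (patch[..., :, 1:] - patch[..., :, :-1]).abs().mean()
    gy = (patch[..., 1:, :] - patch[..., :-1, :]).abs().mean()
    edge = (gx + gy).item()
    if edge < 0.05:
        return candidates[0]      # smooth patch: fewer bits
    if edge < 0.15:
        return candidates[1]
    return candidates[-1]         # textured patch: more bits

def quantize_per_patch(feat, patch=16):
    """Fake-quantize a feature map patch by patch with content-dependent bit-widths."""
    out = feat.clone()
    for i in range(0, feat.shape[-2], patch):
        for j in range(0, feat.shape[-1], patch):
            block = feat[..., i:i + patch, j:j + patch]
            out[..., i:i + patch, j:j + patch] = fake_quantize(block, pick_bits(block))
    return out

features = torch.rand(1, 16, 64, 64)   # toy feature map
quantized = quantize_per_patch(features)
```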

Pay Attention to Hidden States for Video Deblurring: Ping-Pong Recurrent Neural Networks and Selective Non-Local Attention

no code implementations • 30 Mar 2022 • JoonKyu Park, Seungjun Nah, Kyoung Mu Lee

When motion blur is strong, however, hidden states struggle to deliver proper information due to the displacement between frames.

Deblurring · Video Deblurring

Recurrence-in-Recurrence Networks for Video Deblurring

no code implementations • 12 Mar 2022 • JoonKyu Park, Seungjun Nah, Kyoung Mu Lee

State-of-the-art video deblurring methods often adopt recurrent neural networks to model the temporal dependency between the frames.

Deblurring · Video Deblurring

NTIRE 2021 Challenge on Video Super-Resolution

no code implementations • 30 Apr 2021 • Sanghyun Son, Suyoung Lee, Seungjun Nah, Radu Timofte, Kyoung Mu Lee

Super-Resolution (SR) is a fundamental computer vision task that aims to obtain a high-resolution clean image from the given low-resolution counterpart.

Video Super-Resolution

NTIRE 2021 Challenge on Image Deblurring

no code implementations • 30 Apr 2021 • Seungjun Nah, Sanghyun Son, Suyoung Lee, Radu Timofte, Kyoung Mu Lee

In this challenge report, we describe the challenge specifics and the evaluation results from the 2 competition tracks with the proposed solutions.

Deblurring · Image Deblurring

AIM 2020 Challenge on Video Temporal Super-Resolution

no code implementations • 28 Sep 2020 • Sanghyun Son, Jaerin Lee, Seungjun Nah, Radu Timofte, Kyoung Mu Lee

Videos in the real world contain various dynamics and motions that may look unnaturally discontinuous in time when the recorded frame rate is low.

Super-Resolution

AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results

no code implementations • 4 May 2020 • Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee

Videos contain various types and strengths of motions that may look unnaturally discontinuous in time when the recorded frame rate is low.

Super-Resolution

Recurrent Neural Networks With Intra-Frame Iterations for Video Deblurring

no code implementations • CVPR 2019 • Seungjun Nah, Sanghyun Son, Kyoung Mu Lee

In this work, we aim to improve the accuracy of recurrent models by adapting the hidden states transferred from past frames to the frame being processed, so that the relations between video frames can be better exploited.

Deblurring · Video Deblurring
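
The excerpt above is about adapting the hidden state carried over from past frames before it is used on the current frame. The sketch below illustrates that general idea in PyTorch with a toy recurrent deblurring cell; the `HiddenStateAdapter` module, channel widths, and iteration count are hypothetical and much simpler than the architecture in the paper.

```python
import torch
import torch.nn as nn

class HiddenStateAdapter(nn.Module):
    """Refines the hidden state from the previous frame, conditioned on the
    current frame's features, before it is fused (simplified stand-in for the
    intra-frame iteration idea)."""
    def __init__(self, channels):
        super().__init__()
        self.adapt = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, hidden, current_feat, iterations=2):
        # A few refinement iterations within the current frame.
        for _ in range(iterations):
            hidden = hidden + self.adapt(torch.cat([hidden, current_feat], dim=1))
        return hidden

class RecurrentDeblurCell(nn.Module):
    """One recurrent step: encode the blurry frame, adapt the hidden state, decode."""
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Conv2d(3, channels, 3, padding=1)
        self.adapter = HiddenStateAdapter(channels)
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, frame, hidden):
        feat = torch.relu(self.encode(frame))
        hidden = self.adapter(hidden, feat)
        return frame + self.decode(hidden), hidden   # residual prediction

# Toy run over a short clip of 5 frames.
cell = RecurrentDeblurCell()
frames = torch.rand(5, 1, 3, 64, 64)
hidden = torch.zeros(1, 32, 64, 64)
for t in range(frames.shape[0]):
    restored, hidden = cell(frames[t], hidden)
```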

Clustering Convolutional Kernels to Compress Deep Neural Networks

1 code implementation • ECCV 2018 • Sanghyun Son, Seungjun Nah, Kyoung Mu Lee

In this paper, we propose a novel method to compress CNNs by reconstructing the network from a small set of spatial convolution kernels.

Clustering · General Classification +1
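
As a rough illustration of reconstructing a network from a small set of shared spatial kernels, the sketch below clusters all 3x3 kernels of a toy model with k-means and snaps each kernel to its centroid. It omits the transform-invariant clustering and fine-tuning discussed in the paper; `cluster_conv_kernels` and the centroid count are illustrative choices.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def cluster_conv_kernels(model, n_centroids=16):
    """Replace every 3x3 spatial kernel with the nearest of `n_centroids`
    shared centroids, so the network is rebuilt from a small kernel set
    (simplified sketch; no transform handling or fine-tuning)."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d) and module.kernel_size == (3, 3):
            w = module.weight.data                        # (out, in, 3, 3)
            flat = w.view(-1, 9).cpu().numpy()            # one row per spatial kernel
            km = KMeans(n_clusters=n_centroids, n_init=10).fit(flat)
            shared = torch.from_numpy(km.cluster_centers_).float()
            labels = torch.from_numpy(km.labels_).long()
            module.weight.data = shared[labels].view_as(w)  # kernels now share centroids
    return model

# Example: cluster the kernels of a small toy network.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1))
net = cluster_conv_kernels(net, n_centroids=8)
```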

Enhanced Deep Residual Networks for Single Image Super-Resolution

46 code implementations • 10 Jul 2017 • Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee

Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN).

Image Super-Resolution · Spectral Reconstruction
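
EDSR is known for residual blocks without batch normalization plus a small residual scaling factor; the sketch below shows a heavily reduced version of that layout. The block count, channel width, and toy model name here are not the full configuration reported in the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv, no batch normalization,
    scaled residual connection."""
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)

class TinyEDSR(nn.Module):
    """Reduced sketch of the EDSR layout: head conv, residual body, pixel-shuffle upsampler."""
    def __init__(self, channels=64, n_blocks=4, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.body(feat)    # global residual connection
        return self.upsample(feat)

sr = TinyEDSR()(torch.rand(1, 3, 32, 32))   # -> (1, 3, 64, 64)
```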

Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring

1 code implementation • CVPR 2017 • Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee

To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simple assumptions, such as that the blur kernel is partially uniform or locally linear.

Ranked #18 on Deblurring on RealBlur-R (trained on GoPro) (SSIM (sRGB) metric)

Deblurring · Image Deblurring
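
The multi-scale approach processes the blurry image coarse-to-fine, passing the coarser estimate up to guide the next scale. Below is a minimal sketch of that scheme, assuming tiny per-scale networks and bilinear up/downsampling; `ScaleNet` and `MultiScaleDeblur` are simplified stand-ins for the much deeper, end-to-end trained networks in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleNet(nn.Module):
    """Small per-scale network; the coarser-scale estimate is concatenated as guidance."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, blurry, coarse_estimate):
        return blurry + self.net(torch.cat([blurry, coarse_estimate], dim=1))

class MultiScaleDeblur(nn.Module):
    """Coarse-to-fine sketch: deblur a downsampled copy first, then upsample the
    result and refine it at the next finer scale."""
    def __init__(self, n_scales=3):
        super().__init__()
        self.scales = nn.ModuleList([ScaleNet() for _ in range(n_scales)])

    def forward(self, blurry):
        estimate = None
        for k in range(len(self.scales) - 1, -1, -1):    # coarsest scale first
            scaled = F.interpolate(blurry, scale_factor=0.5 ** k, mode='bilinear',
                                   align_corners=False) if k > 0 else blurry
            if estimate is None:
                estimate = scaled                        # initialize with the blurry input
            else:
                estimate = F.interpolate(estimate, size=scaled.shape[-2:],
                                         mode='bilinear', align_corners=False)
            estimate = self.scales[k](scaled, estimate)
        return estimate                                  # finest-scale result

restored = MultiScaleDeblur()(torch.rand(1, 3, 64, 64))
```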

Dynamic Scene Deblurring using a Locally Adaptive Linear Blur Model

no code implementations • 14 Mar 2016 • Tae Hyun Kim, Seungjun Nah, Kyoung Mu Lee

We infer bidirectional optical flows to handle motion blur, and also estimate Gaussian blur maps to remove optical blur caused by defocus in our new blur model.

Deblurring · Optical Flow Estimation +1
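
To make the locally linear motion-blur assumption concrete, the sketch below synthesizes blur by averaging samples taken along each pixel's own flow vector. This is only the forward blur model under a given flow; the paper works in the opposite direction, estimating bidirectional flows and Gaussian defocus maps from the blurry input, and the function name and sample count here are illustrative.

```python
import torch
import torch.nn.functional as F

def locally_linear_blur(img, flow, n_samples=9):
    """Average each pixel over positions sampled along its own flow vector
    (from -flow/2 to +flow/2), approximating a locally linear motion blur.
    `flow` is a per-pixel displacement in pixels, shape (b, 2, h, w)."""
    b, c, h, w = img.shape
    # Base sampling grid in normalized [-1, 1] coordinates, shape (b, h, w, 2).
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing='ij')
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Per-pixel flow converted to normalized units, shape (b, h, w, 2).
    norm_flow = torch.stack([flow[:, 0] * 2 / w, flow[:, 1] * 2 / h], dim=-1)

    acc = torch.zeros_like(img)
    for t in torch.linspace(-0.5, 0.5, n_samples):
        grid = base + t * norm_flow
        acc += F.grid_sample(img, grid, mode='bilinear', padding_mode='border',
                             align_corners=True)
    return acc / n_samples

sharp = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
flow[:, 0] = 6.0                        # 6-pixel horizontal motion everywhere
blurry = locally_linear_blur(sharp, flow)
```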
