no code implementations • 11 Nov 2024 • NVIDIA: Yuval Atzmon, Maciej Bala, Yogesh Balaji, Tiffany Cai, Yin Cui, Jiaojiao Fan, Yunhao Ge, Siddharth Gururani, Jacob Huffman, Ronald Isaac, Pooya Jannaty, Tero Karras, Grace Lam, J. P. Lewis, Aaron Licata, Yen-Chen Lin, Ming-Yu Liu, Qianli Ma, Arun Mallya, Ashlee Martino-Tarr, Doug Mendez, Seungjun Nah, Chris Pruett, Fitsum Reda, Jiaming Song, Ting-Chun Wang, Fangyin Wei, Xiaohui Zeng, Yu Zeng, Qinsheng Zhang
We introduce Edify Image, a family of diffusion models capable of generating photorealistic image content with pixel-perfect accuracy.
no code implementations • ICCV 2023 • Songwei Ge, Seungjun Nah, Guilin Liu, Tyler Poon, Andrew Tao, Bryan Catanzaro, David Jacobs, Jia-Bin Huang, Ming-Yu Liu, Yogesh Balaji
Despite tremendous progress in generating high-quality images using diffusion models, synthesizing a sequence of animated frames that are both photorealistic and temporally coherent is still in its infancy.
Ranked #8 on Text-to-Video Generation on UCF-101
2 code implementations • 2 Nov 2022 • Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, Ming-Yu Liu
In contrast to existing works, we propose to train an ensemble of text-to-image diffusion models specialized for different synthesis stages.
Ranked #14 on Text-to-Image Generation on MS COCO
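A minimal sketch of dispatching such an ensemble at sampling time (not the authors' eDiff-I code; the interval boundaries, the `experts` layout, and the denoiser signatures are assumptions for illustration):

```python
import torch.nn as nn

class ExpertEnsembleDenoiser(nn.Module):
    """Route each denoising step to a stage-specialized expert denoiser.

    Hypothetical sketch: `experts` is a list of (t_low, t_high, module) tuples
    whose intervals cover the normalized noise schedule [0, 1].
    """

    def __init__(self, experts):
        super().__init__()
        self.bounds = [(lo, hi) for lo, hi, _ in experts]
        self.models = nn.ModuleList([m for _, _, m in experts])

    def forward(self, x_t, t, text_emb):
        # t: scalar normalized timestep in [0, 1]; pick the expert whose
        # interval of the noise schedule contains it.
        for (lo, hi), model in zip(self.bounds, self.models):
            if lo <= t <= hi:
                return model(x_t, t, text_emb)
        raise ValueError(f"timestep {t} not covered by any expert interval")
```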
1 code implementation • 21 Jul 2022 • Cheeun Hong, Sungyong Baik, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
In this work, to achieve high average bit-reduction with less accuracy loss, we propose a novel Content-Aware Dynamic Quantization (CADyQ) method for SR networks that allocates optimal bits to local regions and layers adaptively based on the local contents of an input image.
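A rough sketch of content-aware bit allocation (assumptions: average gradient magnitude as the content measure and a simple threshold table; this is not the paper's exact CADyQ policy):

```python
import torch
import torch.nn.functional as F

def assign_patch_bitwidths(img, patch=32, thresholds=(0.02, 0.05), bits=(4, 6, 8)):
    """Map each image patch to a bit-width from its average gradient magnitude.

    img: (1, C, H, W) tensor in [0, 1]. Flat patches get fewer bits while
    textured patches keep more bits (hypothetical policy).
    """
    gray = img.mean(dim=1, keepdim=True)
    gx = gray[..., :, 1:] - gray[..., :, :-1]
    gy = gray[..., 1:, :] - gray[..., :-1, :]
    grad = F.pad(gx.abs(), (0, 1)) + F.pad(gy.abs(), (0, 0, 0, 1))
    # Average gradient per non-overlapping patch.
    score = F.avg_pool2d(grad, kernel_size=patch, stride=patch)
    bitmap = torch.full_like(score, bits[-1])
    bitmap[score < thresholds[1]] = bits[1]
    bitmap[score < thresholds[0]] = bits[0]
    return bitmap  # (1, 1, H//patch, W//patch): bit-width per patch
```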
1 code implementation • CVPR 2022 • Junghun Oh, Heewon Kim, Seungjun Nah, Cheeun Hong, Jonghyun Choi, Kyoung Mu Lee
Image restoration tasks have witnessed great performance improvement in recent years by developing large deep models.
no code implementations • 30 Mar 2022 • JoonKyu Park, Seungjun Nah, Kyoung Mu Lee
When motion blur is strong, however, the hidden states struggle to deliver proper information due to the displacement between frames.
no code implementations • 12 Mar 2022 • JoonKyu Park, Seungjun Nah, Kyoung Mu Lee
State-of-the-art video deblurring methods often adopt recurrent neural networks to model the temporal dependency between the frames.
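For reference, a bare-bones recurrent video deblurring loop looks like the generic sketch below (`DeblurCell` is a hypothetical per-frame network, not this paper's architecture):

```python
import torch
import torch.nn as nn

class DeblurCell(nn.Module):
    """Hypothetical per-frame cell: takes a blurry frame plus the previous
    hidden state and returns a restored frame and an updated hidden state."""

    def __init__(self, ch=64):
        super().__init__()
        self.encode = nn.Conv2d(3 + ch, ch, 3, padding=1)
        self.decode = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, frame, hidden):
        h = torch.relu(self.encode(torch.cat([frame, hidden], dim=1)))
        return frame + self.decode(h), h  # residual restoration, new state

def deblur_video(frames, cell, ch=64):
    """frames: list of (1, 3, H, W) tensors. The hidden state carries
    information across time, which recurrent video deblurring relies on."""
    _, _, H, W = frames[0].shape
    hidden = frames[0].new_zeros(1, ch, H, W)
    outputs = []
    for frame in frames:
        restored, hidden = cell(frame, hidden)
        outputs.append(restored)
    return outputs
```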
no code implementations • 30 Apr 2021 • Sanghyun Son, Suyoung Lee, Seungjun Nah, Radu Timofte, Kyoung Mu Lee
Super-Resolution (SR) is a fundamental computer vision task that aims to obtain a high-resolution clean image from the given low-resolution counterpart.
no code implementations • 30 Apr 2021 • Seungjun Nah, Sanghyun Son, Suyoung Lee, Radu Timofte, Kyoung Mu Lee
In this challenge report, we describe the challenge specifics and the evaluation results from the two competition tracks, together with the proposed solutions.
no code implementations • ICLR 2022 • Seungjun Nah, Sanghyun Son, Jaerin Lee, Kyoung Mu Lee
The supervised reblurring loss at the training stage compares the amplified blur between the deblurred and the sharp images.
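A hedged sketch of such a loss (the reblurring module here is a placeholder network; the paper's blur-amplification mechanism is not reproduced):

```python
import torch.nn as nn
import torch.nn.functional as F

class ReblurNet(nn.Module):
    """Placeholder reblurring module: predicts a reblurred version of its input.
    In practice this would be a learned network that amplifies residual blur."""

    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

def supervised_reblurring_loss(deblurred, sharp, reblur_net):
    # Compare the amplified blur predicted for the deblurred output against
    # the amplified blur predicted for the ground-truth sharp image.
    return F.l1_loss(reblur_net(deblurred), reblur_net(sharp))
```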
no code implementations • 28 Sep 2020 • Sanghyun Son, Jaerin Lee, Seungjun Nah, Radu Timofte, Kyoung Mu Lee
Videos in the real world contain various dynamics and motions that may look unnaturally discontinuous in time when the recorded frame rate is low.
no code implementations • 4 May 2020 • Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee
Videos contain various types and strengths of motions that may look unnaturally discontinuous in time when the recorded frame rate is low.
no code implementations • 4 May 2020 • Seungjun Nah, Sanghyun Son, Radu Timofte, Kyoung Mu Lee
This paper reviews the NTIRE 2020 Challenge on Image and Video Deblurring.
no code implementations • CVPR 2019 • Seungjun Nah, Sanghyun Son, Kyoung Mu Lee
In this work, we aim to improve the accuracy of recurrent models by adapting the hidden states transferred from past frames to the frame being processed, so that the relations between video frames can be better exploited.
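A minimal sketch of the idea (hypothetical adaptation module, not the paper's exact architecture): the hidden state carried from past frames is transformed conditioned on the current frame before it is consumed.

```python
import torch
import torch.nn as nn

class HiddenStateAdapter(nn.Module):
    """Adapt the hidden state transferred from past frames to the frame being
    processed, so that temporal information stays aligned with the new content.
    Hypothetical module: predicts a per-pixel gate and a residual update."""

    def __init__(self, ch=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(3 + ch, ch, 3, padding=1), nn.Sigmoid())
        self.update = nn.Conv2d(3 + ch, ch, 3, padding=1)

    def forward(self, frame, hidden):
        x = torch.cat([frame, hidden], dim=1)
        return self.gate(x) * hidden + self.update(x)
```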
1 code implementation • ECCV 2018 • Sanghyun Son, Seungjun Nah, Kyoung Mu Lee
In this paper, we propose a novel method to compress CNNs by reconstructing the network from a small set of spatial convolution kernels.
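A rough sketch of the idea, under the assumption of a shared 2-D kernel basis with per-layer mixing coefficients (the basis size and layer shapes below are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisConv2d(nn.Module):
    """Convolution whose 3x3 kernels are linear combinations of a small shared
    basis of spatial kernels, so only the basis and the mixing coefficients
    need to be stored for each layer."""

    def __init__(self, basis, in_ch, out_ch):
        super().__init__()
        self.basis = basis                      # (K, 3, 3), shared across layers
        k = basis.shape[0]
        self.coef = nn.Parameter(torch.randn(out_ch, in_ch, k) * 0.1)

    def forward(self, x):
        # Reconstruct the full kernel tensor: (out_ch, in_ch, 3, 3).
        weight = torch.einsum('oik,khw->oihw', self.coef, self.basis)
        return F.conv2d(x, weight, padding=1)

# Usage: one shared basis of, say, 16 spatial kernels reused by every layer.
shared_basis = nn.Parameter(torch.randn(16, 3, 3) * 0.1)
layer = BasisConv2d(shared_basis, in_ch=64, out_ch=64)
```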
46 code implementations • 10 Jul 2017 • Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNNs).
Ranked #1 on Image Super-Resolution on DIV2K val - 4x upscaling (PSNR metric)
1 code implementation • CVPR 2017 • Seungjun Nah, Tae Hyun Kim, Kyoung Mu Lee
To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simplifying assumptions, such as the blur kernel being partially uniform or locally linear.
Ranked #18 on Deblurring on RealBlur-R (trained on GoPro) (SSIM (sRGB) metric)
no code implementations • 14 Mar 2016 • Tae Hyun Kim, Seungjun Nah, Kyoung Mu Lee
In our new blur model, we infer bidirectional optical flows to handle motion blur and estimate Gaussian blur maps to remove optical blur caused by defocus.
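A toy rendering of such a blur model, heavily simplified (motion blur approximated by averaging a few flow-based warps, defocus by blending with a smoothed image weighted by a defocus map; none of this is the paper's estimation procedure):

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Backward-warp img (1, C, H, W) by a dense flow field (1, 2, H, W) in pixels."""
    _, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    grid_x = (xs + flow[:, 0]) / (W - 1) * 2 - 1
    grid_y = (ys + flow[:, 1]) / (H - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def render_blurry(sharp, flow_fwd, flow_bwd, defocus_map, steps=5):
    """Toy blur model: average warps along bidirectional flows (motion blur),
    then blend with a smoothed image weighted by a per-pixel defocus map."""
    acc = sharp.clone()
    for i in range(1, steps + 1):
        t = i / steps
        acc = acc + warp(sharp, flow_fwd * t) + warp(sharp, flow_bwd * t)
    motion_blurred = acc / (2 * steps + 1)
    # Crude stand-in for a per-pixel Gaussian blur.
    smoothed = F.avg_pool2d(motion_blurred, 5, stride=1, padding=2)
    w = defocus_map.clamp(0, 1)  # (1, 1, H, W) blending weight
    return (1 - w) * motion_blurred + w * smoothed
```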