no code implementations • 27 Jun 2024 • Sonam Gupta, Snehal Singh Tomar, Grigorios G Chrysos, Sukhendu Das, A. N. Rajagopalan
Extracting Implicit Neural Representations (INRs) on video data poses unique challenges due to the additional temporal dimension.
no code implementations • 9 Feb 2024 • Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
This paper tackles the problem of motion deblurring of dynamic scenes.
no code implementations • 22 Dec 2023 • Snehal Singh Tomar, A. N. Rajagopalan
Consequently, style editing of the chosen ROIs amounts to a simple combination of (a) the ROI mask generated from the sliced structure representation and (b) the decoded image with global style changes, generated from the manipulated (using Gaussian noise) global style and the unchanged structure tensor.
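The masked combination described above can be sketched in a few lines. This is a toy 1-D illustration, not the paper's implementation: images are flat lists of pixel values and the helper name is hypothetical.

```python
def combine_roi_style(original, restyled, roi_mask):
    """Blend two decoded images: inside the ROI (mask == 1) take the pixel
    from the globally restyled image; outside it, keep the original pixel."""
    return [r if m else o for o, r, m in zip(original, restyled, roi_mask)]

# Toy example: restyle only the middle pixel.
edited = combine_roi_style([10, 20, 30], [99, 99, 99], [0, 1, 0])  # -> [10, 99, 30]
```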
no code implementations • 5 Jun 2023 • Praveen Kandula, Maitreya Suin, A. N. Rajagopalan
Different ablation studies show the importance of PAM and CIN in improving the visible quality of the image.
no code implementations • 5 Jun 2023 • Praveen Kandula, A. N. Rajagopalan
We then propose the use of knowledge distillation to train a restoration network using the generated image pairs.
no code implementations • 5 Jun 2023 • Praveen Kandula, A. N. Rajagopalan
Several supervised networks exist that remove haze information from underwater images using paired datasets and pixel-wise loss functions.
no code implementations • CVPR 2023 • Aakanksha, A. N. Rajagopalan
Semantic segmentation involves classifying each pixel into one of a pre-defined set of object/stuff classes.
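The per-pixel classification that defines semantic segmentation reduces, at inference time, to an argmax over class scores for each pixel. A minimal sketch (toy data, hypothetical function name):

```python
def segment(pixel_logits):
    """Assign each pixel the index of its highest-scoring class.
    pixel_logits: list of per-pixel score lists, one score per class."""
    return [max(range(len(scores)), key=scores.__getitem__)
            for scores in pixel_logits]

labels = segment([[0.1, 0.7, 0.2], [0.9, 0.05, 0.05]])  # -> [1, 0]
```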
1 code implementation • ICCV 2023 • Nisha Varghese, Ashish Kumar, A. N. Rajagopalan
To obtain improved estimates of depth from a single UW image, we propose a deep learning (DL) method that utilizes both haze and geometry during training.
no code implementations • 21 Nov 2022 • Snehal Singh Tomar, Maitreya Suin, A. N. Rajagopalan
Both inversion of real images and determination of controllable latent directions are computationally expensive operations.
no code implementations • 20 Nov 2022 • Snehal Singh Tomar, Maitreya Suin, A. N. Rajagopalan
Our model fuses per-pixel local information learned using two fully convolutional depth encoders with global contextual information learned by a transformer encoder at different scales.
no code implementations • 5 Jul 2022 • Snehal Singh Tomar, A. N. Rajagopalan
In this work, we endeavour to do away with the priors and complex pre-processing operations required by SOTA multi-class face segmentation models. We reframe face segmentation as a downstream task, performed after infusing disentanglement with respect to facial semantic regions of interest (ROIs) into the latent space of a Generative Autoencoder model.
no code implementations • 28 Jan 2022 • Kuldeep Purohit, Srimanta Mandal, A. N. Rajagopalan
To enable super-resolution for multiple factors, we propose a scale-recurrent framework which reutilizes the filters learnt for lower scale factors recursively for higher factors.
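The recursive reuse of filters across scale factors can be sketched as applying one learned x2 unit repeatedly: twice for x4, three times for x8. A toy 1-D stand-in (the doubling unit here is nearest-neighbour duplication, not the paper's learned filters):

```python
def scale_recurrent_sr(signal, times, sr_unit):
    """Reuse the same x2 super-resolution unit recursively to reach
    higher scale factors: x2 applied `times` times gives x(2**times)."""
    for _ in range(times):
        signal = sr_unit(signal)
    return signal

# Stand-in x2 unit: nearest-neighbour duplication of a 1-D signal.
double = lambda sig: [v for v in sig for _ in (0, 1)]
x4 = scale_recurrent_sr([1, 2], 2, double)  # -> [1, 1, 1, 1, 2, 2, 2, 2]
```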
no code implementations • 28 Jan 2022 • Kuldeep Purohit, Srimanta Mandal, A. N. Rajagopalan
In this paper, we propose a scale recurrent SR architecture built upon units containing series of dense connections within a residual block (Residual Dense Blocks (RDBs)) that allow extraction of abundant local features from the image.
no code implementations • 28 Jan 2022 • Kuldeep Purohit, Anshul Shah, A. N. Rajagopalan
This network extracts embedded motion information from the blurred image to generate a sharp video in conjunction with the trained recurrent video decoder.
no code implementations • 1 Jan 2022 • Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
Image restoration is the task of recovering a clean image from a degraded version.
no code implementations • 1 Jan 2022 • Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
We deploy cross- and self-distillation techniques and discuss the need for a dedicated completion block in the encoder to achieve the distillation target.
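At its core, distillation penalises the gap between student and teacher features. A toy sketch of such a loss on flat feature vectors (real methods distill whole feature maps; the function name is hypothetical):

```python
def distillation_loss(student_feats, teacher_feats):
    """Mean squared error pulling the student's features toward the
    teacher's features at matched positions."""
    n = len(student_feats)
    return sum((s - t) ** 2 for s, t in zip(student_feats, teacher_feats)) / n

loss = distillation_loss([0.5, 1.0], [1.0, 1.0])  # (0.25 + 0) / 2 = 0.125
```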
no code implementations • 1 Jan 2022 • Maitreya Suin, A. N. Rajagopalan
This paper tackles the challenging problem of video deblurring.
no code implementations • 1 Jan 2022 • Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
This paper tackles the problem of dynamic scene deblurring.
no code implementations • 14 Dec 2021 • Srimanta Mandal, Kuldeep Purohit, A. N. Rajagopalan
In practice, images can contain different amounts of noise for different color channels, which is not acknowledged by existing super-resolution approaches.
no code implementations • 12 Dec 2021 • Praveen K, Lokesh Kumar T, A. N. Rajagopalan
The motion block predicts camera pose for every row of the input RS distorted image while the trajectory module fits estimated motion parameters to a third-order polynomial.
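Evaluating a third-order polynomial at each row index yields one pose parameter per row of the rolling-shutter image. A minimal sketch of that trajectory model (scalar pose per row for illustration; the real model estimates full camera poses):

```python
def row_trajectory(coeffs, num_rows):
    """Evaluate a third-order polynomial motion model at each row index r:
    pose(r) = a0 + a1*r + a2*r^2 + a3*r^3."""
    a0, a1, a2, a3 = coeffs
    return [a0 + a1 * r + a2 * r ** 2 + a3 * r ** 3 for r in range(num_rows)]

# Degenerate linear case: pose grows uniformly down the rows.
poses = row_trajectory((0.0, 0.1, 0.0, 0.0), 4)
```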
no code implementations • 23 Nov 2021 • Nisha Varghese, Mahesh Mohan M. R., A. N. Rajagopalan
Such datasets, which are a rarity, can be a valuable asset for contemporary deep learning methods.
no code implementations • ICCV 2021 • Kuldeep Purohit, Maitreya Suin, A. N. Rajagopalan, Vishnu Naresh Boddeti
However, we hypothesize that such spatially rigid processing is suboptimal for simultaneously restoring the degraded pixels as well as reconstructing the clean regions of the image.
no code implementations • CVPR 2021 • Maitreya Suin, A. N. Rajagopalan
Video deblurring remains a challenging task due to the complexity of spatially and temporally varying blur.
no code implementations • ICCV 2021 • Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
Image inpainting methods have recently shown significant improvements by using deep neural networks.
1 code implementation • ICCV 2021 • Kranthi Kumar Rachavarapu, Aakanksha, Vignesh Sundaresha, A. N. Rajagopalan
Through a user study, we further validate that our proposed approach generates binaural-quality audio using as little as 10% of explicit binaural supervision data for the SG network.
no code implementations • 10 Nov 2020 • Andrey Ignatov, Radu Timofte, Ming Qian, Congyu Qiao, Jiamin Lin, Zhenyu Guo, Chenghua Li, Cong Leng, Jian Cheng, Juewen Peng, Xianrui Luo, Ke Xian, Zijin Wu, Zhiguo Cao, Densen Puthussery, Jiji C V, Hrishikesh P S, Melvin Kuriakose, Saikat Dutta, Sourya Dipta Das, Nisarg A. Shah, Kuldeep Purohit, Praveen Kandula, Maitreya Suin, A. N. Rajagopalan, Saagara M B, Minnu A L, Sanjana A R, Praseeda S, Ge Wu, Xueqin Chen, Tengyao Wang, Max Zheng, Hulk Wong, Jay Zou
This paper reviews the second AIM realistic bokeh effect rendering challenge and provides the description of the proposed solutions and results.
2 code implementations • 27 Sep 2020 • Majed El Helou, Ruofan Zhou, Sabine Süsstrunk, Radu Timofte, Mahmoud Afifi, Michael S. Brown, Kele Xu, Hengxing Cai, Yuzhong Liu, Li-Wen Wang, Zhi-Song Liu, Chu-Tak Li, Sourya Dipta Das, Nisarg A. Shah, Akashdeep Jassal, Tongtong Zhao, Shanshan Zhao, Sabari Nathan, M. Parisa Beham, R. Suganya, Qing Wang, Zhongyun Hu, Xin Huang, Yaning Li, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Densen Puthussery, Hrishikesh P. S, Melvin Kuriakose, Jiji C. V, Yu Zhu, Liping Dong, Zhuolong Jiang, Chenghua Li, Cong Leng, Jian Cheng
The first track considered one-to-one relighting; the objective was to relight an input photo of a scene with a different color temperature and illuminant orientation (i.e., light source position).
1 code implementation • 24 Sep 2020 • Priyatham Kattakinda, A. N. Rajagopalan
A majority of methods for image denoising are no exception to this rule and hence demand pairs of noisy and corresponding clean images.
3 code implementations • 15 Sep 2020 • Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng, Jian Cheng, Guangyang Wu, Wenyi Wang, Xiaohong Liu, Hengyuan Zhao, Xiangtao Kong, Jingwen He, Yu Qiao, Chao Dong, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Xiaochuan Li, Zhiqiang Lang, Jiangtao Nie, Wei Wei, Lei Zhang, Abdul Muqeet, Jiwon Hwang, Subin Yang, JungHeum Kang, Sung-Ho Bae, Yongwoo Kim, Geun-Woo Jeon, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Steven Marty, Eric Marty, Dongliang Xiong, Siang Chen, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Haicheng Wang, Vineeth Bhaskara, Alex Levinshtein, Stavros Tsogkas, Allan Jepson, Xiangzhen Kong, Tongtong Zhao, Shanshan Zhao, Hrishikesh P. S, Densen Puthussery, Jiji C. V, Nan Nan, Shuai Liu, Jie Cai, Zibo Meng, Jiaming Ding, Chiu Man Ho, Xuehui Wang, Qiong Yan, Yuzhi Zhao, Long Chen, Jiangtao Zhang, Xiaotong Luo, Liang Chen, Yanyun Qu, Long Sun, Wenhao Wang, Zhenbing Liu, Rushi Lan, Rao Muhammad Umer, Christian Micheloni
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results.
no code implementations • CVPR 2020 • Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan
Existing approaches achieve a large receptive field by increasing the number of generic convolution layers and the kernel size, but this comes at the expense of increased model size and slower inference.
Ranked #33 on Image Deblurring on GoPro (using extra training data)
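The trade-off between receptive field and depth/kernel size follows the standard convolution-arithmetic recurrence: each layer adds (kernel - 1) times the cumulative stride. A small calculator sketch (toy helper, not from the paper):

```python
def receptive_field(layers):
    """Receptive field of stacked conv layers, each given as
    (kernel_size, stride): rf grows by (k - 1) * cumulative stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Three stacked 3x3, stride-1 convolutions see a 7x7 window.
rf = receptive_field([(3, 1), (3, 1), (3, 1)])  # -> 7
```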
2 code implementations • AAAI Conference on Artificial Intelligence 2020 • Kuldeep Purohit, A. N. Rajagopalan
In this paper, we address the problem of dynamic scene deblurring in the presence of motion blur.
1 code implementation • 18 Nov 2019 • Andreas Lugmayr, Martin Danelljan, Radu Timofte, Manuel Fritsche, Shuhang Gu, Kuldeep Purohit, Praveen Kandula, Maitreya Suin, A. N. Rajagopalan, Nam Hyung Joon, Yu Seung Won, Guisik Kim, Dokyeong Kwon, Chih-Chung Hsu, Chia-Hsiang Lin, Yuanfei Huang, Xiaopeng Sun, Wen Lu, Jie Li, Xinbo Gao, Sefi Bell-Kligler
For training, only one set of source input images is therefore provided in the challenge.
no code implementations • 8 Nov 2019 • Shanxin Yuan, Radu Timofte, Gregory Slabaugh, Ales Leonardis, Bolun Zheng, Xin Ye, Xiang Tian, Yaowu Chen, Xi Cheng, Zhen-Yong Fu, Jian Yang, Ming Hong, Wenying Lin, Wenjin Yang, Yanyun Qu, Hong-Kyu Shin, Joon-Yeon Kim, Sung-Jea Ko, Hang Dong, Yu Guo, Jie Wang, Xuan Ding, Zongyan Han, Sourya Dipta Das, Kuldeep Purohit, Praveen Kandula, Maitreya Suin, A. N. Rajagopalan
A new dataset, called LCDMoire, was created for this challenge; it consists of 10,200 synthetically generated image pairs (moiré and clean ground truth).
no code implementations • 7 Apr 2019 • Kuldeep Purohit, Subeesh Vasu, M. Purnachandra Rao, A. N. Rajagopalan
We first propose an approach for estimating the normal of a planar scene from a single motion-blurred observation.
no code implementations • 25 Mar 2019 • Kuldeep Purohit, A. N. Rajagopalan
In this paper, we address the problem of dynamic scene deblurring in the presence of motion blur.
Ranked #32 on Image Deblurring on GoPro (using extra training data)
no code implementations • 3 Oct 2018 • Andrey Ignatov, Radu Timofte, Thang Van Vu, Tung Minh Luu, Trung X. Pham, Cao Van Nguyen, Yongwoo Kim, Jae-Seok Choi, Munchurl Kim, Jie Huang, Jiewen Ran, Chen Xing, Xingguang Zhou, Pengfei Zhu, Mingrui Geng, Yawei Li, Eirikur Agustsson, Shuhang Gu, Luc van Gool, Etienne de Stoutz, Nikolay Kobyshev, Kehui Nie, Yan Zhao, Gen Li, Tong Tong, Qinquan Gao, Liu Hanwen, Pablo Navarrete Michelini, Zhu Dan, Hu Fengshuo, Zheng Hui, Xiumei Wang, Lirui Deng, Rang Meng, Jinghui Qin, Yukai Shi, Wushao Wen, Liang Lin, Ruicheng Feng, Shixiang Wu, Chao Dong, Yu Qiao, Subeesh Vasu, Nimisha Thekke Madam, Praveen Kandula, A. N. Rajagopalan, Jie Liu, Cheolkon Jung
This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones.
no code implementations • ECCV 2018 • Thekke Madam Nimisha, Kumar Sunil, A. N. Rajagopalan
To improve the stability of GAN and to preserve the image correspondence, we introduce an additional CNN module that reblurs the generated GAN output to match with the blurred input.
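The reblurring-based correspondence constraint amounts to a consistency loss: reblur the generated sharp output and penalise its deviation from the observed blurred input. A toy 1-D sketch under that assumption (simple box blur and MSE stand in for the paper's CNN reblur module):

```python
def reblur_consistency_loss(deblurred, blurred_input, reblur):
    """Pass the sharp output through a reblurring module and penalise
    its mean squared deviation from the observed blurred input."""
    reblurred = reblur(deblurred)
    n = len(blurred_input)
    return sum((a - b) ** 2 for a, b in zip(reblurred, blurred_input)) / n

# Stand-in reblur module: 3-tap box blur with edge clamping.
blur3 = lambda sig: [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, len(sig) - 1)]) / 3
                     for i in range(len(sig))]

# With a perfect reblur model and a consistent pair, the loss is zero.
loss = reblur_consistency_loss([0, 3, 0], blur3([0, 3, 0]), blur3)  # -> 0.0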
no code implementations • CVPR 2018 • Subeesh Vasu, Mahesh Mohan M. R., A. N. Rajagopalan
Due to their sequential readout mechanism, images acquired with a moving camera are subject to the rolling shutter effect, which manifests as geometric distortions.
no code implementations • CVPR 2018 • Subeesh Vasu, Venkatesh Reddy Maligireddy, A. N. Rajagopalan
Blind motion deblurring methods are primarily responsible for recovering an accurate estimate of the blur kernel.
no code implementations • CVPR 2018 • M. R. Mahesh Mohan, A. N. Rajagopalan
Consequently, blind deblurring of any single subaperture image elegantly paves the way for cost-effective non-blind deblurring of the other subaperture images.
1 code implementation • CVPR 2019 • Kuldeep Purohit, Anshul Shah, A. N. Rajagopalan
This network extracts embedded motion information from the blurred image to generate a sharp video in conjunction with the trained recurrent video decoder.
Ranked #43 on Image Deblurring on GoPro (using extra training data)
no code implementations • ICCV 2017 • Mahesh Mohan M. R., A. N. Rajagopalan, Gunasekaran Seetharaman
Most present-day imaging devices are equipped with CMOS sensors.
no code implementations • ICCV 2017 • T. M. Nimisha, Akash Kumar Singh, A. N. Rajagopalan
In this paper, we investigate deep neural networks for blind motion deblurring.
no code implementations • CVPR 2017 • Subeesh Vasu, A. N. Rajagopalan
In this work, we investigate the relation between the edge profiles present in a motion blurred image and the underlying camera motion responsible for causing the motion blur.
no code implementations • CVPR 2017 • Vijay Rengarajan, Yogesh Balaji, A. N. Rajagopalan
Our single-image correction method fares well against video-based methods even when operating in a frame-by-frame manner, and it outperforms scene-specific correction schemes even under challenging situations.
no code implementations • ICCV 2015 • Abhijith Punnappurath, Vijay Rengarajan, A. N. Rajagopalan
But CMOS sensors, which have increasingly started to replace their more expensive CCD counterparts in many applications, do not respect this assumption: owing to their row-wise acquisition mechanism, the assumption breaks down whenever the camera moves relative to the scene during the exposure of an image.
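The row-wise acquisition can be made concrete with per-row capture times: unlike a global shutter, each row of a CMOS rolling-shutter sensor is read out at a slightly later instant, so camera motion during readout distorts different rows differently. A minimal sketch (hypothetical helper, time units arbitrary):

```python
def row_capture_times(t_start, row_readout, num_rows):
    """Rolling-shutter acquisition: row r is captured at
    t_start + r * row_readout. A global-shutter (CCD-style) sensor
    would expose all rows at t_start."""
    return [t_start + r * row_readout for r in range(num_rows)]

times = row_capture_times(0.0, 2.0, 3)  # -> [0.0, 2.0, 4.0]
```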