no code implementations • 13 Mar 2025 • Avinash Paliwal, Xilong Zhou, Wei Ye, Jinhui Xiong, Rakesh Ranjan, Nima Khademi Kalantari
We demonstrate that by separating the process into two tasks, handled by dedicated repair and inpainting models, we produce results with detailed textures in both visible and missing regions, outperforming state-of-the-art approaches on a diverse set of scenes with extremely sparse inputs.
1 code implementation • 6 Dec 2024 • Avinash Paliwal, Xilong Zhou, Andrii Tsarov, Nima Khademi Kalantari
In this paper, we present PanoDreamer, a novel method for producing a coherent 360° 3D scene from a single input image.
no code implementations • 18 Nov 2024 • Libing Zeng, Nima Khademi Kalantari
Following this observation, we conduct an analysis and demonstrate that the bias appears not only in band 0 (DC term), but also in the other bands of the estimated SH coefficients.
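For readers unfamiliar with the band terminology: band l of a spherical-harmonics expansion contributes 2l + 1 coefficients, and band 0 is the single DC term the abstract mentions. A minimal sketch of the standard flat indexing (illustrative only, with a hypothetical helper name, not code from the paper):

```python
def sh_band_indices(l):
    """Return the flat coefficient indices belonging to SH band l.

    Band l contributes 2l + 1 coefficients; in the standard flattened
    ordering, band l occupies indices l**2 .. (l + 1)**2 - 1, so band 0
    (the DC term) is the single index 0.
    """
    return list(range(l * l, (l + 1) * (l + 1)))
```

Under this indexing, a bias confined to the DC term would affect only coefficient 0; the paper's observation is that the bias also appears at the higher indices.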
1 code implementation • 28 Mar 2024 • Avinash Paliwal, Wei Ye, Jinhui Xiong, Dmytro Kotovenko, Rakesh Ranjan, Vikas Chandra, Nima Khademi Kalantari
The field of 3D reconstruction from images has rapidly evolved in the past few years, first with the introduction of Neural Radiance Field (NeRF) and more recently with 3D Gaussian Splatting (3DGS).
1 code implementation • 19 Sep 2023 • Avinash Paliwal, Brandon Nguyen, Andrii Tsarov, Nima Khademi Kalantari
To address this major problem, we make the key observation that synthesizing novel views requires changing the shading of the pixels according to the novel camera and moving them to appropriate locations.
no code implementations • 20 May 2023 • Xilong Zhou, Miloš Hašan, Valentin Deschaintre, Paul Guerrero, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Nima Khademi Kalantari
Instead, we train a generator for a neural material representation that is rendered with a learned relighting module to create arbitrarily lit RGB images; these are compared against real photos using a discriminator.
1 code implementation • CVPR 2023 • Avinash Paliwal, Andrii Tsarov, Nima Khademi Kalantari
In this paper, we propose an approach for view-time interpolation of stereo videos.
no code implementations • CVPR 2023 • Libing Zeng, Lele Chen, Wentao Bao, Zhong Li, Yi Xu, Junsong Yuan, Nima Khademi Kalantari
Accurate facial landmark detection on wild images plays an essential role in human-computer interaction, entertainment, and medical applications.
1 code implementation • 27 Sep 2022 • Pedro Figueirêdo, Avinash Paliwal, Nima Khademi Kalantari
To do this, we propose to encode the bidirectional flows into a coordinate-based network, powered by a hypernetwork, to obtain a continuous representation of the flow across time.
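As a much simplified illustration of a continuous-in-time flow representation, one can scale a frame-to-frame flow by the query time under a linear-motion assumption; this is not the paper's hypernetwork-powered coordinate network, only the baseline idea it generalizes (function name is hypothetical):

```python
def flow_at_time(flow_0to1, t):
    """Approximate the flow from frame 0 to an intermediate time t in [0, 1]
    by linearly scaling the frame-0-to-frame-1 flow (constant-velocity
    assumption). `flow_0to1` is a list of (dx, dy) vectors, one per pixel.
    """
    return [(t * dx, t * dy) for (dx, dy) in flow_0to1]
```

A coordinate-based network replaces this fixed linear rule with a learned mapping from t to the flow field, allowing non-linear motion between the two frames.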
1 code implementation • 4 Mar 2021 • Avinash Paliwal, Libing Zeng, Nima Khademi Kalantari
We propose to do this by first explicitly aligning the neighboring frames to the current frame using a convolutional neural network (CNN).
Ranked #2 on Video Denoising on CRVD
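Explicit alignment of a neighboring frame to the current frame is typically implemented by backward-warping it with an estimated per-pixel flow. A minimal nearest-neighbor warp sketch (a hypothetical helper illustrating the warping step, not the paper's CNN-based aligner):

```python
def warp_frame(frame, flow):
    """Backward-warp a 2D grayscale frame by a per-pixel flow.

    For each output pixel (x, y), sample the source frame at the
    (rounded, border-clamped) location (x + dx, y + dy) given by flow[y][x].
    """
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y][x]
            sx = min(max(int(round(x + dx)), 0), w - 1)
            sy = min(max(int(round(y + dy)), 0), h - 1)
            out[y][x] = frame[sy][sx]
    return out
```

In practice, bilinear sampling replaces the nearest-neighbor rounding so the warp is differentiable and can be trained end-to-end with the rest of the network.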
no code implementations • 15 May 2020 • Marcel Santana Santos, Tsang Ing Ren, Nima Khademi Kalantari
Since the number of HDR images for training is limited, we propose to train our system in two stages.
Image and Video Processing • Graphics
1 code implementation • 27 Feb 2020 • Avinash Paliwal, Nima Khademi Kalantari
In this paper, we address this problem using two video streams as input: an auxiliary video with a high frame rate and low spatial resolution, which provides temporal information, alongside the standard main video with a low frame rate and high spatial resolution.
1 code implementation • 2 May 2019 • Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, Abhishek Kar
We present a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration.
no code implementations • 30 Jul 2018 • Sai Bi, Nima Khademi Kalantari, Ravi Ramamoorthi
Experimental results show that our approach outperforms state-of-the-art deep learning and non-learning methods, both visually and numerically, on various synthetic and real datasets.
1 code implementation • 8 May 2017 • Ting-Chun Wang, Jun-Yan Zhu, Nima Khademi Kalantari, Alexei A. Efros, Ravi Ramamoorthi
Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps.
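The 3 fps to 30 fps setting implies that each 2D video frame is bracketed by light-field keyframes spaced ten frames apart. A small sketch of that index arithmetic (hypothetical helper, assuming evenly spaced keyframes; the actual synthesis between keyframes is learned, not linear):

```python
def bracketing_keyframes(frame_idx, ratio=10):
    """For a 30 fps stream with light-field keyframes every `ratio` frames
    (3 fps vs. 30 fps gives ratio 10), return the surrounding keyframe
    indices and the fractional position t of the frame between them.
    """
    prev_kf = (frame_idx // ratio) * ratio
    next_kf = prev_kf + ratio
    t = (frame_idx - prev_kf) / ratio
    return prev_kf, next_kf, t
```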
no code implementations • 9 Sep 2016 • Nima Khademi Kalantari, Ting-Chun Wang, Ravi Ramamoorthi
Specifically, we propose a novel learning-based approach to synthesize new views from a sparse set of input views.