Search Results for author: Abhishek Kar

Found 20 papers, 4 papers with code

Category-Specific Object Reconstruction from a Single Image

no code implementations CVPR 2015 Abhishek Kar, Shubham Tulsiani, João Carreira, Jitendra Malik

Object reconstruction from a single image -- in the wild -- is a problem where we can make progress and get meaningful results today.

Object, Object Detection (+2 more)

Amodal Completion and Size Constancy in Natural Scenes

no code implementations ICCV 2015 Abhishek Kar, Shubham Tulsiani, João Carreira, Jitendra Malik

We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image.

Object, Object Detection (+3 more)

Shape and Symmetry Induction for 3D Objects

no code implementations 24 Nov 2015 Shubham Tulsiani, Abhishek Kar, Qi-Xing Huang, João Carreira, Jitendra Malik

Actions as simple as grasping an object or navigating around it require a rich understanding of that object's 3D shape from a given viewpoint.

General Classification, Object

Learning a Multi-View Stereo Machine

1 code implementation NeurIPS 2017 Abhishek Kar, Christian Häne, Jitendra Malik

We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches as well as recent learning based methods.

3D Reconstruction

Learning Independent Object Motion from Unlabelled Stereoscopic Videos

no code implementations CVPR 2019 Zhe Cao, Abhishek Kar, Christian Haene, Jitendra Malik

Unlike prior learning-based work, which has focused on predicting a dense pixel-wise optical flow field and/or a depth map for each image, we propose to predict object-instance-specific 3D scene flow maps and instance masks, from which we derive the motion direction and speed for each object instance (see the sketch below).

Object, Optical Flow Estimation
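One straightforward way to read out a per-instance motion direction and speed from a predicted 3D scene-flow map and instance mask is to average the flow inside the mask; a minimal sketch follows (the array shapes and the function name are assumptions for illustration, not the authors' code).

    import numpy as np

    def instance_motion(scene_flow, instance_mask):
        """Derive a motion direction and speed for one object instance.

        scene_flow:    (H, W, 3) predicted per-pixel 3D scene flow.
        instance_mask: (H, W) boolean mask for the instance.
        """
        flow = scene_flow[instance_mask]        # (N, 3) flow vectors inside the mask
        mean_flow = flow.mean(axis=0)           # average 3D motion of the instance
        speed = np.linalg.norm(mean_flow)       # speed = magnitude of the mean flow
        direction = mean_flow / (speed + 1e-8)  # direction = unit vector of the mean flow
        return direction, speed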

Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines

1 code implementation 2 May 2019 Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, Abhishek Kar

We present a practical and robust deep learning solution for capturing and rendering novel views of complex real world scenes for virtual exploration.

Novel View Synthesis

SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting

no code implementations ICCV 2021 Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Michael Krainin, Dominik Kaeser, William T. Freeman, David Salesin, Brian Curless, Ce Liu

We present SLIDE, a modular and unified system for single image 3D photography that uses a simple yet effective soft layering strategy (sketched below) to better preserve appearance details in novel views.

Image Matting
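As a rough illustration of the soft-layering idea only (not the full SLIDE pipeline, which also includes depth-aware inpainting and learned feature layers), a two-layer composite with a soft foreground alpha looks like this; the names and shapes are assumptions.

    import numpy as np

    def soft_composite(fg_rgb, fg_alpha, bg_rgb):
        """Soft two-layer compositing: a foreground layer with a soft (matting-style)
        alpha over an inpainted background layer, both already warped to the novel view.

        fg_rgb, bg_rgb: (H, W, 3) color layers.
        fg_alpha:       (H, W, 1) soft foreground opacity in [0, 1].
        """
        return fg_alpha * fg_rgb + (1.0 - fg_alpha) * bg_rgb

The soft alpha is what lets thin structures and appearance details survive where a hard foreground/background split would clip them.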

Learned Monocular Depth Priors in Visual-Inertial Initialization

no code implementations 20 Apr 2022 Yunwen Zhou, Abhishek Kar, Eric Turner, Adarsh Kowdle, Chao X. Guo, Ryan C. DuToit, Konstantine Tsotsos

Visual-inertial odometry (VIO) is the pose estimation backbone for most AR/VR and autonomous robotic systems today, in both academia and industry.

Pose Estimation

$\text{DC}^2$: Dual-Camera Defocus Control by Learning to Refocus

no code implementations CVPR 2023 Hadi AlZayer, Abdullah Abuolaim, Leung Chun Chan, Yang Yang, Ying Chen Lou, Jia-Bin Huang, Abhishek Kar

Smartphone cameras today are increasingly approaching the versatility and quality of professional cameras through a combination of hardware and software advancements.

Deblurring

Monocular Depth Estimation using Diffusion Models

no code implementations 28 Feb 2023 Saurabh Saxena, Abhishek Kar, Mohammad Norouzi, David J. Fleet

To cope with the limited availability of data for supervised training, we leverage pre-training on self-supervised image-to-image translation tasks; a generic denoising training step is sketched below.

Ranked #19 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Denoising, Image-to-Image Translation (+3 more)
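For context, here is a generic RGB-conditioned denoising training step of the kind such a model relies on. This is a standard DDPM-style epsilon loss with a hypothetical model interface, not the paper's exact parameterization or pre-training recipe.

    import torch
    import torch.nn.functional as F

    def denoising_step(model, rgb, depth, alphas_cumprod):
        """One training step for an RGB-conditioned depth diffusion model.

        model:          predicts the added noise from (noisy_depth, rgb, t)  [assumed interface]
        rgb:            (B, 3, H, W) conditioning image.
        depth:          (B, 1, H, W) target depth map, normalized to [-1, 1].
        alphas_cumprod: (T,) cumulative noise schedule.
        """
        b = depth.shape[0]
        t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=depth.device)
        a = alphas_cumprod[t].view(-1, 1, 1, 1)
        noise = torch.randn_like(depth)
        noisy = a.sqrt() * depth + (1.0 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
        pred = model(noisy, rgb, t)                           # predict the injected noise
        return F.mse_loss(pred, noise)

The pre-training mentioned above would run the same kind of step on self-supervised image-to-image targets before fine-tuning on depth data.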

ASIC: Aligning Sparse in-the-wild Image Collections

no code implementations ICCV 2023 Kamal Gupta, Varun Jampani, Carlos Esteves, Abhinav Shrivastava, Ameesh Makadia, Noah Snavely, Abhishek Kar

We present a self-supervised technique that directly optimizes on a sparse collection of images of a particular object/object category to obtain consistent dense correspondences across the collection.

Object

LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs

no code implementations ICCV 2023 Zezhou Cheng, Carlos Esteves, Varun Jampani, Abhishek Kar, Subhransu Maji, Ameesh Makadia

Consequently, there is growing interest in extending NeRF models to jointly optimize camera poses and scene representation (sketched below), offering an alternative to off-the-shelf SfM pipelines, which have well-understood failure modes.

Pose Estimation
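At its core, the joint optimization mentioned above is gradient descent on a photometric loss with both the scene weights and the per-image pose parameters treated as trainable. A minimal sketch follows, where nerf and render are placeholders, not LU-NeRF's actual API.

    import torch

    def joint_pose_and_scene(nerf, render, images, init_poses, steps=1000, lr=1e-3):
        """Jointly optimize a radiance field and per-image camera poses.

        nerf:       scene model exposing .parameters()          [placeholder]
        render:     render(nerf, pose) -> predicted image tensor [placeholder]
        images:     list of (H, W, 3) training image tensors.
        init_poses: (N, 6) initial se(3) pose parameters.
        """
        poses = torch.nn.Parameter(init_poses.clone())
        opt = torch.optim.Adam([*nerf.parameters(), poses], lr=lr)
        for _ in range(steps):
            i = torch.randint(len(images), (1,)).item()   # pick a random training view
            pred = render(nerf, poses[i])                  # render from its current pose estimate
            loss = (pred - images[i]).square().mean()      # photometric loss
            opt.zero_grad()
            loss.backward()                                # gradients reach both scene and pose
            opt.step()
        return nerf, poses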

Accelerating Neural Field Training via Soft Mining

no code implementations 29 Nov 2023 Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, Kwang Moo Yi

We present an approach to accelerate Neural Field training by efficiently selecting sampling locations.
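One simple instance of loss-aware sample selection, sketched under assumptions (the function name and the exact weighting are illustrative, not necessarily the paper's scheme): draw training pixels with probability tied to their running error rather than uniformly, and reweight the loss to correct for the non-uniform sampling.

    import torch

    def soft_mined_batch(errors, batch_size, alpha=0.8):
        """Sample pixel indices with probability proportional to errors**alpha.

        errors: (N,) running per-pixel reconstruction error.
        alpha:  softness exponent; alpha=0 recovers uniform sampling.
        Returns sampled indices and importance weights (1 / (N * p_i)) so the
        training loss can be reweighted to remain an unbiased estimate.
        """
        weights = errors.clamp_min(1e-8).pow(alpha)
        probs = weights / weights.sum()
        idx = torch.multinomial(probs, batch_size, replacement=True)
        return idx, 1.0 / (probs[idx] * errors.numel())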

NeRFiller: Completing Scenes via Generative 3D Inpainting

no code implementations 7 Dec 2023 Ethan Weber, Aleksander Hołyński, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, Angjoo Kanazawa

In contrast to related works, we focus on completing scenes rather than deleting foreground objects, and our approach does not require tight 2D object masks or text.

3D Inpainting

SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild

no code implementations 18 Jan 2024 Andreas Engelhardt, Amit Raj, Mark Boss, Yunzhi Zhang, Abhishek Kar, Yuanzhen Li, Deqing Sun, Ricardo Martin Brualla, Jonathan T. Barron, Hendrik P. A. Lensch, Varun Jampani

We present SHINOBI, an end-to-end framework for the reconstruction of shape, material, and illumination from object images captured with varying lighting, pose, and background.

Inverse Rendering, Object
