Search Results for author: Ganesh Iyer

Found 5 papers, 3 papers with code

Mesh Strikes Back: Fast and Efficient Human Reconstruction from RGB videos

no code implementations • 15 Mar 2023 • Rohit Jena, Pratik Chaudhari, James Gee, Ganesh Iyer, Siddharth Choudhary, Brandon M. Smith

Human reconstruction and synthesis from monocular RGB videos is a challenging problem due to clothing, occlusion, texture discontinuities and sharpness, and frame-specific pose changes.

Novel View Synthesis

ConceptFusion: Open-set Multimodal 3D Mapping

1 code implementation • 14 Feb 2023 • Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, Ayush Tewari, Joshua B. Tenenbaum, Celso Miguel de Melo, Madhava Krishna, Liam Paull, Florian Shkurti, Antonio Torralba

ConceptFusion leverages the open-set capabilities of today's foundation models pre-trained on internet-scale data to reason about concepts across modalities such as natural language, images, and audio.

3D geometry • Autonomous Driving • +2
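A minimal sketch of how such an open-set map can be queried in the spirit of ConceptFusion, assuming per-point embeddings fused from a CLIP-like foundation model; the tensor names and the clip_text_encoder helper are hypothetical, not the authors' API:

import torch
import torch.nn.functional as F

def query_map(point_features: torch.Tensor, text_embedding: torch.Tensor) -> torch.Tensor:
    """Score each 3D map point against a natural-language query.

    point_features: (N, D) fused per-point embeddings (assumed CLIP-like).
    text_embedding: (D,) embedding of the query text from the same model.
    Returns: (N,) cosine similarities, usable as an open-set relevance heatmap.
    """
    pts = F.normalize(point_features, dim=-1)
    txt = F.normalize(text_embedding, dim=-1)
    return pts @ txt

# Hypothetical usage: highlight map points matching "a red chair".
# scores = query_map(point_features, clip_text_encoder("a red chair"))
# mask = scores > scores.quantile(0.95)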

gradSLAM: Automagically differentiable SLAM

1 code implementation • 23 Oct 2019 • Krishna Murthy Jatavallabhula, Soroush Saryazdi, Ganesh Iyer, Liam Paull

Blending representation learning approaches with simultaneous localization and mapping (SLAM) systems remains an open question because of the highly modular and complex nature of SLAM systems.

Open-Ended Question Answering • Representation Learning • +1
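A toy sketch of the underlying idea, classical SLAM-style operations written as differentiable ops so that gradients flow from the map error back to learned representations; this is an illustration in plain PyTorch under my own assumptions, not the gradSLAM API:

import torch

def se3_transform(points, R, t):
    """Apply a rigid-body transform; differentiable w.r.t. R, t, and points."""
    return points @ R.T + t

# Toy example: refine a translation by backpropagating through the map error.
src = torch.randn(100, 3)
tgt = src + torch.tensor([0.5, 0.0, 0.0])        # known offset, for illustration
t = torch.zeros(3, requires_grad=True)
R = torch.eye(3)

opt = torch.optim.SGD([t], lr=0.5)
for _ in range(50):
    opt.zero_grad()
    loss = (se3_transform(src, R, t) - tgt).pow(2).mean()
    loss.backward()                              # gradients flow through SLAM-style ops
    opt.step()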

Geometric Consistency for Self-Supervised End-to-End Visual Odometry

no code implementations • 11 Apr 2018 • Ganesh Iyer, J. Krishna Murthy, Gunshi Gupta, K. Madhava Krishna, Liam Paull

We show that, using a noisy teacher (which could be a standard VO pipeline) and a loss term that enforces geometric consistency of the trajectory, we can train accurate deep models for VO that do not require ground-truth labels.

Visual Odometry
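A hedged sketch of such a trajectory-consistency loss, assuming 4x4 homogeneous pose matrices; the parameterization and the plain matrix-difference penalty are my simplifications, not necessarily the paper's exact loss:

import torch

def compose(poses):
    """Chain 4x4 relative pose matrices T_{0->1}, T_{1->2}, ... into T_{0->n}."""
    out = torch.eye(4, dtype=poses[0].dtype)
    for T in poses:
        out = out @ T
    return out

def geometric_consistency_loss(pred_rel_poses, teacher_pose):
    """Penalize disagreement between the network's composed frame-to-frame
    predictions over a window and the (noisy) teacher's pose over that window."""
    return (compose(pred_rel_poses) - teacher_pose).pow(2).mean()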

CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks

2 code implementations • 22 Mar 2018 • Ganesh Iyer, R. Karnik Ram., J. Krishna Murthy, K. Madhava Krishna

During training, the network takes as input only a LiDAR point cloud, the corresponding monocular image, and the camera calibration matrix K. We do not impose direct supervision at train time (i.e., we do not directly regress to the calibration parameters).

Camera Calibration • Domain Adaptation
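A rough sketch of this kind of indirect, geometric supervision: the predicted extrinsic is applied to the LiDAR points, the points are projected through K, and the loss compares projected depth with a reference depth map rather than regressing the parameters. The projection helper, the depth target, and the loss below are illustrative placeholders, not CalibNet's actual training code:

import torch

def project(points_lidar, T, K):
    """Transform LiDAR points by extrinsic T (4x4), project with intrinsics K (3x3)."""
    pts_h = torch.cat([points_lidar, torch.ones_like(points_lidar[:, :1])], dim=1)
    pts_cam = (pts_h @ T.T)[:, :3]                  # points in the camera frame
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)     # pixel coordinates
    return uv, pts_cam[:, 2]                        # coordinates and per-point depth

def depth_consistency_loss(points_lidar, T_pred, K, depth_map):
    """Supervise the geometry of the projection, not the parameters themselves."""
    uv, z = project(points_lidar, T_pred, K)
    u = uv[:, 0].round().long().clamp(0, depth_map.shape[1] - 1)
    v = uv[:, 1].round().long().clamp(0, depth_map.shape[0] - 1)
    return (depth_map[v, u] - z).abs().mean()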
