no code implementations • ECCV 2020 • Mark Sheinin, Dinesh N. Reddy, Matthew O'Toole, Srinivasa G. Narasimhan
Thus, our system achieves high-speed, high-accuracy 2D positioning of light sources and 3D scanning of scenes.
1 code implementation • 25 Aug 2022 • Bowei Chen, Tiancheng Zhi, Martial Hebert, Srinivasa G. Narasimhan
To address these challenges, we learn a neural implicit representation using a coordinate-based MLP with single image optimization.
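The coordinate-based MLP idea can be sketched as below; this is a toy illustration only, assuming random Fourier features, a tiny synthetic grayscale "image", and hand-picked layer sizes and learning rate, none of which come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "single image": an 8x8 grayscale ramp (stand-in for a real photo).
H = W = 8
img = np.linspace(0.0, 1.0, H * W).reshape(H, W)

# Pixel coordinates normalized to [0, 1]^2.
ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([ys.ravel() / (H - 1), xs.ravel() / (W - 1)], axis=1)  # (N, 2)

# Random Fourier features help a coordinate MLP represent high frequencies.
B = rng.normal(scale=3.0, size=(2, 16))
feat = np.concatenate([np.sin(2 * np.pi * coords @ B),
                       np.cos(2 * np.pi * coords @ B)], axis=1)  # (N, 32)

# One-hidden-layer MLP, optimized on this single image by full-batch
# gradient descent on MSE (manual backprop; sizes and lr are assumptions).
W1 = rng.normal(scale=0.3, size=(32, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.3, size=(64, 1));  b2 = np.zeros(1)
target = img.ravel()[:, None]
lr = 0.05

for step in range(2000):
    h = np.maximum(feat @ W1 + b1, 0.0)        # ReLU hidden layer
    pred = h @ W2 + b2                         # predicted intensity per pixel
    err = pred - target
    # Gradients of the MSE loss w.r.t. both layers.
    gW2 = h.T @ err / len(err); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = feat.T @ dh / len(err); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - target) ** 2))
print(f"final MSE: {mse:.5f}")
```

The point of the sketch is only the structure: the network maps a pixel coordinate to an intensity and is fit to one image, so the image itself becomes the weights of the MLP.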
no code implementations • CVPR 2022 • Mark Sheinin, Dorian Chan, Matthew O'Toole, Srinivasa G. Narasimhan
Visual vibrometry is a highly useful tool for remotely capturing audio, as well as the physical properties of materials, human heart rate, and more.
1 code implementation • CVPR 2022 • N. Dinesh Reddy, Robert Tamburo, Srinivasa G. Narasimhan
Labeled real data of occlusions is scarce (even in large datasets) and synthetic data leaves a domain gap, making it hard to explicitly model and learn occlusions.
Ranked #1 on Amodal Instance Segmentation on WALT
no code implementations • CVPR 2022 • Dorian Chan, Srinivasa G. Narasimhan, Matthew O'Toole
Light curtain systems are designed for detecting the presence of objects within a user-defined 3D region of space, which has many applications across vision and robotics.
no code implementations • 8 Jul 2021 • Siddharth Ancha, Gaurav Pathak, Srinivasa G. Narasimhan, David Held
We use light curtains to estimate the safety envelope of a scene: a hypothetical surface that separates the robot from all obstacles.
no code implementations • CVPR 2021 • Yaadhav Raaj, Siddharth Ancha, Robert Tamburo, David Held, Srinivasa G. Narasimhan
Active sensing with adaptive depth sensors is a nascent field, with potential in areas such as advanced driver-assistance systems (ADAS).
no code implementations • ECCV 2020 • Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa G. Narasimhan, David Held
Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment, while being decoupled from the recognition system that processes the sensor data.
no code implementations • ECCV 2020 • Tiancheng Zhi, Christoph Lassner, Tony Tung, Carsten Stoll, Srinivasa G. Narasimhan, Minh Vo
We present TexMesh, a novel approach to reconstruct detailed human meshes with high-resolution full-body texture from RGB-D video.
no code implementations • 24 Jul 2020 • Minh Vo, Yaser Sheikh, Srinivasa G. Narasimhan
The triangulation constraint, however, is invalid for moving points captured in multiple unsynchronized videos, and bundle adjustment is not designed to estimate the temporal alignment between cameras.
no code implementations • ECCV 2018 • Jian Wang, Joseph Bartels, William Whittaker, Aswin C. Sankaranarayanan, Srinivasa G. Narasimhan
A vehicle on a road or a robot in the field does not need a full-featured 3D depth sensor to detect potential collisions or monitor its blind spot.
no code implementations • CVPR 2018 • Tiancheng Zhi, Bernardo R. Pires, Martial Hebert, Srinivasa G. Narasimhan
Often, multiple cameras are used for cross-spectral imaging, thus requiring image alignment or disparity estimation in a stereo setting.
1 code implementation • CVPR 2018 • N. Dinesh Reddy, Minh Vo, Srinivasa G. Narasimhan
In this work, we develop a framework to fuse both the single-view feature tracks and multi-view detected part locations to significantly improve the detection, localization and reconstruction of moving vehicles, even in the presence of strong occlusions.
no code implementations • CVPR 2017 • Chao Liu, Srinivasa G. Narasimhan, Artur W. Dubrawski
For macro-scale, we evaluate our method on scenes with complex 3D thin structures such as tree branches and grass.
no code implementations • CVPR 2017 • Chia-Yin Tsai, Kiriakos N. Kutulakos, Srinivasa G. Narasimhan, Aswin C. Sankaranarayanan
In this paper, we propose a new approach for NLOS imaging by studying the properties of first-returning photons from three-bounce light paths.
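As a rough numerical illustration of the first-returning-photon constraint: for a given laser spot and sensor spot on the visible wall, the arrival time of the first photon along a three-bounce path confines the hidden point to an ellipsoid with those two spots as foci. The positions below are invented, and the full method also accounts for the wall-to-device segments and detector timing:

```python
import numpy as np

C = 3e8  # speed of light, m/s

# Laser spot l and sensor spot s on the visible wall (hypothetical layout).
l = np.array([0.0, 0.0, 0.0])
s = np.array([0.5, 0.0, 0.0])
x_hidden = np.array([0.2, 0.8, 0.1])  # hidden point around the corner

# Three-bounce path: wall spot l -> hidden point -> wall spot s.
path_len = np.linalg.norm(x_hidden - l) + np.linalg.norm(s - x_hidden)
tof = path_len / C

# The first-returning photon for the pair (l, s) pins the hidden surface to
# the ellipsoid {x : |x - l| + |x - s| = c * tof} with foci l and s.
on_ellipsoid = bool(np.isclose(np.linalg.norm(x_hidden - l) +
                               np.linalg.norm(s - x_hidden), C * tof))
print(f"path length: {path_len:.3f} m, ToF: {tof * 1e9:.2f} ns")
```

Varying the illumination and sensing spots sweeps a family of such ellipsoids, whose intersection constrains the hidden geometry.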
no code implementations • CVPR 2016 • Minh Vo, Srinivasa G. Narasimhan, Yaser Sheikh
In this paper, we present a spatiotemporal bundle adjustment approach that jointly optimizes four coupled sub-problems: estimating camera intrinsics and extrinsics, triangulating static 3D points, estimating the sub-frame temporal alignment between cameras, and estimating the 3D trajectories of dynamic points.
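The temporal-alignment sub-problem can be illustrated with a minimal numerical sketch: observations of a moving point only reproject consistently once each camera's clock offset is recovered. The geometry, trajectory, and offset below are invented for illustration, and the actual method solves this jointly with the other sub-problems rather than by grid search:

```python
import numpy as np

# Ground truth: a point moving linearly in 3D, p(t) = p0 + v * t.
p0 = np.array([0.0, 0.0, 5.0])
v = np.array([0.4, 0.1, 0.0])

def project(p):
    # Simple pinhole camera at the origin looking down +z (focal length 1).
    return p[:2] / p[2]

# Camera B's clock lags world time by a true sub-frame offset (hypothetical).
true_offset = 0.3
tB = np.linspace(0.0, 1.0, 11)  # camera B frame timestamps (its own clock)
obsB = np.array([project(p0 + v * (t + true_offset)) for t in tB])

# Recover the offset that makes the reprojected trajectory agree with
# camera B's 2D observations (brute-force search over candidates).
candidates = np.linspace(0.0, 0.6, 601)
errs = [np.sum((np.array([project(p0 + v * (t + d)) for t in tB]) - obsB) ** 2)
        for d in candidates]
best = candidates[int(np.argmin(errs))]
print(f"recovered offset: {best:.3f} s")
```

With the wrong offset, the moving point's reprojections are systematically displaced along its motion direction, which is exactly why a static triangulation constraint breaks down for unsynchronized cameras.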
no code implementations • CVPR 2016 • Gyeongmin Choe, Srinivasa G. Narasimhan, In So Kweon
Near-infrared (NIR) images of most materials exhibit less texture or albedo variation, making them beneficial for vision tasks such as intrinsic image decomposition and structured-light depth estimation.