no code implementations • 27 Sep 2022 • Advaith Venkatramanan Sethuraman, Manikandasriram Srinivasan Ramanagopal, Katherine A. Skinner
Underwater imaging is a critical task performed by marine robots for a wide range of applications including aquaculture, marine infrastructure inspection, and environmental monitoring.
no code implementations • 2 Sep 2022 • Alexandra Carlson, Manikandasriram Srinivasan Ramanagopal, Nathan Tseng, Matthew Johnson-Roberson, Ram Vasudevan, Katherine A. Skinner
Recent advances in neural radiance fields (NeRFs) achieve state-of-the-art novel view synthesis and facilitate dense estimation of scene properties.
1 code implementation • 8 Jun 2020 • Manikandasriram Srinivasan Ramanagopal, Zixu Zhang, Ram Vasudevan, Matthew Johnson-Roberson
To address this problem, this paper formulates reversing the effect of thermal inertia at a single pixel as a Least Absolute Shrinkage and Selection Operator (LASSO) problem, which we solve rapidly using a quadratic programming solver.
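As a rough illustration of the idea, a LASSO objective can be rewritten as a bound-constrained quadratic program by splitting the variable into its positive and negative parts, which makes the l1 penalty linear. The sketch below is not the paper's solver: it uses SciPy's generic L-BFGS-B bound-constrained optimizer as a stand-in for a dedicated QP solver, and the function name `lasso_qp` and the toy problem are my own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def lasso_qp(A, b, lam):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via a QP reformulation.

    Split x = u - v with u, v >= 0, so |x_i| = u_i + v_i and the
    objective becomes a smooth quadratic over nonnegative variables.
    """
    n = A.shape[1]

    def obj(z):
        u, v = z[:n], z[n:]
        r = A @ (u - v) - b
        return 0.5 * r @ r + lam * z.sum()

    def grad(z):
        u, v = z[:n], z[n:]
        g = A.T @ (A @ (u - v) - b)
        # d/du = g + lam, d/dv = -g + lam
        return np.concatenate([g + lam, -g + lam])

    z0 = np.zeros(2 * n)
    res = minimize(obj, z0, jac=grad, method="L-BFGS-B",
                   bounds=[(0, None)] * (2 * n))
    z = res.x
    return z[:n] - z[n:]

# Toy check: with A = I the LASSO solution is soft-thresholding of b.
A = np.eye(3)
b = np.array([3.0, 0.1, -2.0])
x = lasso_qp(A, b, lam=1.0)  # approx [2, 0, -1]
```

For an identity design matrix the closed-form answer is elementwise soft-thresholding, sign(b)*max(|b|-lam, 0), which gives a quick sanity check on the reformulation.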
no code implementations • 7 May 2019 • Jun-ming Zhang, Manikandasriram Srinivasan Ramanagopal, Ram Vasudevan, Matthew Johnson-Roberson
An accurate depth map of the environment is critical to the safe operation of autonomous robots and vehicles.
no code implementations • 10 Sep 2018 • Wonhui Kim, Manikandasriram Srinivasan Ramanagopal, Charles Barto, Ming-Yuan Yu, Karl Rosaen, Nick Goumas, Ram Vasudevan, Matthew Johnson-Roberson
This paper presents a novel dataset titled PedX, a large-scale multimodal collection of pedestrians at complex urban intersections.
1 code implementation • 30 Jun 2017 • Manikandasriram Srinivasan Ramanagopal, Cyrus Anderson, Ram Vasudevan, Matthew Johnson-Roberson
We show that a state-of-the-art detector, tracker, and our classifier trained only on synthetic data can identify valid errors on the KITTI tracking dataset with an average precision of 0.94.
no code implementations • 22 Feb 2016 • Manikandasriram Srinivasan Ramanagopal, André Phu-Van Nguyen, Jerome Le Ny
This paper presents a strategy to guide a mobile ground robot equipped with a camera or depth sensor, in order to autonomously map the visible part of a bounded three-dimensional structure.