1 code implementation • 23 Feb 2017 • Jie Li, Katherine A. Skinner, Ryan M. Eustice, Matthew Johnson-Roberson
We show that, because water column effects in underwater environments are inherently depth-dependent, our end-to-end network implicitly learns a coarse depth estimate of the underwater scene from monocular underwater images.
no code implementations • 15 Dec 2017 • Ross Hartley, Josh Mangelson, Lu Gan, Maani Ghaffari Jadidi, Jeffrey M. Walls, Ryan M. Eustice, Jessy W. Grizzle
We introduce forward kinematic factors and preintegrated contact factors into a factor graph framework that can be incrementally solved in real-time.
Robotics
no code implementations • 20 Mar 2018 • Ross Hartley, Maani Ghaffari Jadidi, Lu Gan, Jiunn-Kai Huang, Jessy W. Grizzle, Ryan M. Eustice
The factor graph framework is a convenient modeling technique for robotic state estimation, in which states are represented as nodes and measurements as factors.
Robotics
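The node/factor description above can be made concrete with a toy problem. The sketch below is my own illustration (not the papers' implementation): a 1-D robot with three poses, a prior factor, and two odometry factors, where the MAP estimate under Gaussian noise reduces to linear least squares.

```python
# Minimal factor-graph sketch: states are nodes, measurements are factors.
# All factors here are already linear, so one least-squares solve suffices.
import numpy as np

# Each row of A is one factor's (linear) residual on the states [x0, x1, x2].
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior factor: x0 = 0
    [-1.0,  1.0, 0.0],   # odometry factor: x1 - x0 = 1.0
    [ 0.0, -1.0, 1.0],   # odometry factor: x2 - x1 = 1.1 (noisy)
])
b = np.array([0.0, 1.0, 1.1])

# MAP estimate under Gaussian noise = linear least squares over all factors.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # ≈ [0.0, 1.0, 2.1]
```

Incremental solvers such as iSAM2 relinearize and re-solve this kind of system efficiently as new factors arrive, which is what makes the framework attractive for real-time estimation.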
2 code implementations • 26 May 2018 • Ross Hartley, Maani Ghaffari Jadidi, Jessy W. Grizzle, Ryan M. Eustice
Building on Barrau and Bonnabel's theory of invariant observer design, in particular the Invariant EKF (InEKF), we show that the error dynamics of the point contact-inertial system follow a log-linear autonomous differential equation; hence, the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory.
Robotics
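The log-linear property invoked above can be stated compactly. The following is a summary of the standard result (my paraphrase of Barrau and Bonnabel's theorem, not a derivation from this paper):

```latex
% Right-invariant error between estimate \bar{X}_t and true state X_t,
% with \xi_t its Lie-algebra coordinates:
\eta_t = \bar{X}_t X_t^{-1}, \qquad \xi_t = \log(\eta_t)^{\vee}.
% Log-linear property: the error obeys an exactly linear ODE,
\frac{d}{dt}\,\xi_t = A_t\,\xi_t \;\Longrightarrow\; \xi_t = \Phi(t, t_0)\,\xi_{t_0},
% where A_t (hence \Phi) does not depend on the state estimate,
% so convergence holds independently of the trajectory.
```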
1 code implementation • 3 Apr 2019 • Maani Ghaffari, William Clark, Anthony Bloch, Ryan M. Eustice, Jessy W. Grizzle
This paper reports on a novel formulation and evaluation of visual odometry from RGB-D images.
1 code implementation • 19 Apr 2019 • Ross Hartley, Maani Ghaffari, Ryan M. Eustice, Jessy W. Grizzle
This filter combines contact-inertial dynamics with forward kinematic corrections to estimate pose and velocity along with all current contact points.
Robotics
2 code implementations • 10 Sep 2019 • Lu Gan, Ray Zhang, Jessy W. Grizzle, Ryan M. Eustice, Maani Ghaffari
This paper develops a Bayesian continuous 3D semantic occupancy map from noisy point cloud measurements.
Robotics
1 code implementation • 1 Oct 2019 • Tzu-Yuan Lin, William Clark, Ryan M. Eustice, Jessy W. Grizzle, Anthony Bloch, Maani Ghaffari
In this paper, we extend the recently developed continuous visual odometry framework for RGB-D cameras to an adaptive framework via online hyperparameter learning.
1 code implementation • 2 Dec 2019 • Xi Lin, Dingyi Sun, Tzu-Yuan Lin, Ryan M. Eustice, Maani Ghaffari
The experimental evaluations using publicly available RGB-D benchmarks show that the developed keyframe selection technique using continuous visual odometry outperforms its robust dense (and direct) visual odometry equivalent.
1 code implementation • 2 Feb 2020 • Sahib Singh Dhanjal, Maani Ghaffari, Ryan M. Eustice
The proposed algorithm can globally localize and track a smartphone (or robot) with an a priori unknown location, given a semi-accurate prior map (error within 0.8 m) of the WiFi Access Points (APs).
Robotics • Signal Processing
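As a generic sketch of the underlying idea (the path-loss parameters, AP positions, and readings below are invented, and the paper's actual algorithm may differ), a log-distance path-loss model turns received signal strength into range estimates to the mapped APs, after which position follows from nonlinear least squares:

```python
# RSSI localization sketch: RSSI -> range via log-distance path loss,
# then Gauss-Newton trilateration against the AP map. Illustrative only.
import numpy as np

P0, n = -40.0, 2.0                        # assumed RSSI at 1 m, path-loss exponent

def rssi_to_range(rssi):
    # invert the model rssi = P0 - 10 * n * log10(d)
    return 10 ** ((P0 - rssi) / (10 * n))

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # semi-accurate AP map
rssi = np.array([-53.9794, -58.1303, -56.5321])         # synthetic readings from (3, 4)
ranges = rssi_to_range(rssi)

x = np.array([5.0, 5.0])                  # initial position guess
for _ in range(20):                       # Gauss-Newton on range residuals
    diff = x - aps
    dist = np.linalg.norm(diff, axis=1)
    J = diff / dist[:, None]              # Jacobian of the range function
    x = x - np.linalg.lstsq(J, dist - ranges, rcond=None)[0]
# x converges near the true position (3, 4) under this model
```

In practice RSSI is far noisier than this, which is why the paper's semi-accurate AP map and filtering over time matter.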
2 code implementations • 21 Mar 2020 • Minghan Zhu, Maani Ghaffari, Yuanxin Zhong, Pingping Lu, Zhong Cao, Ryan M. Eustice, Huei Peng
In contrast to the current point-to-point loss evaluation approach, the proposed 3D loss treats point clouds as continuous objects; it therefore compensates for the lack of dense ground-truth depth caused by the sparsity of LiDAR measurements.
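One way to see what "treating point clouds as continuous objects" buys is an RKHS-style loss; the specific kernel and form below are my assumption for illustration, not the paper's exact loss. Each cloud is embedded as a sum of RBF kernels and the loss is the squared distance between the two resulting functions, so a sparse LiDAR cloud still supervises a dense prediction without point-to-point correspondences:

```python
# MMD-style "continuous" loss between point clouds (illustrative sketch).
import numpy as np

def rbf_gram(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def continuous_loss(P, Q, ell=0.5):
    """||f_P - f_Q||^2 in the RKHS induced by the RBF kernel."""
    return (rbf_gram(P, P, ell).mean()
            - 2 * rbf_gram(P, Q, ell).mean()
            + rbf_gram(Q, Q, ell).mean())

rng = np.random.default_rng(0)
pred = rng.random((100, 3))           # dense predicted points
lidar = pred[::10] + 0.01             # sparse, nearly aligned "LiDAR" subset
shifted = pred + 1.0                  # clearly misaligned cloud
loss_near = continuous_loss(pred, lidar)
loss_far = continuous_loss(pred, shifted)
# the sparse-but-aligned cloud incurs a much smaller loss than the shifted one
```

Note the sparse cloud needs no per-point match in the dense prediction: agreement is measured between the two kernel-smoothed functions.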
1 code implementation • 10 Nov 2020 • Ray Zhang, Tzu-Yuan Lin, Chien Erh Lin, Steven A. Parkison, William Clark, Jessy W. Grizzle, Ryan M. Eustice, Maani Ghaffari
This paper reports on a novel nonparametric rigid point cloud registration framework that jointly integrates geometric and semantic measurements such as color or semantic labels into the alignment process and does not require explicit data association.
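The correspondence-free idea can be sketched as maximizing an RKHS inner product in which geometry and appearance enter through a product of kernels; the kernel choices, length scales, and translation-only alignment below are my simplifications, not the authors' implementation:

```python
# Correspondence-free registration sketch: score = <f_X, f_Z> in an RKHS
# whose kernel is a product of a geometric and a color/semantic kernel.
import numpy as np

def alignment(X, Z, cX, cZ, t, ell=0.3, ell_c=0.2):
    Zt = Z + t                                          # translate cloud Z
    dg = ((X[:, None, :] - Zt[None, :, :]) ** 2).sum(-1)
    dc = ((cX[:, None, :] - cZ[None, :, :]) ** 2).sum(-1)
    # product kernel: pairs only reinforce alignment when both their
    # positions AND their colors/labels agree -- no explicit matching
    return (np.exp(-dg / (2 * ell**2)) * np.exp(-dc / (2 * ell_c**2))).sum()

rng = np.random.default_rng(0)
X = rng.random((50, 3))
colors = rng.random((50, 3))                            # per-point appearance
Z = X - 0.4                                             # true offset t* = 0.4
score_bad = alignment(X, Z, colors, colors, t=np.zeros(3))
score_good = alignment(X, Z, colors, colors, t=np.full(3, 0.4))
# recovering the true offset raises the alignment score
```

A full method would maximize this score over SE(3) (rotation and translation); the sketch fixes rotation to keep the objective's structure visible.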