Search Results for author: Ryan M. Eustice

Found 12 papers, 10 papers with code

WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images

1 code implementation • 23 Feb 2017 • Jie Li, Katherine A. Skinner, Ryan M. Eustice, Matthew Johnson-Roberson

We show that, due to the depth-dependent water column effects inherent to underwater environments, our end-to-end network implicitly learns a coarse depth estimate of the underwater scene from monocular underwater images.

Generative Adversarial Network
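
As a rough illustration of the adversarial setup behind such color-correction work, the sketch below shows one generic image-to-image GAN training step in PyTorch. The Generator and Discriminator modules are hypothetical stand-ins, not WaterGAN's actual architecture, and the tensors are random placeholders.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(4, 3, 64, 64)  # placeholder batch of real images
src = torch.rand(4, 3, 64, 64)   # placeholder batch of source images

# Discriminator step: score real images high, generated images low.
fake = G(src).detach()
loss_d = bce(D(real), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: update G so its outputs fool D.
loss_g = bce(D(G(src)), torch.ones(4, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```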

Legged Robot State-Estimation Through Combined Forward Kinematic and Preintegrated Contact Factors

no code implementations • 15 Dec 2017 • Ross Hartley, Josh Mangelson, Lu Gan, Maani Ghaffari Jadidi, Jeffrey M. Walls, Ryan M. Eustice, Jessy W. Grizzle

We introduce forward kinematic factors and preintegrated contact factors into a factor graph framework that can be incrementally solved in real-time.

Robotics
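
A minimal sketch of the incremental-smoothing idea, using GTSAM's iSAM2 from Python. Here BetweenFactorPose3 stands in for the paper's custom forward-kinematic and preintegrated-contact factors; the poses and noise values are made up for illustration.

```python
import numpy as np
import gtsam

isam = gtsam.ISAM2()
noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))

# Prior on the first pose.
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
graph.add(gtsam.PriorFactorPose3(gtsam.symbol('x', 0), gtsam.Pose3(), noise))
values.insert(gtsam.symbol('x', 0), gtsam.Pose3())
isam.update(graph, values)

# Incrementally add one relative-motion factor per step; the real system
# would add forward-kinematic and contact factors here as well.
delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.1, 0.0, 0.0))
for k in range(1, 5):
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    prev = isam.calculateEstimate().atPose3(gtsam.symbol('x', k - 1))
    graph.add(gtsam.BetweenFactorPose3(gtsam.symbol('x', k - 1),
                                       gtsam.symbol('x', k), delta, noise))
    values.insert(gtsam.symbol('x', k), prev.compose(delta))
    isam.update(graph, values)  # incremental solve, amenable to real time

print(isam.calculateEstimate().atPose3(gtsam.symbol('x', 4)))
```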

Hybrid Contact Preintegration for Visual-Inertial-Contact State Estimation Using Factor Graphs

no code implementations • 20 Mar 2018 • Ross Hartley, Maani Ghaffari Jadidi, Lu Gan, Jiunn-Kai Huang, Jessy W. Grizzle, Ryan M. Eustice

The factor graph framework is a convenient modeling technique for robotic state estimation where states are represented as nodes, and measurements are modeled as factors.

Robotics
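
The abstraction described above (states as nodes, measurements as factors) can be made concrete with a toy, library-free Python sketch; all names here are hypothetical, and a real system would use a solver such as GTSAM.

```python
from dataclasses import dataclass, field

@dataclass
class FactorGraph:
    nodes: dict = field(default_factory=dict)    # state id -> current estimate
    factors: list = field(default_factory=list)  # (node ids, residual fn)

    def add_node(self, nid, initial):
        self.nodes[nid] = initial

    def add_factor(self, nids, residual):
        self.factors.append((nids, residual))

    def error(self):
        # Total squared residual; a solver would minimize this over all nodes.
        return sum(residual(*(self.nodes[i] for i in nids)) ** 2
                   for nids, residual in self.factors)

g = FactorGraph()
g.add_node('x0', 0.0)
g.add_node('x1', 0.0)
g.add_factor(('x0',), lambda x0: x0 - 0.0)                  # prior factor
g.add_factor(('x0', 'x1'), lambda x0, x1: (x1 - x0) - 1.0)  # odometry factor
print(g.error())  # 1.0 before optimization
```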

Contact-Aided Invariant Extended Kalman Filtering for Legged Robot State Estimation

2 code implementations • 26 May 2018 • Ross Hartley, Maani Ghaffari Jadidi, Jessy W. Grizzle, Ryan M. Eustice

On the basis of the theory of invariant observer design by Barrau and Bonnabel, and in particular, the Invariant EKF (InEKF), we show that the error dynamics of the point contact-inertial system follows a log-linear autonomous differential equation; hence, the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory.

Robotics
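
A toy attitude-only illustration of the log-linear property: with a left-invariant error definition on SO(3) and gyro-driven dynamics, the linearized error transition depends only on the measured angular velocity, not on the state estimate. This is a sketch of that single property, not the paper's full contact-aided filter, and all constants are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

dt = 0.01
omega = np.array([0.1, -0.2, 0.05])  # gyro reading (rad/s)
Q = 1e-4 * np.eye(3)                 # gyro noise covariance

R = np.eye(3)                        # attitude estimate
P = 1e-3 * np.eye(3)                 # error covariance in log coordinates

# Propagation step.
R = R @ expm(skew(omega) * dt)       # mean propagates on the group
Phi = expm(-skew(omega) * dt)        # log-linear error transition
P = Phi @ P @ Phi.T + Q * dt         # covariance, independent of trajectory
```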

Contact-Aided Invariant Extended Kalman Filtering for Robot State Estimation

1 code implementation • 19 Apr 2019 • Ross Hartley, Maani Ghaffari, Ryan M. Eustice, Jessy W. Grizzle

This filter combines contact-inertial dynamics with forward kinematic corrections to estimate pose and velocity along with all current contact points.

Robotics
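
A simplified sketch of the forward-kinematic correction idea: leg encoders give the contact point in the body frame, h(x) = R^T (d - p), which is linear in the body position p and contact position d once attitude R is fixed, so a standard Kalman update applies. The paper's actual correction is an invariant update on the full matrix Lie group state; everything below is a toy stand-in with made-up values.

```python
import numpy as np

R = np.eye(3)                     # attitude (assumed known here)
x = np.zeros(6)                   # state: [body position p, contact point d]
P = 0.1 * np.eye(6)               # state covariance
N = 1e-3 * np.eye(3)              # encoder/kinematic measurement noise

z = np.array([0.3, 0.0, -0.5])    # contact point from forward kinematics

H = np.hstack([-R.T, R.T])        # Jacobian of h(x) = R^T (d - p)
y = z - H @ x                     # innovation
S = H @ P @ H.T + N
K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
x = x + K @ y
P = (np.eye(6) - K @ H) @ P
```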

Bayesian Spatial Kernel Smoothing for Scalable Dense Semantic Mapping

2 code implementations • 10 Sep 2019 • Lu Gan, Ray Zhang, Jessy W. Grizzle, Ryan M. Eustice, Maani Ghaffari

This paper develops a Bayesian continuous 3D semantic occupancy map from noisy point cloud measurements.

Robotics
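
A hedged sketch of the kernel-smoothing idea: each map cell keeps per-class Dirichlet pseudo-counts that are incremented by kernel-weighted contributions from nearby labeled points. The kernel form and all constants here are illustrative, not the paper's.

```python
import numpy as np

def rbf(d, length=0.5):
    return np.exp(-0.5 * (d / length) ** 2)

n_classes = 3
points = np.array([[0.1, 0.0, 0.0], [0.4, 0.1, 0.0], [2.0, 0.0, 0.0]])
labels = np.array([0, 0, 1])            # noisy semantic labels per point

cell = np.zeros(3)                      # query cell center
alpha = np.full(n_classes, 0.1)         # Dirichlet prior concentrations

for p, y in zip(points, labels):
    w = rbf(np.linalg.norm(p - cell))
    alpha[y] += w                       # kernel-weighted pseudo-count

posterior_mean = alpha / alpha.sum()    # per-class probability at the cell
print(posterior_mean)
```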

Adaptive Continuous Visual Odometry from RGB-D Images

1 code implementation • 1 Oct 2019 • Tzu-Yuan Lin, William Clark, Ryan M. Eustice, Jessy W. Grizzle, Anthony Bloch, Maani Ghaffari

In this paper, we extend the recently developed continuous visual odometry framework for RGB-D cameras to an adaptive framework via online hyperparameter learning.

Visual Odometry
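
As a generic stand-in for online hyperparameter learning (not the paper's actual objective), the sketch below selects an RBF kernel length-scale by maximizing a Gaussian-process log marginal likelihood over 1-D data; an online system would re-run such an update as new data arrives.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 30)
y = np.sin(6 * x) + 0.1 * np.random.randn(30)

def log_marginal(ell, noise=0.1):
    """GP log marginal likelihood under an RBF kernel (up to a constant)."""
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)
    K += noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.log(np.diag(L)).sum()

ells = np.logspace(-2, 0.5, 25)
ell = max(ells, key=log_marginal)  # re-run per frame for online adaptation
print(ell)
```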

A Keyframe-based Continuous Visual SLAM for RGB-D Cameras via Nonparametric Joint Geometric and Appearance Representation

1 code implementation • 2 Dec 2019 • Xi Lin, Dingyi Sun, Tzu-Yuan Lin, Ryan M. Eustice, Maani Ghaffari

The experimental evaluations using publicly available RGB-D benchmarks show that the developed keyframe selection technique using continuous visual odometry outperforms its robust dense (and direct) visual odometry equivalent.

Visual Odometry
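
A simplified stand-in for keyframe selection: promote the current frame to a keyframe when the estimated relative motion from the last keyframe exceeds a translation or rotation threshold. The paper's criterion is built on its continuous joint geometric/appearance representation; the rule and thresholds below are illustrative only.

```python
import numpy as np

def is_new_keyframe(T_rel, trans_thresh=0.3, rot_thresh=np.deg2rad(15)):
    """T_rel: 4x4 relative pose from the last keyframe (from odometry)."""
    trans = np.linalg.norm(T_rel[:3, 3])
    # Rotation angle recovered from the trace of the rotation block.
    cos_theta = np.clip((np.trace(T_rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return trans > trans_thresh or np.arccos(cos_theta) > rot_thresh
```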

DeepLocNet: Deep Observation Classification and Ranging Bias Regression for Radio Positioning Systems

1 code implementation • 2 Feb 2020 • Sahib Singh Dhanjal, Maani Ghaffari, Ryan M. Eustice

The proposed algorithm can globally localize and track a smartphone (or robot) with an a priori unknown location, given a semi-accurate prior map (error within 0.8 m) of the WiFi Access Points (APs).

Robotics • Signal Processing
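
To show how such outputs could be consumed downstream (an assumption, not the paper's exact pipeline), the sketch below corrects AP ranges with a predicted bias and computes a position fix by linearized least-squares multilateration.

```python
import numpy as np

aps = np.array([[0., 0., 2.], [10., 0., 2.], [0., 10., 2.], [10., 10., 2.]])
raw_ranges = np.array([7.2, 8.1, 8.3, 9.0])
predicted_bias = np.array([0.4, 0.6, 0.5, 0.7])  # assumed network output
r = raw_ranges - predicted_bias                  # bias-corrected ranges

# Subtract the first equation to linearize ||x - a_i||^2 = r_i^2.
A = 2 * (aps[1:] - aps[0])
b = (r[0] ** 2 - r[1:] ** 2) + (aps[1:] ** 2).sum(1) - (aps[0] ** 2).sum()
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # estimated device position
```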

Monocular Depth Prediction through Continuous 3D Loss

2 code implementations • 21 Mar 2020 • Minghan Zhu, Maani Ghaffari, Yuanxin Zhong, Pingping Lu, Zhong Cao, Ryan M. Eustice, Huei Peng

In contrast to the current point-to-point loss evaluation approach, the proposed 3D loss treats point clouds as continuous objects; it therefore compensates for the lack of dense ground-truth depth caused by the sparsity of LiDAR measurements.

Depth Estimation • Depth Prediction
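
A hedged sketch of a kernelized point-cloud loss in this spirit: instead of matching each predicted point to a single LiDAR return, the two clouds are correlated through an RBF kernel, so sparse ground truth still constrains nearby predictions. The kernel choice and bandwidth are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def continuous_3d_loss(pred_pts, lidar_pts, sigma=0.5):
    """pred_pts: (N,3) from predicted depth; lidar_pts: (M,3) sparse returns."""
    d2 = torch.cdist(pred_pts, lidar_pts) ** 2
    return -torch.exp(-d2 / (2 * sigma ** 2)).sum()

pred = torch.rand(100, 3, requires_grad=True)
lidar = torch.rand(20, 3)
loss = continuous_3d_loss(pred, lidar)
loss.backward()  # gradients flow to every predicted point near a return
```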

A New Framework for Registration of Semantic Point Clouds from Stereo and RGB-D Cameras

1 code implementation • 10 Nov 2020 • Ray Zhang, Tzu-Yuan Lin, Chien Erh Lin, Steven A. Parkison, William Clark, Jessy W. Grizzle, Ryan M. Eustice, Maani Ghaffari

This paper reports on a novel nonparametric rigid point cloud registration framework that jointly integrates geometric and semantic measurements such as color or semantic labels into the alignment process and does not require explicit data association.

Point Cloud Registration
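
A sketch of an association-free objective in the same spirit: score a candidate rigid transform by geometric and semantic kernel similarities summed over all point pairs, with no explicit correspondences. The kernels, one-hot labels, and constants below are illustrative assumptions; a registration method would maximize this score over R and t.

```python
import numpy as np

def registration_score(R, t, X, Y, fx, fy, ell=0.3):
    """Joint geometric/semantic kernel correlation; no data association."""
    Xw = X @ R.T + t
    d2 = ((Xw[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    geo = np.exp(-d2 / (2 * ell ** 2))           # geometric RBF kernel
    sem = fx @ fy.T                              # label similarity (one-hot)
    return (geo * sem).sum()

X = np.random.rand(40, 3)
Y = X + 0.05                                     # translated copy of X
labels = np.eye(3)[np.random.randint(0, 3, 40)]  # one-hot semantic labels
print(registration_score(np.eye(3), np.full(3, 0.05), X, Y, labels, labels))
```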
