Search Results for author: Sadegh Rabiee

Found 5 papers, 0 papers with code

High-Speed Accurate Robot Control using Learned Forward Kinodynamics and Non-linear Least Squares Optimization

no code implementations16 Jun 2022 Pranav Atreya, Haresh Karnan, Kavan Singh Sikand, Xuesu Xiao, Sadegh Rabiee, Joydeep Biswas

However, these approaches are limited to control problems that amount to following pre-computed, kinodynamically feasible trajectories.
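A minimal sketch of the idea in the title, not the paper's code: a learned forward kinodynamic model predicts the next state from the current state and a candidate control, and a non-linear least-squares solve recovers the control that best reaches a desired state. The `forward_model` below is a hand-written placeholder standing in for the learned model, and the state/control layout is an assumption.

```python
# Sketch: invert a (stand-in) learned forward kinodynamic model with
# non-linear least squares to find the control reaching a target state.
import numpy as np
from scipy.optimize import least_squares

def forward_model(state, control):
    # Placeholder unicycle dynamics; in the paper this is a learned model.
    x, y, theta = state
    v, omega = control
    dt = 0.05
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + omega * dt])

def control_for_target(state, target_state, u0=np.zeros(2)):
    # Residual between the predicted next state and the desired one;
    # least_squares searches for the control minimizing it.
    def residual(u):
        return forward_model(state, u) - target_state
    return least_squares(residual, u0).x

if __name__ == "__main__":
    state = np.array([0.0, 0.0, 0.0])
    target = np.array([0.10, 0.01, 0.05])
    print(control_for_target(state, target))
```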

VI-IKD: High-Speed Accurate Off-Road Navigation using Learned Visual-Inertial Inverse Kinodynamics

no code implementations30 Mar 2022 Haresh Karnan, Kavan Singh Sikand, Pranav Atreya, Sadegh Rabiee, Xuesu Xiao, Garrett Warnell, Peter Stone, Joydeep Biswas

In this paper, we hypothesize that to enable accurate high-speed off-road navigation using a learned IKD model, in addition to inertial information from the past, one must also anticipate the kinodynamic interactions of the vehicle with the terrain in the future.
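A minimal sketch of that hypothesis, with an assumed architecture rather than the authors' implementation: an inverse kinodynamics model that conditions both on a history of inertial readings and on an image patch of the terrain the vehicle is about to traverse, and outputs the control to execute for a desired velocity.

```python
# Sketch: visual-inertial inverse kinodynamics model (assumed architecture).
import torch
import torch.nn as nn

class VisualInertialIKD(nn.Module):
    def __init__(self, imu_dim=6, history=20, control_dim=2):
        super().__init__()
        # Encode the patch of upcoming terrain (visual anticipation).
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Encode the recent inertial history.
        self.inertial = nn.Sequential(
            nn.Flatten(), nn.Linear(imu_dim * history, 64), nn.ReLU())
        # Map joint features plus desired velocity to the control to execute.
        self.head = nn.Sequential(
            nn.Linear(32 + 64 + control_dim, 64), nn.ReLU(),
            nn.Linear(64, control_dim))

    def forward(self, terrain_patch, imu_history, desired_vel):
        z = torch.cat([self.visual(terrain_patch),
                       self.inertial(imu_history),
                       desired_vel], dim=-1)
        return self.head(z)

model = VisualInertialIKD()
out = model(torch.randn(1, 3, 64, 64), torch.randn(1, 20, 6), torch.randn(1, 2))
print(out.shape)  # torch.Size([1, 2])
```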

Competence-Aware Path Planning via Introspective Perception

no code implementations28 Sep 2021 Sadegh Rabiee, Connor Basich, Kyle Hollins Wray, Shlomo Zilberstein, Joydeep Biswas

First, perception errors are learned in a model-free and location-agnostic setting via introspective perception prior to deployment in novel environments.
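A minimal sketch of how a learned perception-error model might be folded into planning, not the paper's formulation: a predictor of perception-failure probability per edge (the introspective-perception part) is combined with traversal length so the planner trades path length against the risk of vision failures. The cost form, penalty weight, and graph layout are assumptions.

```python
# Sketch: competence-aware shortest path with perception-failure risk in the cost.
import heapq
import math

def edge_cost(length_m, p_failure, failure_penalty=50.0):
    # Traversal cost plus a penalty weighted by predicted failure probability.
    return length_m + failure_penalty * p_failure

def plan(graph, start, goal, p_failure):
    # graph: {node: [(neighbor, length_m), ...]}, p_failure: {(u, v): prob}
    frontier, seen = [(0.0, start, [start])], set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, length in graph.get(node, []):
            if nbr not in seen:
                c = cost + edge_cost(length, p_failure.get((node, nbr), 0.0))
                heapq.heappush(frontier, (c, nbr, path + [nbr]))
    return math.inf, []

graph = {"A": [("B", 1.0), ("C", 3.0)], "B": [("G", 1.0)], "C": [("G", 1.0)]}
p_fail = {("A", "B"): 0.4, ("B", "G"): 0.4}  # the short route is vision-risky
print(plan(graph, "A", "G", p_fail))         # prefers A->C->G despite its length
```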

Visual Representation Learning for Preference-Aware Path Planning

no code implementations18 Sep 2021 Kavan Singh Sikand, Sadegh Rabiee, Adam Uccello, Xuesu Xiao, Garrett Warnell, Joydeep Biswas

We introduce Visual Representation Learning for Preference-Aware Path Planning (VRL-PAP), an alternative approach that overcomes all three limitations: VRL-PAP leverages unlabeled human demonstrations of navigation to autonomously generate triplets for learning visual representations of terrain that are viewpoint invariant and encode terrain types in a continuous representation space.

Representation Learning, Semantic Segmentation
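A minimal sketch of the triplet-learning step described above, with a stand-in encoder and synthetic data rather than the released VRL-PAP model: terrain patches sampled from demonstrations form (anchor, positive, negative) triplets, and the encoder is trained so patches of the same terrain type embed close together in a continuous representation space.

```python
# Sketch: triplet training of a terrain-patch encoder (stand-in model and data).
import torch
import torch.nn as nn

encoder = nn.Sequential(                 # small patch embedder
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 8))                    # 8-D continuous terrain representation

loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One optimization step on a batch of synthetic triplets.
anchor, positive, negative = (torch.randn(4, 3, 64, 64) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```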

IV-SLAM: Introspective Vision for Simultaneous Localization and Mapping

no code implementations6 Aug 2020 Sadegh Rabiee, Joydeep Biswas

Existing solutions to visual simultaneous localization and mapping (V-SLAM) assume that errors in feature extraction and matching are independent and identically distributed (i.i.d.), but this assumption is known not to hold: features extracted from low-contrast regions of images exhibit wider error distributions than features from sharp corners.

Simultaneous Localization and Mapping
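A minimal sketch of that idea, not the IV-SLAM implementation: rather than treating all feature matches as equally reliable, each reprojection residual is scaled by a per-feature reliability predicted from image context, so features from low-contrast regions contribute less to the pose estimate. `predict_reliability` is a placeholder for the learned introspection model.

```python
# Sketch: down-weighting reprojection residuals by predicted feature reliability.
import numpy as np

def predict_reliability(patch_contrast):
    # Placeholder for the learned introspection model.
    return np.clip(patch_contrast, 0.05, 1.0)

def weighted_pose_residuals(reproj_errors, patch_contrasts):
    # reproj_errors: (N, 2) pixel residuals for N feature matches.
    w = predict_reliability(np.asarray(patch_contrasts))     # (N,) weights
    return reproj_errors * w[:, None]                        # down-weighted residuals

errors = np.array([[0.5, -0.3], [4.0, 3.5]])   # second match is from a flat region
contrasts = [0.9, 0.1]
print(weighted_pose_residuals(errors, contrasts))
```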
