In this paper we introduce plan2vec, an unsupervised representation learning approach inspired by reinforcement learning.
We present a reinforcement learning (RL) framework to synthesize a control policy from a given linear temporal logic (LTL) specification in an unknown stochastic environment that can be modeled as a Markov Decision Process (MDP).
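To make the setting concrete, the following minimal sketch (all names and numbers are hypothetical, not the paper's algorithm) runs tabular Q-learning on a toy four-state MDP in which reaching a goal state stands in for satisfying a simple reachability specification such as "eventually goal" (F goal). Handling a full LTL formula would typically require composing the MDP with an automaton for the specification, which this toy omits.

```python
import random

# Hypothetical toy MDP: states 0..3, actions 0 (left) / 1 (right),
# stochastic transitions; reaching state 3 stands in for satisfying
# a simple LTL reachability spec such as "F goal".
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(s, a):
    """Move in the intended direction with prob. 0.8, else stay put."""
    if random.random() < 0.8:
        s = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s == GOAL else 0.0   # reward encodes spec satisfaction
    return s, r, s == GOAL

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(N_ACTIONS) if random.random() < eps \
            else max(range(N_ACTIONS), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # Standard Q-learning backup
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print("greedy policy per state:", policy)
```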
Visualizing these minima is important because it lets a user guide, prevent, or predict motions.
Many problems in computer vision and robotics can be phrased as non-linear least squares optimization problems represented by factor graphs, for example, simultaneous localization and mapping (SLAM), structure from motion (SfM), motion planning, and control.
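As a minimal illustration of the factor-graph view, the sketch below (with made-up measurements, not any particular library's API) encodes a 1D pose graph with a prior, two odometry factors, and one loop closure as a stacked residual vector and solves it with SciPy's non-linear least-squares solver. Production SLAM or SfM systems would typically use dedicated factor-graph libraries such as GTSAM or Ceres instead.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal 1D pose-graph sketch (hypothetical data): three scalar poses
# x0, x1, x2 connected by a prior factor and odometry/loop-closure
# factors. Each factor contributes one residual; stacking them and
# minimizing the sum of squares is the non-linear least-squares view
# of factor-graph inference.
def residuals(x):
    x0, x1, x2 = x
    return np.array([
        x0 - 0.0,          # prior factor: anchor x0 at 0
        (x1 - x0) - 1.0,   # odometry factor: measured step of 1.0
        (x2 - x1) - 1.0,   # odometry factor: measured step of 1.0
        (x2 - x0) - 2.1,   # loop-closure factor: slightly inconsistent
    ])

result = least_squares(residuals, x0=np.zeros(3))  # Gauss-Newton-style solve
print("optimized poses:", result.x)
```

The loop-closure measurement deliberately disagrees with the chained odometry, so the solver spreads the 0.1 inconsistency across the factors, which is exactly the behavior a factor-graph formulation is meant to capture.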
We validate MPNet against gold-standard and state-of-the-art planning methods on a variety of problems, from 2D to 7D robot configuration spaces, in challenging and cluttered environments. The results show significantly and consistently stronger performance, motivating neural planning in general as a modern strategy for solving motion planning problems efficiently.
We introduce BayesSim, a framework for robotics simulations that allows a full Bayesian treatment of the simulator's parameters.
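As a rough illustration of what a Bayesian treatment of simulator parameters means, the sketch below runs rejection ABC on a hypothetical one-parameter simulator. BayesSim itself uses likelihood-free inference with learned mixture-density posteriors rather than rejection sampling, so this is only a stand-in for the underlying idea of a posterior over parameters given real observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "simulator": a measurement whose value depends on an
# unknown friction-like parameter theta, plus observation noise.
def simulate(theta, n=20):
    return 10.0 * np.exp(-theta) + rng.normal(0.0, 0.1, size=n)

true_theta = 0.5
observed = simulate(true_theta)   # stand-in for real-world data

# Rejection ABC: sample theta from the prior, keep samples whose
# simulated summary statistic lands close to the observed one.
prior_samples = rng.uniform(0.0, 2.0, size=20000)
obs_stat = observed.mean()
accepted = [th for th in prior_samples
            if abs(simulate(th).mean() - obs_stat) < 0.1]

posterior = np.array(accepted)
print(f"posterior mean {posterior.mean():.3f} "
      f"+/- {posterior.std():.3f} (true {true_theta})")
```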
In this paper, we focus on obtaining 2D and 3D labels, as well as track IDs for objects on the road with the help of a novel 3D Bounding Box Annotation Toolbox (3D BAT).
Traditional motion planning methods suffer from several drawbacks in terms of optimality, efficiency, and generalization capability.