Robot Navigation

10 papers with code · Robots

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos

15 Nov 2018 · tensorflow/models

In this work we address unsupervised learning of scene depth and robot ego-motion where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive and most ubiquitous sensor for robotics. We propose a novel approach which produces higher quality results, is able to model moving objects and is shown to transfer across data domains, e.g. from outdoors to indoor scenes.

DEPTH ESTIMATION ROBOT NAVIGATION
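
As a rough illustration of how monocular videos can supervise depth and ego-motion, the sketch below implements a generic view-synthesis (photometric reprojection) loss: the predicted depth and pose warp a neighbouring frame into the target view, and the pixel difference becomes the training signal. It assumes known intrinsics K and uses nearest-neighbour sampling for brevity; it is not the authors' TensorFlow implementation.

```python
# Minimal sketch of a photometric reprojection loss (assumptions: known
# intrinsics K, predicted depth for the target frame, predicted relative
# pose T from target to source frame). Not the authors' implementation.
import numpy as np

def photometric_loss(target, source, depth, K, T):
    """Warp `source` into the target view and compare it to `target`.

    target, source: (H, W) grayscale images
    depth:          (H, W) predicted depth of the target frame
    K:              (3, 3) camera intrinsics
    T:              (4, 4) predicted target->source rigid transform
    """
    H, W = target.shape
    # Pixel grid in homogeneous coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # (3, H*W)

    # Back-project to 3D camera coordinates of the target frame.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)             # (3, H*W)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])            # (4, H*W)

    # Move points into the source frame and project them back to pixels.
    cam_src = (T @ cam_h)[:3]
    proj = K @ cam_src
    u_src = np.round(proj[0] / proj[2]).astype(int)
    v_src = np.round(proj[1] / proj[2]).astype(int)

    # Nearest-neighbour sampling of the source image (bilinear in practice).
    valid = (u_src >= 0) & (u_src < W) & (v_src >= 0) & (v_src < H)
    warped = np.zeros(H * W)
    warped[valid] = source[v_src[valid], u_src[valid]]

    # L1 photometric error over valid pixels; gradients through this loss
    # (w.r.t. the depth and pose networks) drive the unsupervised training.
    return np.abs(warped[valid] - target.reshape(-1)[valid]).mean()
```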

Online Temporal Calibration for Monocular Visual-Inertial Systems

2 Aug 2018 · HKUST-Aerial-Robotics/VINS-Mono

Visual and inertial fusion has become a popular technique for 6-DOF state estimation in recent years. The time instants at which the different sensors' measurements are recorded are of crucial importance to the system's robustness and accuracy.

AUTONOMOUS DRIVING ROBOT NAVIGATION SENSOR FUSION TIME OFFSET CALIBRATION
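
For intuition about what "time offset calibration" means here, the toy sketch below estimates a camera-IMU offset offline by grid search, shifting the camera-derived angular-rate signal until it best matches the gyroscope; the paper instead estimates the offset online inside the visual-inertial optimization. All signal names are illustrative.

```python
# Toy offline camera-IMU time-offset calibration via grid search.
import numpy as np

def estimate_time_offset(t_cam, w_cam, t_imu, w_imu,
                         search=np.linspace(-0.1, 0.1, 401)):
    """Return the offset td (seconds) such that t_cam + td best aligns the
    camera-derived rotation-rate magnitude `w_cam` with the gyro magnitude
    `w_imu` sampled at times `t_imu`."""
    best_td, best_err = 0.0, np.inf
    for td in search:
        # Resample the gyro signal at the shifted camera timestamps.
        w_imu_at_cam = np.interp(t_cam + td, t_imu, w_imu)
        err = np.mean((w_imu_at_cam - w_cam) ** 2)
        if err < best_err:
            best_td, best_err = td, err
    return best_td
```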

SkiMap: An Efficient Mapping Framework for Robot Navigation

19 Apr 2017 · m4nh/skimap_ros

We present a novel mapping framework for robot navigation which features a multi-level querying system capable of rapidly producing representations as diverse as a 3D voxel grid, a 2.5D height map and a 2D occupancy grid. These are inherently embedded into a memory- and time-efficient core data structure organized as a Tree of SkipLists.

ROBOT NAVIGATION
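
The sketch below is a drastically simplified stand-in for the multi-level querying idea: voxels live in a tree keyed by x, then y, then z, and the same structure answers 3D voxel-grid, 2.5D height-map and 2D occupancy-grid queries. Plain Python dicts replace the framework's Tree of SkipLists, so this only illustrates the interface, not its efficiency.

```python
# Simplified multi-level voxel index (dicts stand in for the Tree of SkipLists).
from collections import defaultdict

class ToyVoxelMap:
    def __init__(self, resolution=0.05):
        self.res = resolution
        # x -> y -> z -> hit count
        self.root = defaultdict(lambda: defaultdict(dict))

    def integrate_point(self, x, y, z):
        ix, iy, iz = int(x / self.res), int(y / self.res), int(z / self.res)
        col = self.root[ix][iy]
        col[iz] = col.get(iz, 0) + 1

    def voxel_grid(self):
        """3D query: all occupied voxel indices."""
        return [(ix, iy, iz)
                for ix, ys in self.root.items()
                for iy, zs in ys.items()
                for iz in zs]

    def height_map(self):
        """2.5D query: highest occupied voxel per (x, y) column."""
        return {(ix, iy): max(zs) * self.res
                for ix, ys in self.root.items()
                for iy, zs in ys.items() if zs}

    def occupancy_grid(self):
        """2D query: which (x, y) cells contain any voxel."""
        return {(ix, iy) for ix, ys in self.root.items() for iy in ys}
```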

Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation

29 Sep 2017 · gkahn13/gcg

To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based. We also evaluate our approach on a real-world RC car and show it can learn to navigate through a complex indoor environment with a few hours of fully autonomous, self-supervised training.

Q-LEARNING ROBOT NAVIGATION
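
A hedged way to see the model-free/model-based interpolation is through an N-step target: with N = 1 it is the classic one-step Q-learning target, and as N grows it leans increasingly on predicted per-step outcomes over the horizon. The function below is illustrative and is not the repository's actual computation graph.

```python
# Illustrative N-step target interpolating between model-free and model-based.
import numpy as np

def n_step_target(pred_rewards, bootstrap_value, n, gamma=0.99):
    """pred_rewards:     (H,) rewards predicted (or observed) for steps 0..H-1
    bootstrap_value:     scalar value estimate at step n
    n:                   horizon; n = 1 is the classic one-step Q-learning target
    """
    discounts = gamma ** np.arange(n)
    return float(np.sum(discounts * pred_rewards[:n])
                 + gamma ** n * bootstrap_value)

# n = 1  -> r_0 + gamma * V(s_1)                 (model-free, value-based)
# n = H  -> sum of H predicted rewards + bootstrap (model-based flavour)
```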

Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning

24 Sep 2018 · vita-epfl/DyNav

Mobility in an effective and socially-compliant manner is an essential yet challenging task for robots operating in crowded spaces. We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework.

HUMAN DYNAMICS ROBOT NAVIGATION
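
The toy function below illustrates attention-based crowd pooling in the spirit of point (i): each human is embedded jointly with the robot state, attention scores weight the humans, and a single crowd feature is produced for the policy. The weight matrices are random placeholders standing in for learned parameters.

```python
# Toy attention-based pooling over humans in the crowd.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def crowd_feature(robot_state, human_states, W_embed, w_score):
    """robot_state: (dr,)  human_states: (N, dh)
    W_embed: (dr + dh, de) learned embedding; w_score: (de,) learned scorer."""
    # Pairwise robot-human embeddings.
    pairs = np.concatenate(
        [np.tile(robot_state, (len(human_states), 1)), human_states], axis=1)
    e = np.tanh(pairs @ W_embed)            # (N, de)
    # Attention scores over the crowd, then a weighted sum.
    alpha = softmax(e @ w_score)            # (N,)
    return alpha @ e                        # (de,) crowd representation

# Example with random placeholder weights:
rng = np.random.default_rng(0)
feat = crowd_feature(rng.normal(size=4), rng.normal(size=(5, 5)),
                     rng.normal(size=(9, 16)), rng.normal(size=16))
```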

Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators

20 Feb 2018 · wang-chen/KCC

Robust velocity and position estimation is crucial for autonomous robot navigation. Correlation flow is able to provide reliable and accurate velocity estimation and is robust to motion blur.

AUTONOMOUS NAVIGATION OPTICAL FLOW ESTIMATION ROBOT NAVIGATION
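
To make the correlation idea concrete, the sketch below estimates the pixel shift between two consecutive frames with plain FFT phase correlation and converts it to a velocity; the paper's kernel cross-correlators generalize this basic operation. The scale and timing parameters are assumptions.

```python
# Plain FFT phase correlation as a stand-in for kernel cross-correlation.
import numpy as np

def estimate_shift(prev_frame, curr_frame):
    """Return the (dy, dx) integer pixel shift that best aligns the frames."""
    F1 = np.fft.fft2(prev_frame)
    F2 = np.fft.fft2(curr_frame)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-9      # phase correlation
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    H, W = corr.shape
    if dy > H // 2:
        dy -= H
    if dx > W // 2:
        dx -= W
    return dy, dx

def pixel_shift_to_velocity(shift, metres_per_pixel, dt):
    """Convert a pixel shift into a planar velocity estimate (vx, vy)."""
    dy, dx = shift
    return dx * metres_per_pixel / dt, dy * metres_per_pixel / dt
```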

A Fast Ellipse Detector Using Projective Invariant Pruning

26 Aug 2016 · dlut-dimt/ellipse-detector

Detecting elliptical objects from an image is a central task in robot navigation and industrial diagnosis, where detection time is always a critical issue. Existing methods are hardly applicable to such real-time scenarios with limited hardware resources due to the huge number of fragment candidates (edges or arcs) used for fitting ellipse equations.

ROBOT NAVIGATION
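
To make "fitting ellipse equations" concrete, the sketch below does a generic least-squares conic fit to a set of edge points, i.e. the per-candidate step whose sheer number motivates the paper's projective-invariant pruning. It is a plain SVD fit, not the detector itself, and without an extra constraint the solution is only guaranteed to be a conic.

```python
# Generic least-squares conic fit to edge points (illustration only).
import numpy as np

def fit_conic(points):
    """points: (N, 2) edge coordinates. Returns conic coefficients
    (A, B, C, D, E, F) of A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0."""
    x, y = points[:, 0], points[:, 1]
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # Smallest right singular vector minimises ||M p|| subject to ||p|| = 1.
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

def is_ellipse(coeffs):
    """Discriminant test: the fitted conic is an ellipse iff B^2 - 4AC < 0."""
    A, B, C, *_ = coeffs
    return B * B - 4 * A * C < 0
```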

Composable Action-Conditioned Predictors: Flexible Off-Policy Learning for Robot Navigation

16 Oct 2018 · gkahn13/CAPs

However, standard reinforcement learning approaches learn separate task-specific policies and assume the reward function for each task is known a priori. We show that a simulated robotic car and a real-world RC car can gather data and train fully autonomously, without any human-provided labels beyond those needed to train the detectors, and can then accomplish a variety of different tasks at test time.

ROBOT NAVIGATION
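
The sketch below illustrates the composability idea: learned event predictors map a state and a candidate action sequence to per-step event estimates, and a task is specified at test time by weighting those events into a cost that a simple random-shooting planner minimizes. The event names, predictor interface and planner are illustrative assumptions, not the CAPs code.

```python
# Composing action-conditioned event predictors into test-time task costs.
import numpy as np

def plan(state, predictors, task_weights, sample_actions, horizon=10, n=128):
    """predictors: dict name -> f(state, actions) returning (horizon,) estimates
    task_weights:  dict name -> weight in the test-time cost
    sample_actions: f(horizon) returning a candidate (horizon, action_dim) plan
    """
    best_plan, best_cost = None, np.inf
    for _ in range(n):
        actions = sample_actions(horizon)
        cost = sum(w * predictors[name](state, actions).sum()
                   for name, w in task_weights.items())
        if cost < best_cost:
            best_plan, best_cost = actions, cost
    return best_plan

# e.g. task_weights = {"collision": 10.0, "off_road": 5.0, "slow_speed": 1.0}
# Re-weighting these at test time yields different behaviours from the same
# predictors, without retraining.
```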

Learning a Representation Map for Robot Navigation using Deep Variational Autoencoder

5 Jul 2018 · augustkx/VAE_learning-a-representation-for-navigation

For the navigation problem, we map the starting image and the destination image to the latent space, then optimize a path on the learned manifold connecting the two points, and finally map the path back through the decoder to a sequence of images. Such a route could be used for navigation with computer vision techniques, i.e. a robot could follow the image sequence from the starting location to the destination in the environment step by step.

ROBOT NAVIGATION
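
The sketch below captures the navigation-by-latent-path idea: encode the start and destination images, take points along a path between their latent codes (plain linear interpolation here, standing in for the optimized manifold path), and decode each point into the image the robot should see next. `encoder` and `decoder` are assumed to come from a trained VAE and are not defined here.

```python
# Latent-space route between a start view and a destination view.
import numpy as np

def latent_route(start_image, goal_image, encoder, decoder, n_steps=10):
    """Return a list of decoded images forming a visual route."""
    z_start = encoder(start_image)          # latent code of the start view
    z_goal = encoder(goal_image)            # latent code of the destination
    route = []
    for t in np.linspace(0.0, 1.0, n_steps):
        z = (1.0 - t) * z_start + t * z_goal
        route.append(decoder(z))            # image the robot should see next
    return route
```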