Occupancy Anticipation for Efficient Exploration and Navigation

State-of-the-art navigation methods leverage a spatial memory to generalize to new environments, but their occupancy maps are limited to capturing the geometric structures directly observed by the agent. We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions. In doing so, the agent builds its spatial awareness more rapidly, which facilitates efficient exploration and navigation in 3D environments. By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment, with performance significantly better than strong baselines. Furthermore, when deployed for the sequential decision-making tasks of exploration and navigation, our model outperforms state-of-the-art methods on the Gibson and Matterport3D datasets. Our approach is the winning entry in the 2020 Habitat PointNav Challenge. Project page: http://vision.cs.utexas.edu/projects/occupancy_anticipation/
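The core input/output contract of occupancy anticipation is: given a partial egocentric occupancy map (with cells marked free, occupied, or unknown), predict occupancy for the unknown cells beyond the directly observed region. The paper uses a learned CNN over RGB-D features for this; the sketch below is only a toy stand-in (a nearest-observed-label fill) that illustrates the map-completion interface, not the actual model. The cell encoding and function name are assumptions for illustration.

```python
import numpy as np
from collections import deque

# Assumed cell encoding for this sketch (not from the paper):
FREE, OCC, UNKNOWN = 0, 1, -1

def anticipate(partial):
    """Toy stand-in for the learned anticipation model: fill each
    UNKNOWN cell with the label of its nearest observed cell, via a
    multi-source BFS seeded at every observed cell. The real method
    predicts these cells with a CNN conditioned on egocentric RGB-D;
    this only shows the contract: partial map in, completed map out."""
    out = partial.copy()
    h, w = out.shape
    # Seed the queue with all observed (non-UNKNOWN) cells.
    q = deque((r, c) for r in range(h) for c in range(w)
              if out[r, c] != UNKNOWN)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and out[nr, nc] == UNKNOWN:
                out[nr, nc] = out[r, c]   # propagate the observed label
                q.append((nr, nc))
    return out
```

For example, a 4x4 map with one observed occupied cell and one observed free cell is completed so that every cell takes the label of whichever observation reaches it first.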

Published at ECCV 2020.

Datasets


Task: Robot Navigation
Dataset: Habitat 2020 Point Nav test-std
Model: OccupancyAnticipation

Metric            Value   Global Rank
SPL               0.22    #3
SOFT_SPL          0.473   #5
DISTANCE_TO_GOAL  2.567   #6
SUCCESS           0.289   #3
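The SPL metric above (Success weighted by Path Length, Anderson et al. 2018) averages, over episodes, the success indicator weighted by the ratio of shortest-path length to the agent's actual path length. A minimal sketch of its computation, with function and argument names chosen here for illustration:

```python
import numpy as np

def spl(success, shortest_len, agent_len):
    """SPL = mean_i [ S_i * l_i / max(p_i, l_i) ], where S_i is the
    per-episode success indicator, l_i the geodesic shortest-path
    length to the goal, and p_i the length of the agent's path."""
    s = np.asarray(success, dtype=float)
    l = np.asarray(shortest_len, dtype=float)
    p = np.asarray(agent_len, dtype=float)
    return float(np.mean(s * l / np.maximum(p, l)))

# Three episodes: success with detour, failure, success on the
# shortest path -> (10/12 + 0 + 1) / 3 ~= 0.611
print(spl([1, 0, 1], [10.0, 5.0, 8.0], [12.0, 6.0, 8.0]))
```

SOFT_SPL in the Habitat benchmark replaces the binary success term with a soft progress term, so an agent that stops near the goal still receives partial credit.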

Methods


No methods listed for this paper.