NEAT: Neural Attention Fields for End-to-End Autonomous Driving

Efficient reasoning about the semantic, spatial, and temporal structure of a scene is a crucial prerequisite for autonomous driving. We present NEural ATtention fields (NEAT), a novel representation that enables such reasoning for end-to-end imitation learning models. NEAT is a continuous function which maps locations in Bird's Eye View (BEV) scene coordinates to waypoints and semantics, using intermediate attention maps to iteratively compress high-dimensional 2D image features into a compact representation. This allows our model to selectively attend to relevant regions in the input while ignoring information irrelevant to the driving task, effectively associating the images with the BEV representation. In a new evaluation setting involving adverse environmental conditions and challenging scenarios, NEAT outperforms several strong baselines and achieves driving scores on par with the privileged CARLA expert used to generate its training data. Furthermore, visualizing the attention maps for models with NEAT intermediate representations provides improved interpretability.
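The core idea above, a continuous function that takes a BEV query location, iteratively attends over image patch features to compress them into a compact vector, and decodes that vector into a waypoint and semantics, can be sketched in a few lines. The following is a minimal illustrative toy, not the paper's implementation: the feature dimensions, two-layer structure, and iteration count are arbitrary assumptions, and real NEAT uses learned transformer-style encoders and multiple waypoints over time.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeatFieldSketch:
    """Toy NEAT-style neural attention field (illustrative only).

    Given per-patch image features and a query location (x, y) in BEV
    coordinates, it iteratively (1) scores every patch with a linear
    attention head conditioned on the current compressed feature c and
    the query, (2) recompresses the patches into a new c via the
    softmax-weighted sum, then (3) decodes [c, query] into a waypoint
    offset and per-class semantic logits.
    """

    def __init__(self, feat_dim=16, n_classes=4, n_iters=2):
        self.n_iters = n_iters
        d = feat_dim
        # attention scorer: [patch_feat, c, query_xy] -> scalar score
        self.w_att = rng.normal(0, 0.1, size=(d + d + 2, 1))
        # decoder: [c, query_xy] -> waypoint (2) + semantic logits (n_classes)
        self.w_dec = rng.normal(0, 0.1, size=(d + 2, 2 + n_classes))

    @staticmethod
    def _softmax(s):
        e = np.exp(s - s.max())
        return e / e.sum()

    def __call__(self, patch_feats, query_xy):
        n, _ = patch_feats.shape
        c = patch_feats.mean(axis=0)          # initial compressed feature
        for _ in range(self.n_iters):
            inp = np.concatenate(
                [patch_feats,
                 np.tile(c, (n, 1)),
                 np.tile(query_xy, (n, 1))], axis=1)
            att = self._softmax((inp @ self.w_att).ravel())
            c = att @ patch_feats             # attention-weighted compression
        out = np.concatenate([c, query_xy]) @ self.w_dec
        waypoint, sem_logits = out[:2], out[2:]
        return waypoint, sem_logits, att

# 64 image patches with 16-dim features, queried at BEV location (1, -2)
feats = rng.normal(size=(64, 16))
wp, sem, att = NeatFieldSketch()(feats, np.array([1.0, -2.0]))
```

Because the attention map `att` is an explicit softmax over image patches, it can be visualized directly, which is the source of the interpretability the abstract mentions.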

ICCV 2021

Datasets

CARLA
Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| CARLA longest6 | CARLA | Neural Attention Fields (NEAT) | Driving Score | 24 | #16 |
| | | | Route Completion | 62 | #17 |
| | | | Infraction Score | 0.71 | #5 |
| Autonomous Driving | CARLA Leaderboard | NEAT | Driving Score | 21.83 | #15 |
| | | | Route Completion | 41.71 | #16 |
| | | | Infraction Penalty | 0.65 | #9 |
| Novel View Synthesis | X3D | NeAT | PSNR | 36.01 | #4 |
| | | | SSIM | 0.9638 | #6 |

Methods