End-to-End Multi-Modal Sensors Fusion System for Urban Automated Driving

In this paper, we present a novel framework for urban automated driving based on multi-modal sensors: LiDAR and camera. Environment perception through sensor fusion is key to the successful deployment of automated driving systems, especially in complex urban areas. Our hypothesis is that a well-designed deep neural network can learn, end to end, a driving policy that fuses LiDAR and camera sensory input, getting the best of both. To improve the generalization and robustness of the learned policy, semantic segmentation is applied to the camera input, in addition to our new LiDAR post-processing method, Polar Grid Mapping (PGM). The system is evaluated on the recently released urban driving simulator CARLA, measuring how well the learned policy generalizes from one environment to another. The experimental results show that the best performance is achieved by fusing PGM and semantic segmentation.
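To make the fusion idea concrete, below is a minimal, hypothetical sketch of a two-branch end-to-end policy network in PyTorch: one branch consumes the camera semantic-segmentation map, the other the LiDAR Polar Grid Map rendered as a 2D grid, and their features are concatenated before regressing control commands. All module names, layer sizes, and input shapes are illustrative assumptions and not the architecture described in the paper.

```python
# Hypothetical sketch of a two-branch sensor-fusion driving policy.
# Module names, channel counts, and input shapes are assumptions for illustration.
import torch
import torch.nn as nn


class FusionDrivingPolicy(nn.Module):
    def __init__(self, num_seg_classes: int = 13, num_controls: int = 3):
        super().__init__()
        # Camera branch: encodes a semantic-segmentation map (one channel per class).
        self.camera_branch = nn.Sequential(
            nn.Conv2d(num_seg_classes, 32, kernel_size=5, stride=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # LiDAR branch: encodes a Polar Grid Map treated as a single-channel image.
        self.lidar_branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Late fusion: concatenate the two feature vectors and regress
        # driving controls (e.g. steering, throttle, brake).
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 128),
            nn.ReLU(),
            nn.Linear(128, num_controls),
        )

    def forward(self, seg_map: torch.Tensor, pgm: torch.Tensor) -> torch.Tensor:
        cam_feat = self.camera_branch(seg_map)   # (B, 64)
        lidar_feat = self.lidar_branch(pgm)      # (B, 64)
        fused = torch.cat([cam_feat, lidar_feat], dim=1)
        return self.head(fused)


# Example usage with dummy inputs (shapes are assumed, not taken from the paper).
model = FusionDrivingPolicy()
seg = torch.randn(2, 13, 88, 200)  # batch of semantic maps, 13 classes assumed
pgm = torch.randn(2, 1, 64, 256)   # batch of polar grid maps
controls = model(seg, pgm)         # -> (2, 3) control predictions
```

A late-fusion layout like this keeps each modality's encoder independent before combining features; mid-level fusion variants are equally plausible sketches of the same idea.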
