End-to-end Learning of Driving Models from Large-scale Video Datasets

CVPR 2017  ·  Huazhe Xu, Yang Gao, Fisher Yu, Trevor Darrell

Robust perception-action models should be learned from training data with diverse visual appearances and realistic behaviors, yet current approaches to deep visuomotor policy learning have generally been limited to in-situ models learned from a single vehicle or a simulation environment. We advocate learning a generic vehicle motion model from large-scale crowd-sourced video data, and develop an end-to-end trainable architecture for learning to predict a distribution over future vehicle egomotion from instantaneous monocular camera observations and previous vehicle state. Our model incorporates a novel FCN-LSTM architecture, which can be learned from large-scale crowd-sourced vehicle action data, and leverages available scene segmentation side tasks to improve performance under a privileged learning paradigm.
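
The abstract's architecture can be summarized in a short sketch. The following is a minimal, illustrative PyTorch implementation, not the authors' released code: it assumes a small fully convolutional image encoder, a single-layer LSTM, a four-way discrete future-egomotion space (e.g. straight, stop, turn-left, turn-right), a two-dimensional previous-state vector (e.g. speed and yaw rate), and 19 segmentation classes for the privileged side task. All layer sizes and class counts are assumptions for illustration.

```python
# Minimal sketch (not the authors' code): an FCN-style image encoder whose
# pooled features are fused with the previous vehicle state, fed through an
# LSTM, and decoded into a distribution over future egomotion. A per-pixel
# segmentation head is attached as an optional privileged side task.
# Layer sizes, the 4-way action space, and 19 segmentation classes are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FCNLSTMDriver(nn.Module):
    def __init__(self, num_actions=4, state_dim=2, hidden_dim=64,
                 num_seg_classes=19):
        super().__init__()
        # Fully convolutional encoder: preserves spatial structure until pooling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv producing per-pixel class scores for the segmentation side task.
        self.seg_head = nn.Conv2d(128, num_seg_classes, 1)
        # LSTM over time; input = pooled image feature + previous egomotion state.
        self.lstm = nn.LSTM(128 + state_dim, hidden_dim, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, num_actions)

    def forward(self, frames, prev_states):
        # frames:      (B, T, 3, H, W) monocular video clip
        # prev_states: (B, T, state_dim) previous egomotion, e.g. speed and yaw rate
        b, t = frames.shape[:2]
        feat_maps = self.encoder(frames.flatten(0, 1))      # (B*T, 128, h, w)
        seg_logits = self.seg_head(feat_maps)                # side-task output
        feats = feat_maps.mean(dim=(2, 3)).view(b, t, -1)    # global average pooling
        hidden, _ = self.lstm(torch.cat([feats, prev_states], dim=-1))
        action_logits = self.action_head(hidden)             # (B, T, num_actions)
        return action_logits, seg_logits


def joint_loss(action_logits, actions, seg_logits=None, seg_labels=None,
               seg_weight=0.1):
    # Future-action cross-entropy plus an optional weighted segmentation term.
    # seg_labels are assumed here to be at the encoder's feature resolution.
    loss = F.cross_entropy(action_logits.flatten(0, 1), actions.flatten())
    if seg_logits is not None and seg_labels is not None:
        loss = loss + seg_weight * F.cross_entropy(seg_logits, seg_labels)
    return loss
```

In this reading of the abstract, the segmentation term is added only for frames that carry segmentation labels, which is how the side task serves as privileged information during training; at test time only the action head is used.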


Datasets


Introduced in the Paper:

Berkeley DeepDrive Video

Used in the Paper:

comma.ai


Methods


No methods listed for this paper.