Convolutional Neural Networks (CNNs) have been successfully applied to
autonomous driving tasks, often in an end-to-end manner. Previous end-to-end
steering control methods take an image or an image sequence as input and
directly predict the steering angle with a CNN. Although single-task learning on
steering angles has achieved good performance, the steering angle alone is not
sufficient for vehicle control. In this work, we propose a multi-task learning
framework to predict the steering angle and speed simultaneously in an
end-to-end manner. Since it is nontrivial to predict accurate speed values with
only visual inputs, we first propose a network to predict discrete speed
commands and steering angles with image sequences. Moreover, we propose a
multi-modal multi-task network to predict speed values and steering angles by
taking previous feedback speeds and visual recordings as inputs. Experiments
are conducted on the public Udacity dataset and a newly collected SAIC dataset.
Results show that the proposed model predicts steering angles and speed values
accurately. Furthermore, we improve failure-data synthesis methods to mitigate
the error accumulation problem in real road tests.
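The multi-modal multi-task idea described above can be illustrated with a minimal sketch. All names, dimensions, and the simple linear/tanh layers here are illustrative assumptions, not the paper's actual architecture: visual features (as produced by a CNN backbone) are fused with the previous feedback speeds, and two task-specific heads regress the steering angle and the speed value.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(visual_feat, prev_speeds, params):
    """Toy multi-modal multi-task forward pass (illustrative only).

    visual_feat: 1-D feature vector from a CNN backbone (assumed given).
    prev_speeds: 1-D vector of previous feedback speed measurements.
    """
    # Fuse the two modalities by concatenation.
    fused = np.concatenate([visual_feat, prev_speeds])
    # Shared hidden representation for both tasks.
    hidden = np.tanh(params["W_fuse"] @ fused)
    # Task-specific regression heads.
    steer = params["w_steer"] @ hidden   # predicted steering angle
    speed = params["w_speed"] @ hidden   # predicted speed value
    return steer, speed

# Toy dimensions: 8 visual features, 4 previous speeds, 6 hidden units.
params = {
    "W_fuse": rng.normal(size=(6, 12)),
    "w_steer": rng.normal(size=6),
    "w_speed": rng.normal(size=6),
}
steer, speed = forward(rng.normal(size=8), rng.normal(size=4), params)
```

In a full model, the two heads would be trained jointly with a weighted multi-task loss, so the shared representation benefits both predictions.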