Efficient reasoning about the semantic, spatial, and temporal structure of a scene is a crucial prerequisite for autonomous driving.
How should representations from complementary sensors be integrated for autonomous driving?
Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.
Beyond label efficiency, we find several additional training benefits when leveraging visual abstractions, such as a significant reduction in the variance of the learned policy when compared to state-of-the-art end-to-end driving models.
We have built a scalable production system for active learning in the domain of autonomous driving.
Semantic segmentation with Convolutional Neural Networks is a memory-intensive task due to the high spatial resolution of feature maps and output predictions.
In this paper, we propose to scale up ensemble Active Learning (AL) methods to perform acquisition at a large scale (10k to 500k samples at a time).
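A minimal sketch of ensemble-based acquisition at this scale, assuming the common disagreement proxy of scoring each unlabeled sample by the entropy of the averaged ensemble prediction and taking the top-k; the paper's exact acquisition function and batching scheme may differ (`ensemble_acquisition` is a hypothetical name):

```python
import numpy as np

def ensemble_acquisition(probs, k):
    """Pick the k most informative pool samples.

    probs: (E, N, C) softmax outputs of E ensemble members over
    N unlabeled samples and C classes.
    Scores samples by the entropy of the mean ensemble prediction,
    a standard ensemble-disagreement proxy (assumption, not
    necessarily the paper's exact criterion).
    """
    mean_probs = probs.mean(axis=0)  # (N, C) average over members
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    # indices of the k highest-entropy (most uncertain) samples
    return np.argsort(entropy)[-k:][::-1]

# toy pool: one member, three samples, two classes
probs = np.array([[[0.5, 0.5],    # maximally uncertain
                   [0.9, 0.1],
                   [0.99, 0.01]]])
picked = ensemble_acquisition(probs, k=1)  # → selects sample 0
```

At 10k-500k acquisitions per round, this scoring is embarrassingly parallel over the pool, which is what makes the ensemble approach practical at scale.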
With our design, the network progressively learns features specific to the target domain using annotations from the source domain only.
Annotating the right data for training deep neural networks is an important challenge.
In this paper, we introduce Deep Probabilistic Ensembles (DPEs), a scalable technique that uses a regularized ensemble to approximate a deep Bayesian Neural Network (BNN).
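One way to picture the DPE idea, as a hedged sketch: treat the E ensemble members' copies of each weight as samples from a Gaussian posterior and regularize that empirical distribution toward a Gaussian prior via a KL term (the function name and the exact form of the regularizer below are assumptions, not the paper's definitive formulation):

```python
import numpy as np

def dpe_regularizer(weights, prior_var=1.0):
    """Hypothetical KL regularizer over an ensemble's weights.

    weights: (E, P) array -- P shared-position weights for each of
    E ensemble members. Fits a diagonal Gaussian to the members'
    weights and returns KL( N(mu, var) || N(0, prior_var) ),
    summed over the P positions.
    """
    mu = weights.mean(axis=0)
    var = weights.var(axis=0) + 1e-8  # avoid log(0) for collapsed weights
    kl = 0.5 * (var / prior_var + mu ** 2 / prior_var
                - 1.0 - np.log(var / prior_var))
    return float(kl.sum())
```

Adding such a penalty to the ensemble's training loss pushes the members toward behaving like samples from a Bayesian posterior rather than E independent point estimates, while keeping training as cheap as ordinary ensembling.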
We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs).
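A toy illustration of the constraint AR describes, under a simplifying assumption: here the ROI is a fixed binary mask and activations outside it are penalized with a squared loss (the paper uses a learned, differentiable attention mechanism, so both the penalty form and the name `ar_loss` are hypothetical):

```python
import numpy as np

def ar_loss(activations, roi_mask):
    """Penalize kernel activations falling outside their ROIs.

    activations: (K, H, W) feature maps for K kernels.
    roi_mask:    (K, H, W) binary masks, 1 inside each kernel's ROI.
    Returns the mean squared activation outside the ROIs, which is
    zero exactly when every response lies inside its ROI.
    """
    outside = activations * (1.0 - roi_mask)
    return float((outside ** 2).mean())

# toy example: 1 kernel on a 2x2 map, ROI covers the top row only
act = np.array([[[1.0, 2.0],
                 [0.0, 0.0]]])
mask = np.array([[[1.0, 1.0],
                  [0.0, 0.0]]])
loss = ar_loss(act, mask)  # → 0.0, all activation is inside the ROI
```

Minimizing such a term alongside the task loss steers each kernel to respond only within its region of interest, which is the behavior AR aims to enforce.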