Autonomous driving is the task of making a vehicle that can guide itself without human intervention.
Such "long-tail" data is notoriously hard to observe, making both training and testing difficult.
Dense, robust, real-time depth estimation from stereo-camera systems is a computationally demanding requirement for robotics, advanced driver-assistance systems (ADAS), and autonomous vehicles.
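Once a disparity map has been computed, recovering metric depth from it follows the standard stereo relation Z = f·B/d. The sketch below illustrates this conversion; the focal length and baseline values are hypothetical, not taken from any specific system described here.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (meters).

    Uses Z = f * B / d, with focal length in pixels and baseline in
    meters. Pixels with (near-)zero disparity carry no depth
    information and are marked invalid with np.inf.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical rig: f = 721 px, B = 0.54 m (KITTI-like values).
d = np.array([[64.0, 32.0],
              [0.0, 16.0]])   # zero disparity -> invalid pixel
Z = disparity_to_depth(d, focal_px=721.0, baseline_m=0.54)
```

Note the inverse relationship: halving the disparity doubles the depth, which is why depth error grows quadratically with distance in stereo systems.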
2) One-click annotation: Instead of drawing 3D bounding boxes or point-wise labels, we simplify annotation to a single click on the target object and automatically generate the bounding box for the target.
We additionally propose a fusion method that uses RGB guidance from a monocular camera to leverage object information and correct errors in the sparse input.
Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work.
Simulation is an appealing option for validating the safety of autonomous vehicles.
We design a Siamese tracker that encodes model and candidate shapes into a compact latent representation.
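The defining property of a Siamese encoder is that both branches share the same weights, so model and candidate shapes are mapped into one common latent space where they can be compared directly. The sketch below shows that structure with a single linear projection and cosine similarity; the projection and inputs are placeholders, not the tracker's actual architecture.

```python
import numpy as np

def encode(x, W):
    """Shared-weight encoder: the same projection W is applied to both
    the model shape and the candidate shape, then L2-normalized so the
    latent codes live on the unit sphere."""
    z = W @ x
    return z / np.linalg.norm(z)

def siamese_score(model_shape, candidate_shape, W):
    """Cosine similarity between the two latent codes: 1.0 for an
    identical pair, near 0 for unrelated shapes."""
    return float(encode(model_shape, W) @ encode(candidate_shape, W))

# Toy usage with a hypothetical 3-D feature space.
W = np.eye(3)
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
same_score = siamese_score(u, u, W)
diff_score = siamese_score(u, v, W)
```

Because the encoder is shared, candidate shapes can be scored against a cached latent code of the model shape, which is what makes this comparison cheap at tracking time.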