Autonomous driving is the task of driving a vehicle without human intervention.
Autonomous driving often relies on deep learning for perception.
To retrieve a target image from the database, the query image is first encoded using the encoder belonging to the query domain to obtain a domain-invariant feature vector.
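A minimal sketch of this retrieval step, assuming the encoder is a fixed linear projection (a stand-in for the real, trained domain encoder) and that retrieval is done by cosine similarity between L2-normalized feature vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))  # hypothetical stand-in for a trained domain encoder

def encode(image_vec):
    """Map a raw input vector to an L2-normalized feature vector."""
    f = image_vec @ W
    return f / np.linalg.norm(f)

# Pre-encode the database with the (here: shared) encoder.
raws = [rng.standard_normal(8) for _ in range(5)]
database = np.stack([encode(r) for r in raws])

def retrieve(query_vec):
    """Return the index of the database entry most similar to the query."""
    q = encode(query_vec)
    # On unit vectors, the dot product equals cosine similarity.
    return int(np.argmax(database @ q))

best = retrieve(raws[2])  # an image already in the database retrieves itself
```

In the cross-domain setting described above, the query and database would each use their own encoder, trained so that both map into the same domain-invariant feature space; the similarity search itself is unchanged.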
SalsaNet segments the road, i.e., drivable free space, and vehicles in the scene by employing a Bird's-Eye View (BEV) image projection of the point cloud.
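To make the BEV idea concrete, here is a minimal sketch (not SalsaNet's actual pipeline) that projects a LiDAR-style point cloud onto a top-down occupancy grid by discretizing the ground-plane (x, y) coordinates into fixed-size cells; the ranges and cell size are illustrative assumptions:

```python
import numpy as np

def bev_projection(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """points: (N, 3) array of x, y, z coordinates. Returns a 2D occupancy grid."""
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.uint8)
    # Discretize ground-plane coordinates into grid indices.
    xi = ((points[:, 0] - x_range[0]) / cell).astype(int)
    yi = ((points[:, 1] - y_range[0]) / cell).astype(int)
    # Drop points outside the configured range.
    keep = (xi >= 0) & (xi < h) & (yi >= 0) & (yi < w)
    grid[xi[keep], yi[keep]] = 1  # mark occupied cells
    return grid

pts = np.array([[10.0, 0.0, -1.5], [10.2, 0.1, 0.4], [100.0, 0.0, 0.0]])
g = bev_projection(pts)  # the first two points share a cell; the third is out of range
```

Real BEV encodings typically store richer per-cell statistics (maximum height, point density, mean intensity) in separate channels rather than a binary occupancy flag.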
With the increasing global popularity of self-driving cars, there is an immediate need for challenging real-world datasets for benchmarking and training various computer vision tasks such as 3D object detection.
Autonomous driving is a dynamically growing field of research, where the quality and quantity of experimental data are critical.
Multi-sensor perception is crucial to ensuring reliability and accuracy in autonomous driving systems, while multi-object tracking (MOT) improves both by tracing the sequential movement of dynamic objects.
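The simplest form of the data-association step at the heart of MOT can be sketched as greedy nearest-neighbor matching between existing tracks and new detections; this is illustration only, as real trackers add motion models (e.g., Kalman filters) and learned appearance features:

```python
import math

def associate(tracks, detections, max_dist=2.0):
    """tracks: {track_id: (x, y)}; detections: list of (x, y).

    Greedily assigns each detection to the nearest track within max_dist,
    spawning a new track when none is close enough. Returns updated tracks.
    """
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        best_id, best_d = None, max_dist
        for tid, pos in tracks.items():
            d = math.dist(pos, det)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:
            tracks[next_id] = det  # unmatched detection starts a new track
            next_id += 1
        else:
            tracks[best_id] = det  # update the matched track's position
    return tracks

tracks = {0: (0.0, 0.0), 1: (5.0, 5.0)}
tracks = associate(tracks, [(0.5, 0.2), (5.1, 4.9), (20.0, 20.0)])
# tracks 0 and 1 are updated; the far detection starts track 2
```

Note that this greedy sketch can assign two detections to the same track; production trackers resolve assignments jointly, e.g., with the Hungarian algorithm.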
Trajectory modelling has been the principal research area for understanding and anticipating human behaviour.
This report presents our method, which won the nuScenes 3D Detection Challenge held at the Workshop on Autonomous Driving (WAD, CVPR 2019).
SOTA for 3D Object Detection on nuScenes
In this paper, we propose a new neural network, the Fine-Grained Segmentation Network (FGSN), that can be used to provide image segmentations with a larger number of labels and can be trained in a self-supervised fashion.