Autonomous Vehicles
536 papers with code • 1 benchmarks • 27 datasets
Autonomous driving is the task of building a vehicle that can guide itself without human intervention.
Many of the state-of-the-art results can be found at more general task pages such as 3D Object Detection and Semantic Segmentation.
(Image credit: GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision)
Libraries
Use these libraries to find Autonomous Vehicles models and implementations
Datasets
Subtasks
Most implemented papers
Accelerating 3D Deep Learning with PyTorch3D
We address these challenges by introducing PyTorch3D, a library of modular, efficient, and differentiable operators for 3D deep learning.
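PyTorch3D ships differentiable 3D operators such as `pytorch3d.loss.chamfer_distance`. As a rough illustration of what one such operator computes, here is a minimal NumPy sketch of the symmetric squared chamfer distance between two point clouds; the function name and array shapes are my own for illustration, not PyTorch3D's API.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric squared chamfer distance between point clouds.

    a: (N, 3) array, b: (M, 3) array. For each point, find its
    nearest neighbour in the other cloud and average the squared
    distances in both directions.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical clouds have zero chamfer distance.
cloud = np.random.rand(100, 3)
print(chamfer_distance(cloud, cloud))  # 0.0
```

In PyTorch3D the same computation runs on batched GPU tensors and is differentiable, so it can serve directly as a training loss for shape reconstruction.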
CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection
In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection.
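Middle fusion means combining the two sensors at the feature level, between the backbone and the detection head, rather than fusing raw measurements (early fusion) or final detections (late fusion). A toy NumPy sketch of the idea, with shapes, channel meanings, and radar values chosen by me purely for illustration:

```python
import numpy as np

H, W = 4, 6                           # toy feature-map size
img_feats = np.random.rand(16, H, W)  # camera backbone features (C=16)

# Toy radar detections, already projected to feature-map pixel coords:
# (col, row, depth in metres, radial velocity in m/s)
radar = [(1, 2, 30.0, -4.5), (5, 0, 12.0, 1.2)]

# Rasterize the radar measurements into a 2-channel map
# (channel 0: depth, channel 1: radial velocity).
radar_map = np.zeros((2, H, W))
for col, row, depth, vel in radar:
    radar_map[0, row, col] = depth
    radar_map[1, row, col] = vel

# Middle fusion: concatenate along the channel axis; the fused
# (18, H, W) tensor would feed the 3D detection head.
fused = np.concatenate([img_feats, radar_map], axis=0)
print(fused.shape)  # (18, 4, 6)
```

The fused map lets the detection head regress depth and velocity from radar channels while keeping the camera features' spatial detail.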
SUTD-TrafficQA: A Question Answering Benchmark and an Efficient Network for Video Reasoning over Traffic Events
In this paper, we create a novel dataset, SUTD-TrafficQA (Traffic Question Answering), a video QA benchmark built from 10,080 collected in-the-wild videos and 62,535 annotated QA pairs, for benchmarking the causal-inference and event-understanding capabilities of models in complex traffic scenarios.
Deep Multi-agent Reinforcement Learning for Highway On-Ramp Merging in Mixed Traffic
On-ramp merging is a challenging task for autonomous vehicles (AVs), especially in mixed traffic where AVs coexist with human-driven vehicles (HDVs).
aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception
The dataset consists of 176 scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view.
Beyond the Field-of-View: Enhancing Scene Visibility and Perception with Clip-Recurrent Transformer
In this paper, we propose the concept of online video inpainting for autonomous vehicles to expand the field of view, thereby enhancing scene visibility, perception, and system safety.
TUMTraf V2X Cooperative Perception Dataset
We propose CoopDet3D, a cooperative multi-modal fusion model, and TUMTraf-V2X, a perception dataset, for the cooperative 3D object detection and tracking task.
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs.
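The core mechanism of defensive distillation is temperature scaling of the softmax: the initial network is trained at a high temperature T, its softened output probabilities become the training labels for a distilled network, and at test time T returns to 1. A minimal NumPy sketch of the temperature-scaled softmax (the logits and T value are illustrative, not from the paper):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Softmax at temperature T. Higher T produces softer
    probabilities, which defensive distillation uses as labels."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([8.0, 2.0, 1.0])
hard = softmax_with_temperature(logits, T=1.0)   # near one-hot
soft = softmax_with_temperature(logits, T=20.0)  # softened training labels
print(hard.round(3), soft.round(3))
```

Training on the softened distribution flattens the gradients an attacker can exploit, which is how the defense reduces the effectiveness of adversarial samples.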
Speeding up Semantic Segmentation for Autonomous Driving
We propose a novel deep network architecture for image segmentation that keeps the high accuracy while being efficient enough for embedded devices.
Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions
Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds.