Autonomous Vehicles
534 papers with code • 1 benchmarks • 27 datasets
Autonomous vehicles is the task of making a vehicle that can guide itself without human intervention.
Many of the state-of-the-art results can be found at more general task pages such as 3D Object Detection and Semantic Segmentation.
(Image credit: GSNet: Joint Vehicle Pose and Shape Reconstruction with Geometrical and Scene-aware Supervision)
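The task defined above is often decomposed into a sense-plan-act loop: a perception stage (e.g. 3D object detection or semantic segmentation) feeds a planner, whose target is tracked by a low-level controller. The sketch below is a deliberately toy 1-D version of that loop; every function, class, and parameter (`Detection`, `plan_speed`, `control_step`, the cruise speed and safe-gap values) is a hypothetical stand-in for illustration, not taken from any paper listed on this page.

```python
# Illustrative only: a toy 1-D sense-plan-act loop for an autonomous vehicle.
# All names and numbers here are hypothetical stand-ins, not from any listed paper.
from dataclasses import dataclass


@dataclass
class Detection:
    """A simplified 1-D obstacle detection: distance ahead in meters."""
    distance_m: float


def perceive(obstacle_distance_m: float) -> Detection:
    """Stand-in for a perception stack (e.g. 3D object detection)."""
    return Detection(distance_m=obstacle_distance_m)


def plan_speed(det: Detection, cruise_mps: float = 15.0,
               safe_gap_m: float = 30.0) -> float:
    """Pick a target speed: cruise when clear, slow proportionally when close."""
    if det.distance_m >= safe_gap_m:
        return cruise_mps
    return max(0.0, cruise_mps * det.distance_m / safe_gap_m)


def control_step(current_mps: float, target_mps: float,
                 max_accel: float = 2.0, dt: float = 0.1) -> float:
    """Move the current speed toward the target, bounded by a max acceleration."""
    delta = target_mps - current_mps
    delta = max(-max_accel * dt, min(max_accel * dt, delta))
    return current_mps + delta


# Run a few control ticks as an obstacle gets closer.
speed = 10.0
for distance in (100.0, 20.0, 5.0):
    target = plan_speed(perceive(distance))
    speed = control_step(speed, target)
print(round(speed, 2))
```

In a real stack each stage is far richer (multi-modal fusion, occupancy prediction, trajectory optimization), which is exactly what the papers below study; the loop structure itself is the common thread.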
Libraries
Use these libraries to find Autonomous Vehicles models and implementations
Datasets
Subtasks
Latest papers
Belief Aided Navigation using Bayesian Reinforcement Learning for Avoiding Humans in Blind Spots
Recent research on mobile robot navigation has focused on socially aware navigation in crowded environments.
PreSight: Enhancing Autonomous Vehicle Perception with City-Scale NeRF Priors
Autonomous vehicles rely extensively on perception systems to navigate and interpret their surroundings.
MonoOcc: Digging into Monocular Semantic Occupancy Prediction
However, existing methods rely on a complex cascaded framework with relatively limited information to restore 3D scenes, including a dependency on supervision solely on the whole network's output, single-frame input, and the utilization of a small backbone.
Fine-Grained Pillar Feature Encoding Via Spatio-Temporal Virtual Grid for 3D Object Detection
Through STV grids, points within each pillar are individually encoded using Vertical PFE (V-PFE), Temporal PFE (T-PFE), and Horizontal PFE (H-PFE).
TUMTraf V2X Cooperative Perception Dataset
We propose CoopDet3D, a cooperative multi-modal fusion model, and TUMTraf-V2X, a perception dataset, for the cooperative 3D object detection and tracking task.
Explicit Interaction for Fusion-Based Place Recognition
Fusion-based place recognition is an emerging technique jointly utilizing multi-modal perception data, to recognize previously visited places in GPS-denied scenarios for robots and autonomous vehicles.
Active propulsion noise shaping for multi-rotor aircraft localization
Multi-rotor aerial autonomous vehicles (MAVs) primarily rely on vision for navigation purposes.
Hybrid Reasoning Based on Large Language Models for Autonomous Car Driving
Large Language Models (LLMs) have garnered significant attention for their ability to understand text and images, generate human-like text, and perform complex reasoning tasks.
PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames in Autonomous Driving Environments
With extensive experiments, PC-NeRF is proven to achieve high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes.
MODIPHY: Multimodal Obscured Detection for IoT using PHantom Convolution-Enabled Faster YOLO
Low-light conditions and occluded scenarios impede object detection in real-world Internet of Things (IoT) applications like autonomous vehicles and security systems.