Sensor Fusion is the broad category of techniques that combine data from multiple on-board sensors to produce better measurement estimates. Sensors are combined so that they complement each other and overcome their individual shortcomings.
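As a minimal illustration of the idea, the sketch below fuses a drifting gyroscope with a noisy accelerometer using a classic complementary filter. Everything here is hypothetical: the `complementary_filter` helper, the simulated signals, and the blend factor `alpha` are assumptions, not taken from any of the papers listed on this page.

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse a gyroscope (accurate short-term, drifts long-term) with an
    accelerometer (noisy short-term, stable long-term) pitch estimate."""
    angle = accel_angles[0]  # initialize from the absolute sensor
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro for high-frequency motion and let the
        # accelerometer pull the estimate back to correct low-frequency drift.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        estimates.append(angle)
    return np.array(estimates)

# Simulated signals: a slow tilt, a biased gyro, and a noisy accelerometer.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
true_angle = 5.0 * np.sin(0.5 * t)                      # degrees
gyro = np.gradient(true_angle, 0.01) + 0.2              # biased rate (deg/s)
accel = true_angle + rng.normal(0, 2.0, t.size)         # noisy angle (deg)
fused = complementary_filter(gyro, accel)
print(f"accel-only error: {np.abs(accel - true_angle).mean():.2f} deg")
print(f"fused error:      {np.abs(fused - true_angle).mean():.2f} deg")
```

The fused estimate beats either sensor alone, which is the core promise of sensor fusion in one line of arithmetic.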
We highlight that our system is a general framework, which can easily fuse various global sensors in a unified pose graph optimization.
By sensor fusion, we can compensate for the deficiencies of stand-alone sensors and provide more reliable estimates.
Our proposed framework is composed of two parts: filter-based odometry and factor graph optimization.
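A toy version of such a pose graph can be written as a weighted linear least-squares problem. The sketch below is not the paper's implementation: the 1-D poses, odometry edges, GPS-like absolute factors, and the weights `w_odom`/`w_gps` are all assumed for illustration.

```python
import numpy as np

# Hypothetical 1-D pose graph: 5 poses linked by relative odometry edges,
# plus absolute "global sensor" (e.g. GPS) factors on two of them.
n = 5
odom = [1.0, 1.0, 1.0, 1.0]        # measured x_{i+1} - x_i
gps = {0: 0.0, 4: 4.3}             # absolute fixes (revealing odometry drift)
w_odom, w_gps = 1.0, 10.0          # information weights (inverse variances)

rows, rhs, weights = [], [], []
for i, d in enumerate(odom):       # one row per odometry factor
    r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
    rows.append(r); rhs.append(d); weights.append(w_odom)
for i, z in gps.items():           # one row per global/absolute factor
    r = np.zeros(n); r[i] = 1.0
    rows.append(r); rhs.append(z); weights.append(w_gps)

# Weighted linear least squares; since the factors are linear, a single
# solve plays the role of the Gauss-Newton step a real optimizer would take.
A = np.array(rows) * np.sqrt(weights)[:, None]
b = np.array(rhs) * np.sqrt(weights)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("optimized poses:", np.round(x, 3))
```

The GPS factors anchor the trajectory while the odometry factors keep consecutive poses consistent, which is exactly how a unified pose graph lets heterogeneous global sensors be swapped in as extra factors.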
In this paper, we provide a sensor fusion scheme integrating camera videos, consumer-grade motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robust self-localization and semantic segmentation for autonomous driving.
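One common way to realize GPS/IMU fusion of this kind is a Kalman filter that dead-reckons on inertial data and corrects with intermittent absolute fixes. The 1-D sketch below is a generic illustration, not the paper's method; the `kalman_fuse` helper and its noise parameters `q` and `r` are assumptions.

```python
import numpy as np

def kalman_fuse(imu_vel, gps_pos, dt=0.1, q=0.05, r=4.0):
    """1-D Kalman filter: predict position from IMU velocity, then
    correct with noisy GPS fixes (None when no fix is available)."""
    x, p = 0.0, 1.0                       # state estimate and its variance
    track = []
    for v, z in zip(imu_vel, gps_pos):
        x, p = x + v * dt, p + q          # predict by dead reckoning
        if z is not None:                 # correct when a GPS fix arrives
            k = p / (p + r)               # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
        track.append(x)
    return np.array(track)

# Hypothetical data: constant 1 m/s motion, a GPS fix every 10th step.
rng = np.random.default_rng(0)
steps = 100
truth = np.arange(steps) * 0.1
imu_vel = 1.0 + rng.normal(0, 0.1, steps)                 # noisy velocity
gps_pos = [truth[i] + rng.normal(0, 2.0) if i % 10 == 0 else None
           for i in range(steps)]
est = kalman_fuse(imu_vel, gps_pos)
print(f"mean abs error: {np.abs(est - truth).mean():.2f} m")
```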
LATTE: Accelerating LiDAR Point Cloud Annotation via Sensor Fusion, One-Click Annotation, and Tracking
2) One-click annotation: Instead of drawing 3D bounding boxes or point-wise labels, we simplify the annotation to just one click on the target object, and automatically generate the bounding box for the target.
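A stripped-down version of the one-click idea: take the clicked 3-D point, gather nearby LiDAR returns, and emit an axis-aligned box. LATTE's actual pipeline uses learned segmentation and tracking; the `one_click_box` helper, the radius, and the synthetic cloud below are purely illustrative.

```python
import numpy as np

def one_click_box(points, click, radius=2.0):
    """Given one clicked 3-D point, gather LiDAR points within `radius`
    and return an axis-aligned bounding box as (min corner, max corner)."""
    mask = np.linalg.norm(points - click, axis=1) < radius
    cluster = points[mask]
    if cluster.size == 0:
        return None
    return cluster.min(axis=0), cluster.max(axis=0)

# Hypothetical cloud: a car-sized blob of returns plus ground clutter.
rng = np.random.default_rng(0)
car = rng.normal([10.0, 5.0, 0.8], [1.8, 0.7, 0.5], size=(200, 3))
ground = rng.uniform([-20, -20, -0.1], [40, 20, 0.1], size=(2000, 3))
cloud = np.vstack([car, ground])
box = one_click_box(cloud, click=np.array([10.0, 5.0, 0.8]))
print("box min/max:", np.round(box[0], 2), np.round(box[1], 2))
```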
In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection.
Ranked #69 on 3D Object Detection on nuScenes
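Schematically, middle fusion concatenates per-proposal radar and camera features before the detection head, rather than fusing raw data (early fusion) or final boxes (late fusion). The sketch below is a hypothetical toy with assumed feature sizes and a random linear head; it is not the paper's architecture.

```python
import numpy as np

def middle_fusion(cam_feat, radar_feat, w_head):
    """Fuse per-proposal camera and radar feature vectors by
    concatenation, then apply a (toy) linear detection head."""
    fused = np.concatenate([cam_feat, radar_feat], axis=-1)
    return fused @ w_head              # e.g. regress 3-D box parameters

# Hypothetical shapes: 8 proposals, 64-d camera and 16-d radar features.
rng = np.random.default_rng(0)
cam = rng.normal(size=(8, 64))
radar = rng.normal(size=(8, 16))       # e.g. range, velocity, RCS encodings
w = rng.normal(size=(64 + 16, 7))      # head predicting (x, y, z, w, l, h, yaw)
boxes = middle_fusion(cam, radar, w)
print(boxes.shape)                     # (8, 7)
```

Fusing at the feature level lets the head exploit radar's direct range and velocity measurements while keeping the camera's rich appearance cues, which is the motivation the excerpt describes.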
Object detection in camera images using deep learning has proven successful in recent years.
We present FIERY: a probabilistic future prediction model in bird's-eye view from monocular cameras.
Ranked #1 on Bird's-Eye View Semantic Segmentation on nuScenes
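FIERY learns to lift monocular camera features into bird's-eye view; the purely geometric baseline for such lifting is a flat-ground back-projection through the camera intrinsics. The sketch below shows only that baseline, with a hypothetical `image_to_bev` helper, intrinsics `K`, and camera height, none of which come from the paper.

```python
import numpy as np

def image_to_bev(pixels, K, cam_height=1.5):
    """Back-project image pixels onto the ground plane, assuming a
    forward-facing camera mounted `cam_height` metres above flat ground."""
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])
    rays = uv1 @ np.linalg.inv(K).T        # unit-depth camera rays
    # Scale each ray so it intersects the ground plane below the camera
    # (the camera y-axis points downward in this convention).
    scale = cam_height / rays[:, 1]
    pts = rays * scale[:, None]
    return pts[:, [0, 2]]                  # (lateral, forward) BEV coords

# Hypothetical intrinsics for a 1280x720 camera.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
pixels = np.array([[640.0, 500.0], [900.0, 600.0]])   # points below horizon
print(np.round(image_to_bev(pixels, K), 2))
```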