Sensor Fusion

89 papers with code • 0 benchmarks • 2 datasets

Sensor fusion is the process of combining sensor data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. [Wikipedia]
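The uncertainty reduction in this definition can be illustrated with a minimal sketch of inverse-variance weighted fusion of two independent measurements of the same quantity; the function name and values below are illustrative, not from any paper on this page:

```python
def fuse(m1, var1, m2, var2):
    """Inverse-variance weighted fusion of two independent measurements.

    The fused variance, 1 / (1/var1 + 1/var2), is always smaller than
    either input variance, which is the uncertainty reduction that
    motivates sensor fusion.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused_mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused_mean, fused_var

# Two sensors observing the same quantity with different noise levels.
mean, var = fuse(10.2, 0.5, 9.8, 1.0)
print(mean, var)  # fused variance 1/3 < min(0.5, 1.0)
```

This is also the one-dimensional core of a Kalman filter's measurement update: more reliable (lower-variance) sensors receive proportionally higher weight.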

Latest papers with no code

ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions

no code yet • 23 Apr 2024

The fusion of multimodal sensor data streams such as camera images and lidar point clouds plays an important role in the operation of autonomous vehicles (AVs).

In-situ process monitoring and adaptive quality enhancement in laser additive manufacturing: a critical review

no code yet • 21 Apr 2024

Future directions are proposed, with an emphasis on multimodal sensor fusion for multiscale defect prediction and fault diagnosis, ultimately enabling self-adaptation in LAM processes.

Event Cameras Meet SPADs for High-Speed, Low-Bandwidth Imaging

no code yet • 17 Apr 2024

Traditional cameras face a trade-off between low-light performance and high-speed imaging: longer exposure times capture sufficient light but result in motion blur, whereas shorter exposures result in Poisson-corrupted noisy images.

Enhanced Radar Perception via Multi-Task Learning: Towards Refined Data for Sensor Fusion Applications

no code yet • 9 Apr 2024

Radar and camera fusion yields robustness in perception tasks by leveraging the strengths of both sensors.

Automated Lane Change Behavior Prediction and Environmental Perception Based on SLAM Technology

no code yet • 6 Apr 2024

In addition to environmental perception sensors such as cameras, radars, etc.

DifFUSER: Diffusion Model for Robust Multi-Sensor Fusion in 3D Object Detection and BEV Segmentation

no code yet • 6 Apr 2024

Diffusion models have recently gained prominence as powerful deep generative models, demonstrating unmatched performance across various domains.

3DGS-Calib: 3D Gaussian Splatting for Multimodal SpatioTemporal Calibration

no code yet • 18 Mar 2024

We introduce 3DGS-Calib, a new calibration method that relies on the speed and rendering accuracy of 3D Gaussian Splatting to achieve multimodal spatiotemporal calibration that is accurate and robust, with a substantial speed-up compared to methods relying on implicit neural representations.

A Survey of IMU Based Cross-Modal Transfer Learning in Human Activity Recognition

no code yet • 17 Mar 2024

We also distinguish and expound on many related but inconsistently used terms in the literature, such as transfer learning, domain adaptation, representation learning, sensor fusion, and multimodal learning, and describe how cross-modal learning fits with all these concepts.

Integration of 5G and Motion Sensors for Vehicular Positioning: A Loosely-Coupled Approach

no code yet • 16 Mar 2024

Most of the 5G positioning literature relies on constant motion models to bridge such 5G outages, which do not capture the true dynamics of the vehicle.

Safe Road-Crossing by Autonomous Wheelchairs: a Novel Dataset and its Experimental Evaluation

no code yet • 13 Mar 2024

Safe road-crossing by self-driving vehicles is a crucial problem to address in smart cities.