Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object Detection

20 Apr 2022 · Tianya Zhang, Yi Ge, Peter J. Jin

Background modeling is widely used in intelligent surveillance systems to detect moving targets by subtracting the static background components. Most roadside LiDAR object detection methods extract foreground points by comparing new data points against pre-trained background references built from descriptive statistics over many frames (e.g., voxel density, number of neighbors, maximum distance). However, these solutions are inefficient under heavy traffic, and their parameter values are hard to transfer from one scenario to another. In early studies, the probabilistic background modeling methods widely used in video-based systems were considered unsuitable for roadside LiDAR surveillance because of the sparse and unstructured point cloud data. In this paper, the raw LiDAR data were transformed into a structured representation based on the elevation and azimuth values of each LiDAR point. This high-order tensor representation removes the barrier to efficient high-dimensional multivariate analysis for roadside LiDAR background modeling. The Bayesian Nonparametric (BNP) approach integrates the intensity values with the 3D measurements so that the full measurement data are exploited. The proposed method was compared against two state-of-the-art roadside LiDAR background models, a computer vision benchmark, and deep learning baselines, evaluated at the point, object, and path levels under heavy traffic and challenging weather. The multimodal Weighted Bayesian Gaussian Mixture Model (GMM) handles dynamic backgrounds with noisy measurements and substantially enhances infrastructure-based LiDAR object detection, enabling a range of 3D modeling applications for smart cities.
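To illustrate how such a pipeline might be assembled, the sketch below projects raw points onto an elevation-azimuth grid and fits a Bayesian GMM over per-cell [range, intensity] samples with scikit-learn's BayesianGaussianMixture, flagging low-likelihood points as foreground. This is a minimal sketch under assumed grid sizes, feature choices, and thresholds, not the paper's exact formulation; all function names and parameter values here are illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def to_structured_grid(points, n_azimuth=1800, n_elevation=32):
    """Map raw LiDAR returns [x, y, z, intensity] to (elevation, azimuth) cells.

    Returns per-point cell indices and ranges so that per-cell samples can be
    accumulated over many frames. Grid sizes and the assumed vertical field of
    view (-25 to +15 degrees) are illustrative placeholders.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points[:, :3], axis=1)
    azimuth = np.arctan2(y, x)                                   # [-pi, pi)
    elevation = np.arcsin(np.clip(z / np.maximum(rng, 1e-6), -1.0, 1.0))
    az_idx = ((azimuth + np.pi) / (2 * np.pi) * n_azimuth).astype(int) % n_azimuth
    el_idx = np.clip(((elevation + np.radians(25)) / np.radians(40)
                      * n_elevation).astype(int), 0, n_elevation - 1)
    return el_idx, az_idx, rng

def fit_cell_background(samples, max_components=3):
    """Fit a Bayesian GMM to the [range, intensity] samples of one grid cell.

    The Dirichlet-process prior prunes unused components, so the effective
    number of background modes is inferred rather than hand-tuned.
    """
    bgm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=200,
    )
    bgm.fit(samples)
    return bgm

def is_foreground(bgm, new_samples, log_lik_threshold=-10.0):
    """Flag new points whose log-likelihood under the background model is low."""
    return bgm.score_samples(new_samples) < log_lik_threshold
```

In this reading, the elevation-azimuth indexing plays the role of the structured (tensor) representation described in the abstract, and the per-cell mixture stands in for the weighted background model; the actual weighting scheme and multimodal feature set are defined in the paper itself.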
