Search Results for author: Lingfei Ma

Found 7 papers, 2 papers with code

Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways

1 code implementation • 18 Mar 2020 • Weikai Tan, Nannan Qin, Lingfei Ma, Ying Li, Jing Du, Guorong Cai, Ke Yang, Jonathan Li

Semantic segmentation of large-scale outdoor point clouds is essential for urban scene understanding in various applications, especially autonomous driving and urban high-definition (HD) mapping.

Autonomous Driving • Scene Understanding • +2
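
Benchmarks built on datasets like Toronto-3D are commonly scored with mean intersection-over-union (mIoU) over the semantic classes. The sketch below is a generic mIoU computation, not the dataset's official evaluation script; the class count, label arrays, and handling of absent classes are assumptions.

    import numpy as np

    def mean_iou(pred, gt, num_classes):
        """Mean intersection-over-union across classes present in the
        ground truth. Generic metric sketch; the official Toronto-3D
        evaluation protocol may differ (an assumption here)."""
        ious = []
        for c in range(num_classes):
            if not np.any(gt == c):          # skip classes absent from gt
                continue
            inter = np.sum((pred == c) & (gt == c))
            union = np.sum((pred == c) | (gt == c))
            ious.append(inter / union)
        return float(np.mean(ious))

    # Hypothetical per-point labels, not Toronto-3D data.
    pred = np.array([0, 1, 1, 2, 2, 2])
    gt   = np.array([0, 1, 2, 2, 2, 1])
    print(mean_iou(pred, gt, num_classes=3))   # ~0.611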

OpenGF: An Ultra-Large-Scale Ground Filtering Dataset Built Upon Open ALS Point Clouds Around the World

1 code implementation • 24 Jan 2021 • Nannan Qin, Weikai Tan, Lingfei Ma, Dedong Zhang, Jonathan Li

Ground filtering has remained a widely studied but incompletely resolved bottleneck for decades in the automatic generation of high-precision digital elevation models (DEMs), owing to dramatic variations in topography and the complex structures of objects.

3D Semantic Segmentation • Scene Understanding
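
For context, ground filtering labels each LiDAR point as ground or non-ground so that a DEM can be interpolated from the ground subset. The following is a deliberately naive grid-minimum baseline for illustration only, not OpenGF's method; the cell size and height tolerance are arbitrary assumptions, and such a filter breaks down on slopes and under dense vegetation.

    import numpy as np

    def naive_ground_filter(points, cell=2.0, height_tol=0.3):
        """Mark a point as ground if it lies within height_tol metres of
        the lowest point in its XY grid cell. A crude illustrative
        baseline only; real ground filters handle slopes and vegetation
        far more carefully."""
        cells = np.floor(points[:, :2] / cell).astype(np.int64)
        keys = cells[:, 0] * 1_000_003 + cells[:, 1]   # coarse cell hash
        ground = np.zeros(len(points), dtype=bool)
        for k in np.unique(keys):
            idx = np.where(keys == k)[0]
            zmin = points[idx, 2].min()
            ground[idx] = points[idx, 2] <= zmin + height_tol
        return ground

    # Hypothetical point cloud: N x 3 array of (x, y, z) in metres.
    pts = np.random.rand(1000, 3) * [100.0, 100.0, 5.0]
    labels = naive_ground_filter(pts)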

Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review

no code implementations • 20 May 2020 • Ying Li, Lingfei Ma, Zilong Zhong, Fei Liu, Dongpu Cao, Jonathan Li, Michael A. Chapman

In this paper, we provide a systematic review of compelling deep learning architectures applied to LiDAR point clouds, detailing their use in specific autonomous driving tasks such as segmentation, detection, and classification.

3D Semantic Segmentation • Autonomous Driving • +4

A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery

no code implementations • 31 Dec 2020 • Lucas Prado Osco, Mauro dos Santos de Arruda, Diogo Nunes Gonçalves, Alexandre Dias, Juliana Batistoti, Mauricio de Souza, Felipe David Georges Gomes, Ana Paula Marques Ramos, Lúcio André de Castro Jorge, Veraldo Liesenberg, Jonathan Li, Lingfei Ma, José Marcato Junior, Wesley Nunes Gonçalves

In the corn plantation datasets (covering both the young and mature growth phases), our approach returned a mean absolute error (MAE) of 6.224 plants per image patch, a mean relative error (MRE) of 0.1038, precision and recall values of 0.856 and 0.905, respectively, and an F-measure of 0.876.
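
For reference, the reported counting metrics follow their standard definitions, sketched below with hypothetical per-patch counts rather than the paper's data. Note that the harmonic mean of the quoted precision and recall is about 0.880, so the reported F-measure of 0.876 presumably reflects rounding or per-dataset averaging.

    import numpy as np

    def count_errors(pred_counts, true_counts):
        """MAE and MRE over image patches, per the standard definitions."""
        pred = np.asarray(pred_counts, dtype=float)
        true = np.asarray(true_counts, dtype=float)
        mae = np.mean(np.abs(pred - true))
        mre = np.mean(np.abs(pred - true) / true)
        return mae, mre

    def f_measure(precision, recall):
        """Harmonic mean of precision and recall."""
        return 2 * precision * recall / (precision + recall)

    # Hypothetical per-patch plant counts, not the paper's data.
    print(count_errors([98, 104, 87], [95, 110, 90]))
    print(f_measure(0.856, 0.905))  # ~0.880; the reported 0.876 likely
                                    # reflects rounding or averaging choices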

Graph Representation Learning for Infrared and Visible Image Fusion

no code implementations • 1 Nov 2023 • Jing Li, Lu Bai, Bin Yang, Chang Li, Lingfei Ma, Edwin R. Hancock

Then, GCNs are performed on the concatenated intra-modal NLss (non-local self-similarity) features of the infrared and visible images, exploring the cross-domain NLss across modalities to reconstruct the fused image.

Graph Representation Learning • Infrared And Visible Image Fusion
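
As background, a single graph-convolution step in the widely used Kipf and Welling formulation is sketched below. This is a generic illustration of running a GCN over concatenated two-modality node features, not the architecture of this paper; the graph construction, feature shapes, and weights are all invented.

    import numpy as np

    def gcn_layer(A, H, W):
        """One graph-convolution step, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W),
        in the standard Kipf & Welling formulation. A generic sketch, not
        the design used in this paper."""
        A_hat = A + np.eye(A.shape[0])                 # add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

    # Hypothetical setup: n graph nodes, f features per modality; the
    # infrared and visible feature matrices are concatenated channel-wise.
    rng = np.random.default_rng(0)
    n, f = 16, 8
    A = (rng.random((n, n)) > 0.7).astype(float)
    A = np.maximum(A, A.T)                             # symmetrize the graph
    H = np.concatenate([rng.random((n, f)), rng.random((n, f))], axis=1)
    W = rng.random((2 * f, f))
    fused_features = gcn_layer(A, H, W)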

Dual-modal Prior Semantic Guided Infrared and Visible Image Fusion for Intelligent Transportation System

no code implementations • 24 Mar 2024 • Jing Li, Lu Bai, Bin Yang, Chang Li, Lingfei Ma, Lixin Cui, Edwin R. Hancock

Therefore, we propose a novel prior-semantic-guided image fusion method based on a dual-modality strategy, improving the performance of infrared and visible image fusion (IVF) in intelligent transportation systems (ITS).

Infrared And Visible Image Fusion • Semantic Segmentation