Search Results for author: Peishan Cong

Found 9 papers, 3 papers with code

LaserHuman: Language-guided Scene-aware Human Motion Generation in Free Environment

1 code implementation 20 Mar 2024 Peishan Cong, Ziyi Wang, Zhiyang Dou, Yiming Ren, Wei Yin, Kai Cheng, Yujing Sun, Xiaoxiao Long, Xinge Zhu, Yuexin Ma

Language-guided scene-aware human motion generation has great significance for entertainment and robotics.

Human-centric Scene Understanding for 3D Large-scale Scenarios

1 code implementation ICCV 2023 Yiteng Xu, Peishan Cong, Yichen Yao, Runnan Chen, Yuenan Hou, Xinge Zhu, Xuming He, Jingyi Yu, Yuexin Ma

Human-centric scene understanding is significant for real-world applications, but it is extremely challenging due to the existence of diverse human poses and actions, complex human-environment interactions, severe occlusions in crowds, etc.

Action Recognition · Scene Understanding · +1

WildRefer: 3D Object Localization in Large-scale Dynamic Scenes with Multi-modal Visual Data and Natural Language

no code implementations 12 Apr 2023 Zhenxiang Lin, Xidong Peng, Peishan Cong, Yuenan Hou, Xinge Zhu, Sibei Yang, Yuexin Ma

We introduce the task of 3D visual grounding in large-scale dynamic scenes based on natural linguistic descriptions and online captured multi-modal visual data, including 2D images and 3D LiDAR point clouds.

Autonomous Driving · Object Localization · +1

Weakly Supervised 3D Multi-person Pose Estimation for Large-scale Scenes based on Monocular Camera and Single LiDAR

no code implementations 30 Nov 2022 Peishan Cong, Yiteng Xu, Yiming Ren, Juze Zhang, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma

Motivated by this, we propose a monocular camera and single LiDAR-based method for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and robust to lighting conditions.

3D Multi-Person Pose Estimation · 3D Pose Estimation · +2

LiCamGait: Gait Recognition in the Wild by Using LiDAR and Camera Multi-modal Visual Sensors

no code implementations 22 Nov 2022 Xiao Han, Peishan Cong, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma

LiDAR can capture accurate depth information in large-scale scenarios regardless of lighting conditions, and the captured point cloud contains gait-related 3D geometric properties and dynamic motion characteristics.

Gait Recognition in the Wild

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors

no code implementations 30 May 2022 Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and 4 IMUs, which are easy to set up and lightweight to wear.

Sensor Fusion · Translation

STCrowd: A Multimodal Dataset for Pedestrian Perception in Crowded Scenes

1 code implementation CVPR 2022 Peishan Cong, Xinge Zhu, Feng Qiao, Yiming Ren, Xidong Peng, Yuenan Hou, Lan Xu, Ruigang Yang, Dinesh Manocha, Yuexin Ma

In addition, considering the property of sparse global distribution and density-varying local distribution of pedestrians, we further propose a novel method, Density-aware Hierarchical heatmap Aggregation (DHA), to enhance pedestrian perception in crowded scenes.

Pedestrian Detection · Sensor Fusion

Self-supervised Point Cloud Completion on Real Traffic Scenes via Scene-concerned Bottom-up Mechanism

no code implementations 20 Mar 2022 Yiming Ren, Peishan Cong, Xinge Zhu, Yuexin Ma

In this paper, we propose a self-supervised point cloud completion method (TraPCC) for vehicles in real traffic scenes without any complete data.

Point Cloud Completion

Input-Output Balanced Framework for Long-tailed LiDAR Semantic Segmentation

no code implementations 26 Mar 2021 Peishan Cong, Xinge Zhu, Yuexin Ma

A thorough and holistic scene understanding is crucial for autonomous vehicles, where LiDAR semantic segmentation plays an indispensable role.

Autonomous Vehicles · LIDAR Semantic Segmentation · +2
