Search Results for author: Xiyang Wang

Found 6 papers, 1 paper with code

DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association

1 code implementation • 24 Feb 2022 • Xiyang Wang, Chunyun Fu, Zhankun Li, Ying Lai, JiaWei He

This association mechanism tracks an object in the 2D domain when the object is far away and detected only by the camera, and updates the 2D trajectory with 3D information once the object enters the LiDAR field of view, achieving a smooth fusion of the 2D and 3D trajectories.

3D Multi-Object Tracking
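
For context, the two-stage association described above can be sketched roughly as follows. This is a minimal illustration, not the authors' DeepFusionMOT implementation; the Track class and the match_3d / match_2d helpers are hypothetical placeholders.

```python
# Sketch of the camera-LiDAR association idea: far objects seen only by the
# camera are tracked in 2D, and a track is updated with 3D state once a LiDAR
# detection is associated with it. Names and structures are illustrative only.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Track:
    track_id: int
    box2d: List[float]                    # [x1, y1, x2, y2] image-plane box
    box3d: Optional[List[float]] = None   # [x, y, z, l, w, h, yaw] once LiDAR confirms


def step(tracks: List[Track],
         cam_dets_2d: List[List[float]],
         lidar_dets_3d: List[List[float]],
         match_3d: Callable[..., List[Tuple[Track, List[float]]]],
         match_2d: Callable[..., List[Tuple[Track, List[float]]]]) -> List[Track]:
    """One tracking step of the two-stage association."""
    matched = set()

    # 1) Associate LiDAR detections with tracks and update their 3D state.
    for trk, det3d in match_3d(tracks, lidar_dets_3d):
        trk.box3d = det3d
        matched.add(trk.track_id)

    # 2) Associate camera detections with the still-unmatched tracks, so that
    #    far-away objects outside the LiDAR field of view stay alive in 2D.
    remaining = [t for t in tracks if t.track_id not in matched]
    for trk, det2d in match_2d(remaining, cam_dets_2d):
        trk.box2d = det2d
        matched.add(trk.track_id)

    return tracks
```

The matching functions would typically combine an IoU or distance cost with a one-to-one assignment (e.g. Hungarian matching); that choice is left open here.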

Hierarchical View Predictor: Unsupervised 3D Global Feature Learning through Hierarchical Prediction among Unordered Views

no code implementations • 8 Aug 2021 • Zhizhong Han, Xiyang Wang, Yu-Shen Liu, Matthias Zwicker

To mine highly discriminative information from unordered views, HVP performs a novel hierarchical view prediction over each view pair and aggregates the knowledge learned from the predictions over all view pairs into a global feature.
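
Loosely, the pair-wise prediction and aggregation could look like the PyTorch sketch below. The encoder, predictor, and mean-pooling choices are assumptions made for illustration, not HVP's actual architecture.

```python
# Rough sketch: encode each view, predict one view's feature from the other
# within every ordered pair, and pool the pair-wise outputs into one global
# feature. Assumes at least two 64x64 grayscale views per shape.

import torch
import torch.nn as nn
from itertools import permutations


class ViewPairAggregator(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Per-view encoder (placeholder: a tiny CNN).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Predicts the feature of one view from the other view in a pair.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, views: torch.Tensor):
        # views: (V, 1, 64, 64) unordered views of one shape, V >= 2
        feats = self.encoder(views)                      # (V, D)
        pair_feats, pred_loss = [], 0.0
        for i, j in permutations(range(feats.size(0)), 2):
            pred = self.predictor(feats[i])              # predict view j from view i
            pred_loss = pred_loss + (pred - feats[j].detach()).pow(2).mean()
            pair_feats.append(pred)
        # Aggregate knowledge from all view pairs into one global feature.
        global_feat = torch.stack(pair_feats).mean(dim=0)
        return global_feat, pred_loss
```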

3DViewGraph: Learning Global Features for 3D Shapes from A Graph of Unordered Views with Attention

no code implementations • 17 May 2019 • Zhizhong Han, Xiyang Wang, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, C. L. Philip Chen

Then, the content and spatial information of each pair of view nodes are encoded by a novel spatial pattern correlation, where the correlation is computed among latent semantic patterns.
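
One way to read "correlation among latent semantic patterns" is a similarity matrix between the pattern vectors of two view nodes. The sketch below assumes each view is summarised by K pattern vectors, which is an illustrative simplification rather than the paper's exact formulation.

```python
# Loose illustration of a pattern-level correlation between two view nodes.
# Shapes and the cosine normalisation are assumptions for illustration only.

import torch
import torch.nn.functional as F


def spatial_pattern_correlation(patterns_a: torch.Tensor,
                                patterns_b: torch.Tensor) -> torch.Tensor:
    """patterns_a, patterns_b: (K, D) latent pattern vectors of two views.
    Returns a (K, K) matrix of similarities between every pattern pair."""
    a = F.normalize(patterns_a, dim=-1)
    b = F.normalize(patterns_b, dim=-1)
    return a @ b.t()   # cosine similarity between patterns of the two views
```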

Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences

no code implementations • 7 Nov 2018 • Zhizhong Han, Mingyang Shang, Xiyang Wang, Yu-Shen Liu, Matthias Zwicker

A recent method employs 3D voxels to represent 3D shapes, but this limits the approach to low resolutions because the memory and computational cost of a voxel grid grows cubically with resolution.

3D Shape Representation • Cross-Modal Retrieval +1
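
The cubic growth is easy to quantify: doubling the grid resolution multiplies the number of voxels by eight. A quick illustration:

```python
# Voxel count and approximate float32 memory at a few grid resolutions.
for r in (32, 64, 128, 256):
    cells = r ** 3
    print(f"{r:>3}^3 grid -> {cells:,} voxels "
          f"(~{cells * 4 / 2**20:.1f} MiB as float32)")
```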
