CompleteDT: Point Cloud Completion with Dense Augment Inference Transformers

30 May 2022  ·  Jun Li, Shangwei Guo, Shaokun Han ·

The point cloud completion task aims to predict the missing parts of incomplete point clouds and generate complete point clouds with fine details. In this paper, we propose a novel point cloud completion network, CompleteDT. Specifically, features are learned from point clouds at different resolutions, which are sampled from the incomplete input, and are converted into a series of spots based on the geometric structure. Then, the transformer-based Dense Relation Augment Module (DRA) is proposed to learn features within spots and to model the correlations among them. The DRA consists of the Point Local-Attention Module (PLA) and the Point Dense Multi-Scale Attention Module (PDMA): the PLA captures local information within each spot by adaptively weighting its neighbors, and the PDMA exploits the global relationships between spots in a multi-scale, densely connected manner. Lastly, the complete shape is predicted from the spots by the Multi-resolution Point Fusion Module (MPF), which gradually generates complete point clouds from the spots and updates the spots based on the generated point clouds. Experimental results show that, because the transformer-based DRA learns expressive features from the incomplete input and the MPF fully exploits these features to predict the complete shape, our method largely outperforms state-of-the-art methods.
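The core idea of the PLA, adaptively weighting a point's neighbors when aggregating local features, can be illustrated with a minimal sketch. This is a hypothetical illustration of k-nearest-neighbor softmax attention, not the authors' exact module; the function name `local_attention` and all shapes are assumptions for demonstration.

```python
import numpy as np

def local_attention(points, feats, k=4):
    """Sketch of neighbor-weighted local attention (hypothetical, not the
    paper's exact PLA): each point attends to its k nearest neighbors and
    aggregates their features with softmax weights."""
    n = points.shape[0]
    # Pairwise squared distances between all points.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    out = np.empty_like(feats)
    for i in range(n):
        idx = np.argsort(d2[i])[:k]        # k nearest neighbors (incl. self)
        scores = feats[idx] @ feats[i]     # dot-product similarity to center
        w = np.exp(scores - scores.max())  # numerically stable softmax
        w /= w.sum()
        out[i] = w @ feats[idx]            # adaptively weighted aggregation
    return out

pts = np.random.default_rng(0).normal(size=(16, 3))   # toy point cloud
f = np.random.default_rng(1).normal(size=(16, 8))     # toy per-point features
g = local_attention(pts, f, k=4)
print(g.shape)  # (16, 8): one aggregated feature per point
```

A full transformer module would add learned query/key/value projections and multiple heads; the sketch keeps only the neighbor-weighting step that distinguishes local attention from plain max- or average-pooling.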

