Search Results for author: Haocheng Wan

Found 2 papers, 1 paper with code

PatchFormer: An Efficient Point Transformer with Patch Attention

no code implementations • CVPR 2022 • Zhang Cheng, Haocheng Wan, Xinyi Shen, Zizhao Wu

Extensive experiments demonstrate that our network achieves comparable accuracy on general point cloud learning tasks with a 9.2x speed-up over previous point Transformers.

Semantic Segmentation

PVT: Point-Voxel Transformer for Point Cloud Learning

2 code implementations • 13 Aug 2021 • Cheng Zhang, Haocheng Wan, Xinyi Shen, Zizhao Wu

The recently developed pure Transformer architectures have attained promising accuracy on point cloud learning benchmarks compared to convolutional neural networks.

3D Object Detection • 3D Part Segmentation • +2
