Neighborhood Normalization for Robust Geometric Feature Learning

Extracting geometric features from 3D models is a common first step in applications such as 3D registration, tracking, and scene flow estimation. Many hand-crafted and learning-based methods aim to produce consistent and distinguishable geometric features for 3D models with partial overlap. These methods work well when the point density and scale of the overlapping 3D objects are similar, but struggle in applications where the 3D data are acquired independently with unknown global scale and scene overlap. Unfortunately, such resolution mismatches are common in practice, e.g., when aligning data from multiple sensors. In this work, we introduce a new normalization technique, Batch-Neighborhood Normalization, which aims to improve robustness to the variation in the mean and standard deviation of local feature distributions that can arise in samples with varying point density. We empirically demonstrate that the presented normalization method compares favorably to existing methods on common point registration benchmarks in indoor and outdoor environments and on a clinical dataset, in both standard and, particularly, resolution-mismatch settings. The source code and clinical dataset are available at https://github.com/lppllppl920/NeighborhoodNormalization-Pytorch.
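
The linked repository is in PyTorch. The sketch below is only a minimal illustration of the general idea of normalizing each point's feature vector with statistics gathered from its local neighborhood; it is not the paper's actual Batch-Neighborhood Normalization (which, per its name, also involves batch-level statistics). The function name `neighborhood_normalize`, the tensor shapes, and the k-NN neighborhood construction are assumptions made for this example; refer to the repository for the authors' implementation.

```python
import torch

def neighborhood_normalize(features, neighbor_idx, eps=1e-5):
    # Hypothetical sketch: standardize each point's feature vector using the
    # mean and variance of the features in its local neighborhood.
    #   features:     (N, C) per-point feature vectors
    #   neighbor_idx: (N, K) indices of the K nearest neighbors of each point
    neighbors = features[neighbor_idx]                      # (N, K, C)
    mean = neighbors.mean(dim=1)                            # (N, C)
    var = ((neighbors - mean.unsqueeze(1)) ** 2).mean(dim=1)
    return (features - mean) / torch.sqrt(var + eps)

# Example usage on a random point cloud with k-NN neighborhoods.
points = torch.randn(1024, 3)                               # xyz coordinates
feats = torch.randn(1024, 32)                               # per-point features
knn_idx = torch.cdist(points, points).topk(16, largest=False).indices  # (N, 16)
normalized = neighborhood_normalize(feats, knn_idx)         # (1024, 32)
```

In this simplified form, each point is normalized purely from its own neighborhood, so the statistics do not depend on the global sampling density of the cloud, which is presumably the kind of robustness to density variation the abstract describes.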
