Compositionally Generalizable 3D Structure Prediction

4 Dec 2020  ·  Songfang Han, Jiayuan Gu, Kaichun Mo, Li Yi, Siyu Hu, Xuejin Chen, Hao Su

Single-image 3D shape reconstruction is an important and long-standing problem in computer vision, and a plethora of existing works constantly push the state-of-the-art performance in the deep learning era. However, a much more difficult and under-explored issue remains: how to generalize the learned skills to unseen object categories with very different shape geometry distributions. In this paper, we introduce the concept of compositional generalizability and propose a novel framework that generalizes better to these unseen categories. We factorize the 3D shape reconstruction problem into proper sub-problems, each tackled by a carefully designed neural sub-module with generalizability in mind. The intuition behind our formulation is that object parts (e.g., slats and cylindrical parts), their relationships (e.g., adjacency and translation symmetry), and shape substructures (e.g., T-junctions and symmetric groups of parts) are largely shared across object categories, even though the object geometries may look very different (e.g., chairs vs. cabinets). Experiments on PartNet show that we achieve superior performance to the state of the art, validating our problem factorization and network designs.
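To make the factorization idea concrete, the sketch below illustrates in plain Python how a shape prediction can be decomposed into a per-part sub-module and a relationship sub-module (here, translational symmetry). All names, the `Part` representation, and the toy "prediction" logic are hypothetical illustrations of the compositional principle, not the authors' actual networks.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Part:
    """A toy part primitive: an axis-aligned box given by center and size."""
    center: Tuple[float, float, float]
    size: Tuple[float, float, float]

def predict_parts(observed: List[Part]) -> List[Part]:
    """Stand-in for the per-part prediction sub-module.

    In the compositional framework this would be a learned network that
    regresses part geometry; here it simply passes the parts through.
    """
    return list(observed)

def apply_translation_symmetry(parts: List[Part],
                               offset: Tuple[float, float, float],
                               copies: int) -> List[Part]:
    """Stand-in for the relationship sub-module.

    Completes the structure by replicating each predicted part along a
    translational symmetry axis (e.g., repeated slats in a chair back).
    """
    out = list(parts)
    for k in range(1, copies):
        for p in parts:
            c = tuple(ci + k * oi for ci, oi in zip(p.center, offset))
            out.append(Part(center=c, size=p.size))
    return out

def reconstruct(observed: List[Part]) -> List[Part]:
    """Compose the sub-modules: predict parts, then apply their relationships."""
    parts = predict_parts(observed)
    return apply_translation_symmetry(parts, offset=(1.0, 0.0, 0.0), copies=4)

shape = reconstruct([Part((0.0, 0.0, 0.0), (0.5, 0.5, 0.5))])
```

Because the per-part and relationship sub-modules reason about category-agnostic structure (parts and symmetries) rather than whole-object geometry, the same pipeline applies unchanged to a chair or a cabinet — which is the generalization argument the abstract makes.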
