Creating 3D content via stylization is a promising yet challenging problem in computer vision and graphics research.
Deep-learning-based approaches to retinal lesion segmentation often require abundant, precise pixel-wise annotations.
In this work, we make the first attempt, to the best of our knowledge, to explicitly incorporate local geometry information into masked auto-encoding, and propose a novel Masked Surfel Prediction (MaskSurf) method.
The challenge of the Simulation-to-Reality (Sim2Real) domain gap could be mitigated by domain-adaptation learning algorithms; however, we argue that generating synthetic point clouds through more physically realistic rendering is a powerful alternative, as it can capture systematic non-uniform noise patterns.
Invertible neural networks based on Coupling Flows (CFlows) have various applications such as image synthesis and data compression.
Many learning-based approaches have difficulty generalizing to unseen data, as the generality of their learned priors is limited by the scale and variation of the training samples.