We use this framework to design Fourier PNFs, which match state-of-the-art performance in signal representation tasks that use neural fields.
In this work, we address an important problem of optical see-through (OST) augmented reality: non-negative image synthesis.
In applications such as optical see-through and projector-based augmented reality, producing an image amounts to non-negative image generation: one can only add light to an existing scene.
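To make the constraint concrete, here is a minimal sketch (the helper name and toy images are assumptions, not the paper's method): since only non-negative light can be added to a background, any target pixel darker than the background is unreachable, and the best pointwise approximation simply clips the negative part of the residual.

```python
import numpy as np

def best_additive_image(background, target):
    """Best pointwise approximation of `target` when light can only be ADDED.

    Displayed image = background + added, with added >= 0, so target
    pixels darker than the background cannot be reproduced.
    (Hypothetical helper illustrating the constraint.)
    """
    added = np.clip(target - background, 0.0, None)  # non-negative light only
    return background + added

background = np.array([[0.2, 0.8], [0.5, 0.1]])
target     = np.array([[0.6, 0.3], [0.5, 0.9]])
shown = best_additive_image(background, target)
# shown[0, 1] stays at 0.8: the display cannot darken that pixel to 0.3.
```

The clipped residual makes the failure mode of OST displays explicit: error is concentrated exactly where the target is darker than the real-world background.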
Point cloud generation thus amounts to moving randomly sampled points to high-density areas.
Specifically, we learn a two-level hierarchy of distributions where the first level is the distribution of shapes and the second level is the distribution of points given a shape.
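The two ideas above can be sketched together in a toy example (all distributions here are stand-in Gaussians, not the learned models): the first level draws a shape latent from a prior, and the second level starts from randomly sampled points and moves them toward high-density regions of the point distribution conditioned on that latent.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_shape_latent():
    # First level: draw a shape latent (here a 3-D vector from a toy prior).
    return rng.normal(size=3)

def sample_points(latent, n=1024, steps=20, step_size=0.1):
    # Second level: start from random points and repeatedly move them
    # along the gradient of a toy log-density N(latent, I), i.e. toward
    # the high-density region of the (stand-in) conditional distribution.
    pts = rng.normal(size=(n, 3)) * 3.0
    for _ in range(steps):
        grad_logp = latent - pts          # gradient of log N(latent, I)
        pts = pts + step_size * grad_logp
    return pts

z = sample_shape_latent()
cloud = sample_points(z)                  # points concentrate near the latent
```

In the actual model the toy Gaussian would be replaced by a learned conditional density, but the sampling structure (latent first, then gradient-style point movement) is the same.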
Low-precision operations can provide scalability, memory savings, portability, and energy efficiency.
To address these two challenges, we propose a novel learning-based discriminative evaluation metric that is directly trained to distinguish between human- and machine-generated captions.
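The core idea can be sketched in a few lines (the captions, features, and classifier here are toy assumptions, not the proposed metric): train a binary classifier on human vs. machine captions, then score a candidate caption by the classifier's probability that it is human-written.

```python
import numpy as np

# Toy data (assumed for illustration): human captions vs. repetitive
# machine captions.
human   = ["a dog leaps over a weathered fence", "two children share an umbrella"]
machine = ["a dog is a dog on a fence", "a person with a person and a person"]

vocab = sorted({w for s in human + machine for w in s.split()})

def featurize(s):
    # Bag-of-words counts over the toy vocabulary.
    counts = np.zeros(len(vocab))
    for w in s.split():
        if w in vocab:
            counts[vocab.index(w)] += 1
    return counts

X = np.stack([featurize(s) for s in human + machine])
y = np.array([1, 1, 0, 0])                # 1 = human-written

w = np.zeros(X.shape[1])
for _ in range(500):                      # plain logistic-regression GD
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def score(caption):
    # Metric value = estimated probability the caption is human-written.
    return 1 / (1 + np.exp(-featurize(caption) @ w))
```

A real instantiation would replace the bag-of-words features with learned sentence (and image) representations, but the metric interface, caption in, human-likeness probability out, is the same.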