We confirm the benefits of our contributions in controlled experiments and report substantial gains in stability and realism in comparison to recent image-to-image translation methods and a variety of other baselines.
Neural rendering techniques promise efficient photo-realistic image synthesis while at the same time providing rich control over scene parameters by learning the physical image formation process.
The task of generating natural images from 3D scenes has been a long-standing goal in computer graphics.
Existing methods for 3D scene flow estimation often fail in the presence of large displacements or local ambiguities, e.g., at texture-less or reflective surfaces.
Further, we demonstrate the utility of our approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenes.
We study the quadratic assignment problem, known in computer vision as graph matching.
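As a sketch of the standard formulation (the notation here is assumed, not taken from the source), graph matching is commonly posed as a quadratic assignment problem over permutation matrices:

```latex
% Hedged sketch: one common QAP formulation of graph matching.
% X is an n-by-n permutation matrix encoding node correspondences,
% M is an n^2-by-n^2 affinity matrix scoring pairwise compatibilities.
\max_{X \in \mathcal{P}} \; \operatorname{vec}(X)^{\top} M \, \operatorname{vec}(X),
\qquad
\mathcal{P} = \left\{ X \in \{0,1\}^{n \times n} \;:\; X\mathbf{1} = \mathbf{1},\; X^{\top}\mathbf{1} = \mathbf{1} \right\}
```

Here the diagonal of $M$ scores unary (node-to-node) similarities and its off-diagonal entries score pairwise (edge-to-edge) compatibilities; the combinatorial constraint set $\mathcal{P}$ is what makes the problem NP-hard in general.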