26 papers with code • 4 benchmarks • 4 datasets
Deep Generative Models (DGMs) are known for their superior capability in generating realistic data.
Our method synthesizes plausible layouts and objects, respecting the interplay among the multiple objects in an image.
Because the choice of words and syntax varies across textual descriptions, it is challenging for a system to reliably produce consistently desirable output from different forms of language input.
We present SPSG, a novel approach to generate high-quality, colored 3D models of scenes from RGB-D scan observations by learning to infer unobserved scene geometry and color in a self-supervised fashion.
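To make the self-supervision idea concrete, here is a minimal hypothetical sketch, not SPSG's actual pipeline: training pairs are fabricated by hiding a random subset of the voxels that were actually observed, so the hidden-but-known values supervise a completion model without requiring any complete ground-truth scene. All names here (`self_supervised_pair`, `drop_fraction`) are illustrative assumptions.

```python
import numpy as np

def self_supervised_pair(full_scan, drop_fraction=0.5, rng=None):
    """Hypothetical data construction for self-supervised completion:
    hide a random subset of the observed voxels and treat the rest as
    the 'incomplete' input; the hidden-but-observed voxels supervise
    the model, so no complete ground-truth scene is ever needed."""
    rng = rng or np.random.default_rng()
    observed = ~np.isnan(full_scan)                 # NaN marks unobserved
    hide = observed & (rng.random(full_scan.shape) < drop_fraction)
    incomplete = full_scan.copy()
    incomplete[hide] = np.nan                       # input with extra holes
    target_mask = hide                              # supervise only hidden voxels
    return incomplete, full_scan, target_mask

rng = np.random.default_rng(0)
scan = rng.standard_normal((8, 8, 8))               # toy TSDF-like grid
scan[rng.random(scan.shape) < 0.3] = np.nan         # simulate sensor holes
inp, target, mask = self_supervised_pair(scan, rng=rng)
print(mask.sum(), "voxels available as self-supervised targets")
```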
A crucial ability of human intelligence is to build up models of individual 3D objects from partial scene observations.
Recently, there has been increasing interest in scene generation within the research community.
The fact that patches are generated independently of one another inspires a wide range of new applications: first, "Patch-Inspired Image Generation" enables us to generate the entire image based on a single patch.
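As a rough illustration of why patch independence enables this (a hypothetical sketch, not the paper's actual model): if each patch is produced from a shared latent code and its own coordinates alone, a full image is just a tiling of independent calls, and the same call that yields one patch can seed generation of the rest.

```python
import numpy as np

PATCH = 16  # hypothetical patch size

def generate_patch(z, row, col):
    """Stand-in for a learned patch generator G(z, row, col): each patch
    depends only on the shared latent z and its own grid coordinates,
    never on neighboring patches."""
    rng = np.random.default_rng(row * 10000 + col)
    return np.tanh(z.mean() + 0.1 * rng.standard_normal((PATCH, PATCH, 3)))

def generate_image(z, rows, cols):
    """Independence makes full-image synthesis a pure tiling of patches."""
    strips = [np.concatenate([generate_patch(z, r, c) for c in range(cols)], axis=1)
              for r in range(rows)]
    return np.concatenate(strips, axis=0)

z = np.random.default_rng(0).standard_normal(128)
print(generate_patch(z, 2, 3).shape)   # (16, 16, 3): one patch in isolation
print(generate_image(z, 4, 4).shape)   # (64, 64, 3): the whole image by tiling
```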
This paper studies how weight repetition, when the same weight occurs multiple times within or across weight vectors, can be exploited to save energy and improve performance during CNN inference.
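A minimal sketch of the general idea (function names are my own, and this is not the paper's hardware mechanism): when weights are quantized, few distinct values remain, so activations that share a weight can be summed first and the multiplication paid only once per unique value.

```python
from collections import defaultdict
import math

def dot_naive(weights, acts):
    """Standard dot product: one multiplication per weight."""
    return sum(w * a for w, a in zip(weights, acts))

def dot_weight_repetition(weights, acts):
    """Group activations by weight value, sum each group, then multiply
    once per *unique* weight: fewer multiplies when weights repeat."""
    group_sums = defaultdict(float)
    for w, a in zip(weights, acts):
        group_sums[w] += a
    return sum(w * s for w, s in group_sums.items())

# Quantized weights repeat heavily, so the unique-value count is small.
weights = [0.5, -0.25, 0.5, 0.5, -0.25, 0.0, 0.5]
acts = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
assert math.isclose(dot_naive(weights, acts), dot_weight_repetition(weights, acts))
print(f"{len(set(weights))} multiplies instead of {len(weights)}")
```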