Assembling Semantically-Disentangled Representations for Predictive-Generative Models via Adaptation from Synthetic Domain

23 Feb 2020  ·  Burkay Donderici, Caleb New, Chenliang Xu

Deep neural networks can form high-level hierarchical representations of input data. These representations have been shown to enable a variety of useful applications. However, such representations are typically derived from the statistics of the data and may not conform to the semantic representation the application requires. Conditional models are commonly used to overcome this mismatch, but they require large annotated datasets, which are difficult to come by and costly to create. In this paper, we show that semantically-aligned representations can instead be generated with the help of a physics-based engine. This is accomplished by creating a synthetic dataset with decoupled attributes, learning an encoder for the synthetic dataset, and augmenting prescribed attributes from the synthetic domain with attributes from the real domain. We show that the proposed method (SYNTH-VAE-GAN) can construct a conditional predictive-generative model of human face attributes without relying on real data labels.
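
The core idea in the abstract, training an encoder on synthetic images whose generative attributes are known exactly from the rendering engine and tying a prescribed slice of the latent code to those attribute values, can be illustrated with a small sketch. The PyTorch code below is a minimal illustration under assumed choices: the class names, layer sizes, 64x64 input resolution, attribute dimensionality, and the loss weights `beta` and `gamma` are all assumptions for this sketch, not the paper's actual architecture, and the GAN component of SYNTH-VAE-GAN is omitted.

```python
# Minimal sketch: a VAE-style encoder whose first n_attr latent dimensions are
# supervised with the synthetic engine's known attribute values (pose, lighting,
# expression, ...). Everything here is an illustrative assumption, not the
# paper's reported architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SynthEncoder(nn.Module):
    """Encodes a 3x64x64 image into (mu, logvar); the first n_attr latent
    dimensions are trained to match the synthetic attribute values."""
    def __init__(self, n_attr=8, n_residual=56):
        super().__init__()
        self.n_attr = n_attr
        z = n_attr + n_residual
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.mu = nn.Linear(128 * 8 * 8, z)
        self.logvar = nn.Linear(128 * 8 * 8, z)

    def forward(self, x):
        h = self.features(x)
        return self.mu(h), self.logvar(h)

class SynthDecoder(nn.Module):
    """Decodes a latent code back to a 3x64x64 image."""
    def __init__(self, z=64):
        super().__init__()
        self.fc = nn.Linear(z, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.deconv(h)

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def synth_step(encoder, decoder, x_synth, attrs, beta=1.0, gamma=10.0):
    """One training step on synthetic data: VAE loss plus an alignment term
    forcing the prescribed latent slice to equal the engine's attributes."""
    mu, logvar = encoder(x_synth)
    z = reparameterize(mu, logvar)
    recon = decoder(z)
    rec_loss = F.mse_loss(recon, x_synth)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    align = F.mse_loss(mu[:, :encoder.n_attr], attrs)  # semantic supervision
    return rec_loss + beta * kl + gamma * align
```

Under this sketch, a real image could be encoded, the attribute slice of its code edited directly, and the result decoded, which is the kind of conditional prediction and generation the abstract describes; the paper's adaptation from the synthetic to the real domain would be layered on top of this supervised-slice training.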
