Mehdi S. M. Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey Dosovitskiy, Jakob Uszkoreit, Thomas Funkhouser, Andrea Tagliasacchi
In this work, we propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area, infers a "set-latent scene representation", and synthesises novel views, all in a single feed-forward pass.
In this work, we model the multivariate temporal dynamics of time series with an autoregressive deep learning model in which the data distribution is represented by a conditioned normalizing flow.
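As a minimal sketch of the idea behind a conditioned normalizing flow, the snippet below evaluates an autoregressive log-likelihood with a 1-D affine flow over a standard-normal base, where the flow parameters are conditioned on the previous observation. The function names and the fixed linear conditioner are illustrative assumptions, not the paper's architecture (which conditions on the hidden state of a deep autoregressive network).

```python
import math

def affine_flow_logpdf(x, shift, log_scale):
    """Log-density of x under an affine flow applied to a standard normal base.

    z = (x - shift) * exp(-log_scale) is the inverse transform; the
    change-of-variables formula adds the log-determinant term -log_scale.
    """
    z = (x - shift) * math.exp(-log_scale)
    base_logpdf = -0.5 * (z * z + math.log(2 * math.pi))
    return base_logpdf - log_scale

def conditioned_params(prev_x):
    # Hypothetical conditioner: a fixed linear map for illustration only.
    # In the paper this role is played by a trained autoregressive network.
    shift = 0.9 * prev_x
    log_scale = 0.1
    return shift, log_scale

# Autoregressive log-likelihood of a short series: each step's flow
# parameters depend on the previous observation.
series = [0.0, 0.8, 0.7, 0.9]
loglik, prev = 0.0, 0.0
for x in series:
    shift, log_scale = conditioned_params(prev)
    loglik += affine_flow_logpdf(x, shift, log_scale)
    prev = x
```

Because the flow is invertible with a tractable Jacobian, the exact log-likelihood can be computed and maximized directly, which is what makes this family attractive for density modeling.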
Cutting and pasting image segments feels intuitive: the choice of source templates gives artists flexibility in recombining existing source material.
Visualizing an outfit is an essential part of shopping for clothes.
We introduce a hierarchical Bayesian approach to tackle the challenging problem of size recommendation in e-commerce fashion.
To alleviate this problem, we propose a deep-learning-based content-collaborative methodology for personalized size and fit recommendation.
This helps the bandit framework to select the best agents early, since these rewards are smoother and less sparse than the environment reward.
Parametric generative deep models are state-of-the-art for photo and non-photo realistic image stylization.
To formally describe an optimal update direction, we introduce a theoretical framework that derives requirements on both the divergence and the corresponding method for determining an update direction; these requirements guarantee unbiased mini-batch updates in the direction of steepest descent.
This work explores maximum likelihood optimization of neural networks through hypernetworks.
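The core wiring of a hypernetwork, one network producing the parameters of another, can be sketched as below. The fixed deterministic map standing in for the hypernetwork is a labeled assumption; in the actual setting both networks are neural and trained jointly under a maximum-likelihood objective.

```python
def hypernetwork(embedding):
    # Toy hypernetwork: maps a 2-d embedding to the weights and bias of a
    # linear target model. A real hypernetwork is itself a trained neural
    # net; this fixed map only illustrates the parameter-generation wiring.
    w = [0.5 * embedding[0], -0.25 * embedding[1]]
    b = 0.1 * (embedding[0] + embedding[1])
    return w, b

def target_net(w, b, x):
    # The target network never owns its parameters; it receives them
    # from the hypernetwork at every forward pass.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w, b = hypernetwork([1.0, 2.0])
y = target_net(w, b, [3.0, 4.0])
```

Gradients of the likelihood with respect to the hypernetwork's own parameters flow through the generated weights, so optimizing the hypernetwork indirectly searches the target network's weight space.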
We present a novel method for image analogy problems: it learns the relation between paired images in the training data, then generalizes to generate images that correspond to the relation but were never seen in the training set.
Second, we show that the image generation with PSGANs has properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples, which lie perceptually between the textures of the original dataset.
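The smooth-interpolation property above amounts to sampling the generator along a straight path in the structured noise space. A minimal sketch of that path construction (the generator itself is omitted; vector sizes and values are arbitrary placeholders):

```python
def lerp(z_a, z_b, t):
    """Linear interpolation between two noise vectors for t in [0, 1];
    decoding points along this path traverses the texture manifold."""
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

# Two endpoint codes in the structured noise space (placeholder values).
z_a = [0.0, 1.0, -1.0]
z_b = [1.0, 0.0, 1.0]

# Five evenly spaced codes from z_a to z_b; each would be fed to the
# generator to render a texture perceptually between the two endpoints.
path = [lerp(z_a, z_b, t / 4) for t in range(5)]
```

That intermediate codes decode to plausible novel textures, rather than blends with artifacts, is what justifies calling the learned space a texture manifold.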
Generative adversarial networks (GANs) are a recent approach to train generative models of data, which have been shown to work particularly well on image data.