1 code implementation • CVPR 2022 • Mehdi S. M. Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lucic, Daniel Duckworth, Alexey Dosovitskiy, Jakob Uszkoreit, Thomas Funkhouser, Andrea Tagliasacchi
In this work, we propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area, infers a "set-latent scene representation", and synthesises novel views, all in a single feed-forward pass.
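As a rough illustration of that pipeline (not the released SRT code), the sketch below patches input views into tokens, runs a transformer encoder to obtain a set-latent scene representation, and decodes query rays against it in a single forward pass; the `TinySRT` name and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class TinySRT(nn.Module):
    """Minimal sketch of the pipeline described above (not the official SRT model).
    Input views -> patch tokens -> transformer encoder (set-latent scene representation)
    -> per-ray transformer decoder -> RGB, all in one feed-forward pass."""
    def __init__(self, d=128):
        super().__init__()
        self.patchify = nn.Conv2d(3, d, kernel_size=8, stride=8)   # image -> patch tokens
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.ray_embed = nn.Linear(6, d)                            # ray origin + direction
        dec_layer = nn.TransformerDecoderLayer(d_model=d, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.to_rgb = nn.Linear(d, 3)

    def forward(self, images, rays):
        # images: (B, N, 3, H, W) input views; rays: (B, R, 6) query rays of the novel view
        B, N, C, H, W = images.shape
        tokens = self.patchify(images.flatten(0, 1))                # (B*N, d, h, w)
        tokens = tokens.flatten(2).transpose(1, 2)                  # patch tokens per view
        tokens = tokens.reshape(B, -1, tokens.shape[-1])            # pool all views into one set
        scene = self.encoder(tokens)                                # set-latent scene representation
        queries = self.ray_embed(rays)                              # one query token per target ray
        return self.to_rgb(self.decoder(queries, scene))            # (B, R, 3) predicted colours

model = TinySRT()
rgb = model(torch.rand(1, 3, 3, 64, 64), torch.rand(1, 1024, 6))    # 3 input views, 1024 query rays
```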
1 code implementation • ICLR 2021 • Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann, Roland Vollgraf
In this work we model the multivariate temporal dynamics of time series via an autoregressive deep learning model, where the data distribution is represented by a conditioned normalizing flow.
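A minimal sketch of that general idea, assuming a single affine-coupling flow conditioned on an RNN summary of the history; the actual model is considerably richer, and all names and sizes here (`FlowForecaster`, hidden widths) are illustrative.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One affine coupling layer whose scale/shift are conditioned on a context vector."""
    def __init__(self, dim, ctx_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, ctx):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, ctx], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                                    # keep scales well-behaved
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=-1), s.sum(dim=-1)    # transformed sample, log|det J|

class FlowForecaster(nn.Module):
    """Sketch: an RNN summarises the history, a conditional flow models p(x_t | history)."""
    def __init__(self, dim, ctx_dim=32):
        super().__init__()
        self.rnn = nn.GRU(dim, ctx_dim, batch_first=True)
        self.coupling = ConditionalAffineCoupling(dim, ctx_dim)

    def log_prob(self, history, x_next):
        _, h = self.rnn(history)                             # summary of past observations
        z, logdet = self.coupling(x_next, h[-1])
        base = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=-1)
        return base + logdet                                 # change-of-variables likelihood

model = FlowForecaster(dim=4)
nll = -model.log_prob(torch.randn(8, 20, 4), torch.randn(8, 4)).mean()
nll.backward()                                               # maximise likelihood of the next step
```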
no code implementations • 16 Oct 2019 • Nikolay Jetchev, Urs Bergmann, Gökhan Yildirim
Cutting and pasting image segments feels intuitive: the choice of source templates gives artists flexibility in recombining existing source material.
no code implementations • 6 Sep 2019 • Kashif Rasul, Ingmar Schuster, Roland Vollgraf, Urs Bergmann
We present a generative model that is defined on finite sets of exchangeable, potentially high-dimensional data.
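The snippet below is not the paper's model; it only illustrates exchangeability with a Deep-Sets-style encoder that pools set elements order-independently into a set-level latent, so the resulting objective is invariant to how the set is ordered. All names and dimensions are made up.

```python
import torch
import torch.nn as nn

class SetVAE(nn.Module):
    """Illustrative exchangeable set model: a permutation-invariant encoder pools elements
    into a set-level latent, and elements are decoded i.i.d. given that latent, so the
    loss does not depend on the ordering of the set."""
    def __init__(self, dim, latent=16, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.to_stats = nn.Linear(hidden, 2 * latent)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x):
        # x: (B, N, dim) -- a batch of sets with N exchangeable elements each
        pooled = self.phi(x).mean(dim=1)                       # order-independent aggregation
        mu, logvar = self.to_stats(pooled).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        recon = self.decoder(z).unsqueeze(1).expand_as(x)      # same conditional for every element
        rec_loss = (recon - x).pow(2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return rec_loss + kl

loss = SetVAE(dim=8)(torch.randn(4, 10, 8))
loss.backward()
```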
no code implementations • 23 Aug 2019 • Gökhan Yildirim, Nikolay Jetchev, Roland Vollgraf, Urs Bergmann
Visualizing an outfit is an essential part of shopping for clothes.
no code implementations • 2 Aug 2019 • Romain Guigourès, Yuen King Ho, Evgenii Koriagin, Abdul-Saboor Sheikh, Urs Bergmann, Reza Shirvany
We introduce a hierarchical Bayesian approach to tackle the challenging problem of size recommendation in e-commerce fashion.
3 code implementations • 23 Jul 2019 • Abdul-Saboor Sheikh, Romain Guigourès, Evgenii Koriagin, Yuen King Ho, Reza Shirvany, Roland Vollgraf, Urs Bergmann
To alleviate this problem, we propose a deep learning-based, content-collaborative methodology for personalized size and fit recommendation.
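Purely as an illustration of the content-collaborative idea (the paper's actual architecture and feedback model differ), the sketch below combines collaborative customer/article embeddings with article content features to predict fit feedback; all dimensions and names are placeholders.

```python
import torch
import torch.nn as nn

class SizeFitModel(nn.Module):
    """Hedged sketch of a content-collaborative size & fit model: collaborative embeddings
    for customers and articles are combined with article content features to predict
    fit feedback in {too small, good fit, too large}."""
    def __init__(self, n_customers, n_articles, content_dim, emb=32):
        super().__init__()
        self.customer = nn.Embedding(n_customers, emb)
        self.article = nn.Embedding(n_articles, emb)
        self.content = nn.Linear(content_dim, emb)           # e.g. brand, category, fabric features
        self.head = nn.Sequential(nn.Linear(3 * emb, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, customer_id, article_id, content_feats):
        h = torch.cat([self.customer(customer_id),
                       self.article(article_id),
                       self.content(content_feats)], dim=-1)
        return self.head(h)                                   # logits over {small, fit, large}

model = SizeFitModel(n_customers=1000, n_articles=500, content_dim=12)
logits = model(torch.tensor([3]), torch.tensor([42]), torch.randn(1, 12))
loss = nn.functional.cross_entropy(logits, torch.tensor([1]))  # observed "good fit" feedback
```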
no code implementations • ICLR 2019 • Gökhan Yildirim, Nikolay Jetchev, Urs Bergmann
In addition, we illustrate that the simple guidance functions used in UD-GAN-G allow us to directly capture the desired variations in the data.
no code implementations • 10 Feb 2019 • Andreas Merentitis, Kashif Rasul, Roland Vollgraf, Abdul-Saboor Sheikh, Urs Bergmann
This helps the bandit framework to select the best agents early, since these rewards are smoother and less sparse than the environment reward.
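To make the selection mechanism concrete, here is a generic UCB1 bandit allocating rollouts among candidate agents based on a dense proxy reward; this is a textbook bandit rule rather than necessarily the paper's exact algorithm, and `proxy_reward` is a stand-in for the smoother auxiliary signal.

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick the agent with the best upper confidence bound on its mean proxy reward."""
    for i, c in enumerate(counts):
        if c == 0:
            return i                                          # try every agent at least once
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))

# Illustrative loop: the bandit concentrates rollouts on promising agents early,
# because the proxy reward is denser than the sparse environment reward.
n_agents = 4
counts, rewards = [0] * n_agents, [0.0] * n_agents
for t in range(1, 201):
    agent = ucb1_select(counts, rewards, t)
    proxy_reward = random.gauss(0.1 * agent, 0.5)             # stand-in for the smoother signal
    counts[agent] += 1
    rewards[agent] += proxy_reward
print("selection counts per agent:", counts)
```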
5 code implementations • 22 Nov 2018 • Nikolay Jetchev, Urs Bergmann, Gökhan Yildirim
Parametric generative deep models are state-of-the-art for photorealistic and non-photorealistic image stylization.
2 code implementations • 20 Jun 2018 • Gökhan Yildirim, Calvin Seward, Urs Bergmann
In this paper, we propose a method that disentangles the effects of multiple input conditions in Generative Adversarial Networks (GANs).
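A bare-bones conditional generator with one embedding pathway per condition, just to show the setup; the disentangling losses that make the conditions independent in the paper are not reproduced here, and all sizes and condition names are placeholders.

```python
import torch
import torch.nn as nn

class MultiConditionGenerator(nn.Module):
    """Minimal conditional-GAN generator sketch with a separate pathway per condition,
    so each condition (e.g. colour, shape) can be swapped independently at generation time."""
    def __init__(self, z_dim=64, cond_dims=(8, 8), img=32):
        super().__init__()
        self.cond_embeds = nn.ModuleList(nn.Linear(d, 32) for d in cond_dims)
        self.net = nn.Sequential(
            nn.Linear(z_dim + 32 * len(cond_dims), 256), nn.ReLU(),
            nn.Linear(256, 3 * img * img), nn.Tanh(),
        )
        self.img = img

    def forward(self, z, conds):
        embs = [embed(c) for embed, c in zip(self.cond_embeds, conds)]
        x = self.net(torch.cat([z] + embs, dim=-1))
        return x.view(-1, 3, self.img, self.img)

gen = MultiConditionGenerator()
fake = gen(torch.randn(2, 64), [torch.randn(2, 8), torch.randn(2, 8)])  # two separate conditions
```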
1 code implementation • ICML 2018 • Calvin Seward, Thomas Unterthiner, Urs Bergmann, Nikolay Jetchev, Sepp Hochreiter
To formally describe an optimal update direction, we introduce a theoretical framework that lets us derive requirements on both the divergence and the corresponding method for determining an update direction; these requirements guarantee unbiased mini-batch updates in the direction of steepest descent.
Ranked #2 on Image Generation on LSUN Bedroom 64 x 64
no code implementations • 4 Dec 2017 • Abdul-Saboor Sheikh, Kashif Rasul, Andreas Merentitis, Urs Bergmann
This work explores maximum likelihood optimization of neural networks through hypernetworks.
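A toy example of the hypernetwork idea: a small network emits all parameters of a target regression network from a latent code, and everything is trained end-to-end by maximizing a Gaussian log-likelihood. Shapes and names are illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    """Sketch: a hypernetwork maps a latent code to the weights of a small target
    regression network; the generated network is then applied to the data."""
    def __init__(self, z_dim=8, in_dim=4, hidden=16):
        super().__init__()
        self.n_w1, self.n_w2 = in_dim * hidden, hidden * 1
        self.gen = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, self.n_w1 + hidden + self.n_w2 + 1))
        self.in_dim, self.hidden = in_dim, hidden

    def forward(self, z, x):
        p = self.gen(z)                                        # all target-network parameters at once
        i = 0
        w1 = p[i:i + self.n_w1].view(self.hidden, self.in_dim); i += self.n_w1
        b1 = p[i:i + self.hidden]; i += self.hidden
        w2 = p[i:i + self.n_w2].view(1, self.hidden); i += self.n_w2
        b2 = p[i:i + 1]
        h = F.relu(F.linear(x, w1, b1))                        # run the *generated* target network
        return F.linear(h, w2, b2)

hyper = HyperNet()
z = torch.randn(8)                                             # latent code -> one weight sample
x, y = torch.randn(32, 4), torch.randn(32, 1)
nll = F.gaussian_nll_loss(hyper(z, x), y, torch.ones_like(y))  # Gaussian log-likelihood objective
nll.backward()
```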
no code implementations • 1 Dec 2017 • Nikolay Jetchev, Urs Bergmann, Calvin Seward
This paper presents a novel framework for generating texture mosaics with convolutional neural networks.
2 code implementations • 14 Sep 2017 • Nikolay Jetchev, Urs Bergmann
We present a novel method for solving image analogy problems: it learns the relation between paired images in the training data, then generalizes to generate images that correspond to that relation but were never seen in the training set.
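Schematically (and omitting the discriminator and training loop), the generator below takes a source image plus an exemplar encoding the desired relation and outputs the transformed image; this is only a skeleton of the paired, adversarial setup described above, with placeholder layer sizes.

```python
import torch
import torch.nn as nn

class AnalogyGenerator(nn.Module):
    """Sketch of an image-analogy generator: it sees a source image and an exemplar that
    encodes the desired relation, and produces the transformed image. Paired data and an
    adversarial loss (not shown) would supervise the relation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),         # source + exemplar, channel-concatenated
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, source, exemplar):
        return self.net(torch.cat([source, exemplar], dim=1))

gen = AnalogyGenerator()
out = gen(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))  # image respecting the analogy
```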
7 code implementations • ICML 2017 • Urs Bergmann, Nikolay Jetchev, Roland Vollgraf
Second, we show that image generation with PSGANs has the properties of a texture manifold: we can smoothly interpolate between samples in the structured noise space and generate novel samples that lie perceptually between the textures of the original dataset.
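The interpolation property can be illustrated with a fully convolutional generator whose "global" noise channels are tiled over the spatial grid; walking between two global codes then moves smoothly along a texture manifold. This is a schematic in the spirit of PSGAN, not the official implementation, and all channel counts are placeholders.

```python
import torch
import torch.nn as nn

# Minimal fully convolutional texture generator: the input is a spatial noise tensor whose
# "global" channels are shared across positions, so interpolating those channels moves
# smoothly between textures while the "local" channels provide per-position variation.
gen = nn.Sequential(
    nn.ConvTranspose2d(40, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
)

def make_noise(global_code, spatial=8, local_dim=20):
    """Tile one global code over the spatial grid and add per-position local noise."""
    g = global_code.view(1, -1, 1, 1).expand(1, -1, spatial, spatial)
    l = torch.randn(1, local_dim, spatial, spatial)
    return torch.cat([g, l], dim=1)

zA, zB = torch.randn(20), torch.randn(20)
for alpha in torch.linspace(0, 1, steps=5):
    z = (1 - alpha) * zA + alpha * zB                          # walk between two texture codes
    texture = gen(make_noise(z))                               # (1, 3, 64, 64) interpolated texture
```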
3 code implementations • 24 Nov 2016 • Nikolay Jetchev, Urs Bergmann, Roland Vollgraf
Generative adversarial networks (GANs) are a recent approach to train generative models of data, which have been shown to work particularly well on image data.