Search Results for author: Andreas Blattmann

Found 8 papers, 7 papers with code

Text-Guided Synthesis of Artistic Images with Retrieval-Augmented Diffusion Models

1 code implementation • 26 Jul 2022 • Robin Rombach, Andreas Blattmann, Björn Ommer

In RDMs, a set of nearest neighbors is retrieved from an external database during training for each training instance, and the diffusion model is conditioned on these informative samples.

Image Generation • Prompt Engineering • +1
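The retrieval step described above can be sketched in a few lines: embed a query, pull its nearest neighbors from an external database, and hand those neighbors to the diffusion model as conditioning. This is a minimal illustration only; the array sizes, function names, and the zero-stub conditioning call are assumptions, not the authors' implementation.

```python
import numpy as np

def retrieve_neighbors(query_emb, database_embs, k=4):
    """Return indices of the k most cosine-similar database entries.

    In an RDM-style setup the database would hold embeddings of an
    external image corpus computed by a frozen encoder.
    """
    q = query_emb / np.linalg.norm(query_emb)
    d = database_embs / np.linalg.norm(database_embs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(-sims)[:k]

# Toy example: 100 database entries with 8-dim embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
database = rng.normal(size=(100, 8))
query = rng.normal(size=8)

neighbor_ids = retrieve_neighbors(query, database, k=4)
neighbors = database[neighbor_ids]  # shape (4, 8)

# During training, the diffusion model would then be conditioned on these
# neighbor embeddings for the current training instance, e.g. via
# cross-attention: eps_pred = model(noisy_latent, t, context=neighbors)
print(neighbors.shape)  # (4, 8)
```

Because the database is external, it can be swapped at inference time (e.g. for a database of artworks) without retraining, which is what enables the style-targeted synthesis the paper describes.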

Semi-Parametric Neural Image Synthesis

2 code implementations • 25 Apr 2022 • Andreas Blattmann, Robin Rombach, Kaan Oktay, Jonas Müller, Björn Ommer

Much of this success is due to the scalability of these architectures and hence caused by a dramatic increase in model complexity and in the computational resources invested in training these models.

Image Generation • Retrieval

High-Resolution Image Synthesis with Latent Diffusion Models

13 code implementations • CVPR 2022 • Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer

By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond.

Denoising • Image Inpainting • +3
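The "sequential application of denoising autoencoders" above is the standard diffusion sampling loop, which latent diffusion runs in a compact autoencoder latent space rather than pixel space. The sketch below shows that loop under stated assumptions: the latent shape, the linear noise schedule, and the zero stub standing in for the learned denoiser are all illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: a pretrained autoencoder compresses an image into a small
# latent tensor, so the expensive diffusion runs in this cheaper space.
latent_shape = (4, 32, 32)

# A toy linear noise schedule (illustrative values, not the paper's).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_step(z_t, t, eps_hat):
    """One reverse-diffusion step: subtract the predicted noise eps_hat."""
    mean = (z_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat)
    mean /= np.sqrt(alphas[t])
    if t > 0:  # add fresh noise on all but the final step
        mean += np.sqrt(betas[t]) * rng.normal(size=z_t.shape)
    return mean

# Sampling: start from pure noise and denoise sequentially. A trained UNet
# would predict eps_hat at each step; here it is stubbed with zeros.
z = rng.normal(size=latent_shape)
for t in reversed(range(T)):
    eps_hat = np.zeros(latent_shape)  # stand-in for the learned denoiser
    z = denoise_step(z, t, eps_hat)

# Finally, the autoencoder's decoder would map the clean latent back to pixels.
print(z.shape)  # (4, 32, 32)
```

Running the chain over a 4×32×32 latent instead of a 3×256×256 image is where the paper's efficiency gain comes from: each denoising pass touches far fewer values.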

ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis

no code implementations • NeurIPS 2021 • Patrick Esser, Robin Rombach, Andreas Blattmann, Björn Ommer

Thus, in contrast to pure autoregressive models, it can solve free-form image inpainting and, in the case of conditional models, local, text-guided image modification without requiring mask-specific training.

Image Inpainting

iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis

2 code implementations • ICCV 2021 • Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer

There will be distinctive movement, despite evident variations caused by the stochastic nature of our world.

Understanding Object Dynamics for Interactive Image-to-Video Synthesis

1 code implementation • CVPR 2021 • Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer

Given a static image of an object and a local poking of a pixel, the approach then predicts how the object would deform over time.

Video Prediction

Stochastic Image-to-Video Synthesis using cINNs

1 code implementation • CVPR 2021 • Michael Dorkenwald, Timo Milbich, Andreas Blattmann, Robin Rombach, Konstantinos G. Derpanis, Björn Ommer

Video understanding calls for a model to learn the characteristic interplay between static scene content and its dynamics. Given an image, the model must be able to predict a future progression of the portrayed scene; conversely, a video should be explained in terms of its static image content and all the remaining characteristics not present in the initial frame.

Video Understanding

Behavior-Driven Synthesis of Human Dynamics

1 code implementation • CVPR 2021 • Andreas Blattmann, Timo Milbich, Michael Dorkenwald, Björn Ommer

Using this representation, we are able to change the behavior of a person depicted in an arbitrary posture, or to even directly transfer behavior observed in a given video sequence.

Human Dynamics
