1 code implementation • 4 Jun 2024 • Tero Karras, Miika Aittala, Tuomas Kynkäänniemi, Jaakko Lehtinen, Timo Aila, Samuli Laine
The primary axes of interest in image-generating diffusion models are image quality, the amount of variation in the results, and how well the results align with a given condition, e.g., a class label or a text prompt.
Ranked #1 on Image Generation on ImageNet 512x512
1 code implementation • 11 Apr 2024 • Tuomas Kynkäänniemi, Miika Aittala, Tero Karras, Samuli Laine, Timo Aila, Jaakko Lehtinen
We show that guidance is clearly harmful toward the beginning of the chain (high noise levels), largely unnecessary toward the end (low noise levels), and only beneficial in the middle.
Ranked #5 on Image Generation on ImageNet 512x512
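The interval-restricted guidance described above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the interval bounds and guidance scale below are hypothetical placeholders.

```python
def guided_denoise(denoise_cond, denoise_uncond, x, sigma,
                   guidance_scale=2.0, sigma_lo=0.2, sigma_hi=2.0):
    """Apply classifier-free guidance only within a noise-level interval.

    Outside [sigma_lo, sigma_hi] the conditional prediction is used as-is,
    reflecting the finding that guidance is harmful at high noise levels
    and unnecessary at low ones. Bounds and scale are placeholder values.
    """
    d_cond = denoise_cond(x, sigma)
    if sigma_lo <= sigma <= sigma_hi:
        d_uncond = denoise_uncond(x, sigma)
        # Standard classifier-free guidance extrapolation.
        return d_uncond + guidance_scale * (d_cond - d_uncond)
    return d_cond
```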
6 code implementations • CVPR 2024 • Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, Samuli Laine
Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets.
Ranked #15 on Image Generation on ImageNet 512x512
1 code implementation • 23 Jan 2023 • Axel Sauer, Tero Karras, Samuli Laine, Andreas Geiger, Timo Aila
Text-to-image synthesis has recently seen significant progress thanks to large pretrained language models, large-scale training data, and the introduction of scalable model families such as diffusion and autoregressive models.
Ranked #18 on Text-to-Image Generation on MS COCO
no code implementations • 14 Dec 2022 • Onni Kosomaa, Samuli Laine, Tero Karras, Miika Aittala, Jaakko Lehtinen
We propose a deep learning method for 3D volumetric reconstruction in low-dose helical cone-beam computed tomography.
2 code implementations • 2 Nov 2022 • Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, Ming-Yu Liu
Therefore, in contrast to existing works, we propose to train an ensemble of text-to-image diffusion models specialized for different synthesis stages.
Ranked #14 on Text-to-Image Generation on MS COCO
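The staged-ensemble idea can be sketched as a dispatch over noise levels; the interval boundaries and expert models here are hypothetical, stand-ins for stage-specialized denoisers.

```python
def ensemble_denoise(experts, x, sigma):
    """Dispatch to a stage-specialized denoiser by noise level (a sketch).

    `experts` is a list of (sigma_min, sigma_max, model) entries covering
    the sampling trajectory; each model handles one synthesis stage.
    """
    for sigma_min, sigma_max, model in experts:
        if sigma_min <= sigma < sigma_max:
            return model(x, sigma)
    raise ValueError(f"no expert covers sigma={sigma}")
```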
1 code implementation • 4 Jul 2022 • Erik Härkönen, Miika Aittala, Tuomas Kynkäänniemi, Samuli Laine, Timo Aila, Jaakko Lehtinen
We introduce the problem of disentangling time-lapse sequences in a way that allows separate, after-the-fact control of overall trends, cyclic effects, and random effects in the images, and describe a technique based on data-driven generative models that achieves this goal.
16 code implementations • 1 Jun 2022 • Tero Karras, Miika Aittala, Timo Aila, Samuli Laine
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices.
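One concrete design choice made explicit in that design space is the noise-level schedule used during sampling; a minimal sketch, assuming the commonly cited defaults (sigma_min=0.002, sigma_max=80, rho=7):

```python
import numpy as np

def edm_sigma_schedule(n=18, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Polynomially warped noise-level schedule (a sketch).

    Interpolates between sigma_max and sigma_min in sigma^(1/rho) space,
    concentrating steps at low noise levels.
    """
    i = np.arange(n)
    return (sigma_max ** (1 / rho)
            + i / (n - 1) * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
```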
7 code implementations • NeurIPS 2021 • Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, Timo Aila
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner.
Ranked #1 on Image Generation on FFHQ-U
1 code implementation • 6 Nov 2020 • Samuli Laine, Janne Hellsten, Tero Karras, Yeongho Seol, Jaakko Lehtinen, Timo Aila
We present a modular differentiable renderer design that yields performance superior to previous methods by leveraging existing, highly optimized hardware graphics pipelines.
28 code implementations • NeurIPS 2020 • Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila
We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
Ranked #1 on Conditional Image Generation on ArtBench-10 (32x32)
125 code implementations • CVPR 2020 • Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila
Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics and perceived image quality.
Ranked #1 on Image Generation on LSUN Car 256 x 256
no code implementations • 25 Sep 2019 • Geoff French, Timo Aila, Samuli Laine, Michal Mackiewicz, Graham Finlayson
Consistency regularization describes a class of approaches that have yielded groundbreaking results in semi-supervised classification problems.
5 code implementations • 5 Jun 2019 • Geoff French, Samuli Laine, Timo Aila, Michal Mackiewicz, Graham Finlayson
We analyze the problem of semantic segmentation and find that its distribution does not exhibit the low-density regions that separate classes; we offer this as an explanation for why semi-supervised segmentation is a challenging problem, with only a few reported successes.
10 code implementations • NeurIPS 2019 • Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, Timo Aila
The ability to automatically estimate the quality and coverage of the samples produced by a generative model is a vital requirement for driving algorithm research.
Ranked #4 on Image Generation on FFHQ
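A minimal sketch of kNN-manifold precision in the spirit of this line of work: a generated sample counts as precise if it falls inside some real sample's k-nearest-neighbor hypersphere. Feature extraction is omitted, and k and the data below are illustrative.

```python
import numpy as np

def knn_radii(feats, k=3):
    """Distance from each feature vector to its k-th nearest neighbor."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # index 0 is the distance to itself

def precision(real_feats, fake_feats, k=3):
    """Fraction of generated samples lying inside the estimated real manifold."""
    radii = knn_radii(real_feats, k)
    d = np.linalg.norm(fake_feats[:, None, :] - real_feats[None, :, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))
```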
no code implementations • ICLR Workshop LLD 2019 • Samuli Laine, Jaakko Lehtinen, Timo Aila
We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data.
2 code implementations • NeurIPS 2019 • Samuli Laine, Tero Karras, Jaakko Lehtinen, Timo Aila
We describe a novel method for training high-quality image denoising models based on unorganized collections of corrupted images.
83 code implementations • CVPR 2019 • Tero Karras, Samuli Laine, Timo Aila
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature.
Ranked #1 on Image Generation on LSUN Bedroom
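The style-transfer ingredient referenced above is adaptive instance normalization, which injects a style code per layer. A simplified per-channel sketch, not the paper's exact modulation:

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-8):
    """Adaptive instance normalization (a sketch).

    Normalizes each channel of x (shape (C, H, W)) to zero mean and unit
    variance, then rescales and shifts it with per-channel style parameters.
    """
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    normalized = (x - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]
```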
21 code implementations • ICML 2018 • Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, Timo Aila
We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: it is possible to learn to restore images by looking only at corrupted examples, with performance at, and sometimes exceeding, that of training with clean data, without explicit image priors or likelihood models of the corruption.
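The conclusion can be illustrated with a toy example: regressing against noisy targets under an L2 loss recovers the clean signal, because zero-mean noise averages out. The scalar signal and noise model below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = 3.0                                        # unknown clean signal (a scalar here)
noisy_targets = clean + rng.normal(0, 1, 10000)    # corrupted observations of it

# Fitting a constant predictor by minimizing the L2 loss against the *noisy*
# targets yields their mean -- which, for zero-mean noise, is the same
# minimizer as training against the clean signal.
estimate = noisy_targets.mean()
```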
114 code implementations • ICLR 2018 • Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
We describe a new training methodology for generative adversarial networks.
Ranked #4 on Image Generation on LSUN Horse 256 x 256 (Clean-FID (trainfull) metric)
no code implementations • SIGGRAPH 2017 • Tero Karras, Timo Aila, Samuli Laine, Antti Herva, Jaakko Lehtinen
Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone.
7 code implementations • 7 Oct 2016 • Samuli Laine, Timo Aila
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled.
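One ingredient in this line of work is a temporal-ensembling-style running average of per-sample predictions, used as consistency targets for unlabeled data. A sketch, with the momentum and startup bias correction following the commonly described formulation; the value of alpha is illustrative.

```python
import numpy as np

def update_ensemble_targets(Z, z_epoch, epoch, alpha=0.6):
    """Update the running ensemble of predictions (a sketch).

    Z: exponential moving average of per-sample predictions
    z_epoch: this epoch's network predictions
    Returns the updated accumulator and bias-corrected ensemble targets
    used as consistency targets for unlabeled samples.
    """
    Z = alpha * Z + (1.0 - alpha) * z_epoch
    targets = Z / (1.0 - alpha ** (epoch + 1))  # correct the startup bias
    return Z, targets
```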
1 code implementation • 21 Sep 2016 • Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen
We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video.