no code implementations • 11 Nov 2024 • NVIDIA: Yuval Atzmon, Maciej Bala, Yogesh Balaji, Tiffany Cai, Yin Cui, Jiaojiao Fan, Yunhao Ge, Siddharth Gururani, Jacob Huffman, Ronald Isaac, Pooya Jannaty, Tero Karras, Grace Lam, J. P. Lewis, Aaron Licata, Yen-Chen Lin, Ming-Yu Liu, Qianli Ma, Arun Mallya, Ashlee Martino-Tarr, Doug Mendez, Seungjun Nah, Chris Pruett, Fitsum Reda, Jiaming Song, Ting-Chun Wang, Fangyin Wei, Xiaohui Zeng, Yu Zeng, Qinsheng Zhang
We introduce Edify Image, a family of diffusion models capable of generating photorealistic image content with pixel-perfect accuracy.
1 code implementation • 4 Jun 2024 • Tero Karras, Miika Aittala, Tuomas Kynkäänniemi, Jaakko Lehtinen, Timo Aila, Samuli Laine
The primary axes of interest in image-generating diffusion models are image quality, the amount of variation in the results, and how well the results align with a given condition, e.g., a class label or a text prompt.
Ranked #1 on Image Generation on ImageNet 512x512
1 code implementation • 11 Apr 2024 • Tuomas Kynkäänniemi, Miika Aittala, Tero Karras, Samuli Laine, Timo Aila, Jaakko Lehtinen
We show that guidance is clearly harmful toward the beginning of the chain (high noise levels), largely unnecessary toward the end (low noise levels), and only beneficial in the middle (see the sketch after this entry).
Ranked #5 on Image Generation on ImageNet 512x512
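A minimal sketch of the limited-interval idea above, using standard classifier-free guidance; the function names, guidance weight, and interval bounds are illustrative assumptions, not the paper's actual API or settings:

```python
def guided_denoise(denoise_cond, denoise_uncond, x, sigma,
                   w=2.0, sigma_lo=0.3, sigma_hi=5.0):
    """Classifier-free guidance applied only inside a noise-level interval.

    Outside [sigma_lo, sigma_hi] the conditional prediction is used as-is,
    mirroring the finding that guidance helps only at intermediate noise
    levels. All names and default values here are illustrative.
    """
    d_cond = denoise_cond(x, sigma)
    if sigma_lo <= sigma <= sigma_hi:
        d_uncond = denoise_uncond(x, sigma)
        # Standard CFG combination: push the prediction away from the
        # unconditional result by the guidance weight w.
        return d_uncond + w * (d_cond - d_uncond)
    return d_cond
```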
6 code implementations • CVPR 2024 • Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, Samuli Laine
Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets.
Ranked #15 on Image Generation on ImageNet 512x512
no code implementations • ICCV 2023 • Eric R. Chan, Koki Nagano, Matthew A. Chan, Alexander W. Bergman, Jeong Joon Park, Axel Levy, Miika Aittala, Shalini De Mello, Tero Karras, Gordon Wetzstein
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image.
1 code implementation • 23 Jan 2023 • Axel Sauer, Tero Karras, Samuli Laine, Andreas Geiger, Timo Aila
Text-to-image synthesis has recently seen significant progress thanks to large pretrained language models, large-scale training data, and the introduction of scalable model families such as diffusion and autoregressive models.
Ranked #18 on Text-to-Image Generation on MS COCO
no code implementations • 14 Dec 2022 • Onni Kosomaa, Samuli Laine, Tero Karras, Miika Aittala, Jaakko Lehtinen
We propose a deep learning method for 3D volumetric reconstruction in low-dose helical cone-beam computed tomography.
2 code implementations • 2 Nov 2022 • Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, Ming-Yu Liu
Therefore, in contrast to existing works, we propose to train an ensemble of text-to-image diffusion models specialized for different synthesis stages (a dispatch sketch follows this entry).
Ranked #14 on Text-to-Image Generation on MS COCO
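A rough sketch, under our own assumptions about the interface, of how an ensemble of noise-level-specialized denoisers could be dispatched during sampling; the split points and names are hypothetical, not the actual model configuration:

```python
def ensemble_denoise(experts, x, sigma, cond):
    """Pick the expert denoiser whose noise-level range contains sigma.

    `experts` is a list of (sigma_min, sigma_max, model) triples covering
    the sampling chain from high to low noise; the split points are
    hypothetical, not the ones used in the actual system.
    """
    for sigma_min, sigma_max, model in experts:
        if sigma_min <= sigma < sigma_max:
            return model(x, sigma, cond)
    # Fall back to the last expert if sigma falls outside every range.
    return experts[-1][2](x, sigma, cond)
```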
1 code implementation • 7 Jun 2022 • Tim Brooks, Janne Hellsten, Miika Aittala, Ting-Chun Wang, Timo Aila, Jaakko Lehtinen, Ming-Yu Liu, Alexei A. Efros, Tero Karras
Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence.
16 code implementations • 1 Jun 2022 • Tero Karras, Miika Aittala, Timo Aila, Samuli Laine
We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices.
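As a hedged illustration of what separated design choices can look like in code, here is a sketch of a deterministic second-order (Heun) sampler over an explicit noise-level schedule; the schedule constants and function names are our own choices, not a verbatim reference implementation:

```python
import torch

def sigma_schedule(n, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    # Polynomial interpolation between sigma_max and sigma_min in rho-space,
    # one common choice of noise schedule; the constants are illustrative.
    t = torch.linspace(0, 1, n)
    sig = (sigma_max ** (1 / rho)
           + t * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    return torch.cat([sig, torch.zeros(1)])  # end the chain exactly at sigma = 0

def heun_sample(denoise, x, sigmas):
    """Deterministic 2nd-order (Heun) ODE sampler.

    `denoise(x, sigma)` is assumed to return the denoised estimate D(x; sigma);
    the probability-flow ODE then gives dx/dsigma = (x - D(x; sigma)) / sigma.
    """
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, sigma)) / sigma                      # slope at sigma
        x_euler = x + (sigma_next - sigma) * d                   # Euler step
        if sigma_next > 0:
            d_next = (x_euler - denoise(x_euler, sigma_next)) / sigma_next
            x = x + (sigma_next - sigma) * 0.5 * (d + d_next)    # Heun correction
        else:
            x = x_euler
    return x

# Usage sketch: start from pure noise at the largest sigma.
# x0 = 80.0 * torch.randn(batch, channels, height, width)
# images = heun_sample(denoise, x0, sigma_schedule(32))
```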
2 code implementations • 11 Mar 2022 • Tuomas Kynkäänniemi, Tero Karras, Miika Aittala, Timo Aila, Jaakko Lehtinen
Fréchet Inception Distance (FID) is the primary metric for ranking models in data-driven generative modeling.
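For context, FID is the Fréchet distance between Gaussians fitted to Inception features of real and generated images; a minimal sketch of that distance on precomputed feature arrays (the Inception feature extraction itself is omitted):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    Each input is an (N, D) array of Inception features; the feature
    extraction step is assumed to have happened elsewhere.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # sqrtm of the covariance product can pick up a tiny imaginary part
    # from numerical error, so keep only the real component.
    covmean = linalg.sqrtm(cov_r @ cov_g).real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```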
2 code implementations • CVPR 2022 • Eric R. Chan, Connor Z. Lin, Matthew A. Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas Guibas, Jonathan Tremblay, Sameh Khamis, Tero Karras, Gordon Wetzstein
Unsupervised generation of high-quality multi-view-consistent images and 3D shapes using only collections of single-view 2D photographs has been a long-standing challenge.
7 code implementations • NeurIPS 2021 • Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, Timo Aila
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner.
Ranked #1 on Image Generation on FFHQ-U
1 code implementation • 6 Nov 2020 • Samuli Laine, Janne Hellsten, Tero Karras, Yeongho Seol, Jaakko Lehtinen, Timo Aila
We present a modular differentiable renderer design that yields performance superior to previous methods by leveraging existing, highly optimized hardware graphics pipelines.
28 code implementations • NeurIPS 2020 • Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila
We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
Ranked #1 on Conditional Image Generation on ArtBench-10 (32x32)
no code implementations • ICML 2020 • Weili Nie, Tero Karras, Animesh Garg, Shoubhik Debnath, Anjul Patney, Ankit B. Patel, Anima Anandkumar
Disentanglement learning is crucial for obtaining disentangled representations and controllable generation.
125 code implementations • CVPR 2020 • Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila
Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
Ranked #1 on Image Generation on LSUN Car 256 x 256
no code implementations • 25 Sep 2019 • Weili Nie, Tero Karras, Animesh Garg, Shoubhik Debnath, Anjul Patney, Ankit B. Patel, Anima Anandkumar
Generative adversarial networks (GANs) have achieved great success at generating realistic samples.
10 code implementations • ICCV 2019 • Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, Jan Kautz
Unsupervised image-to-image translation methods learn to map images in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images.
10 code implementations • NeurIPS 2019 • Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, Timo Aila
The ability to automatically estimate the quality and coverage of the samples produced by a generative model is a vital requirement for driving algorithm research (a sketch of the underlying k-NN estimate follows this entry).
Ranked #4 on Image Generation on FFHQ
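A sketch, under simplifying assumptions, of the k-nearest-neighbour manifold estimate that this style of precision/recall metric builds on; inputs are precomputed (N, D) feature arrays, and the brute-force distance computation is for illustration only:

```python
import numpy as np

def knn_radii(feats, k=3):
    # Distance from each feature to its k-th nearest neighbour in the same set
    # (excluding itself); this radius defines a local hypersphere.
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the point's zero self-distance

def manifold_coverage(query, ref, k=3):
    # Fraction of query features that fall inside at least one hypersphere
    # centred on a reference feature.
    radii = knn_radii(ref, k)
    d = np.linalg.norm(query[:, None, :] - ref[None, :, :], axis=-1)
    return float(np.mean((d <= radii[None, :]).any(axis=1)))

# Illustrative use on feature arrays:
# precision = manifold_coverage(feats_gen, feats_real)   # quality
# recall    = manifold_coverage(feats_real, feats_gen)   # coverage
```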
2 code implementations • NeurIPS 2019 • Samuli Laine, Tero Karras, Jaakko Lehtinen, Timo Aila
We describe a novel method for training high-quality image denoising models based on unorganized collections of corrupted images.
83 code implementations • CVPR 2019 • Tero Karras, Samuli Laine, Timo Aila
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature.
Ranked #1 on Image Generation on LSUN Bedroom
21 code implementations • ICML 2018 • Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, Timo Aila
We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance matching and sometimes exceeding that of training using clean data, without explicit image priors or likelihood models of the corruption.
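A minimal sketch of a training step in this corrupted-targets setting, assuming an ordinary PyTorch model and optimizer; the function name and signature are ours:

```python
import torch

def noise2noise_step(model, optimizer, noisy_input, noisy_target):
    """One training step that never sees a clean image.

    `noisy_input` and `noisy_target` are two independently corrupted
    observations of the same underlying signal; minimizing the L2 loss
    between the prediction and the second noisy copy converges, in
    expectation, toward the clean signal.
    """
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```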
114 code implementations • ICLR 2018 • Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
We describe a new training methodology for generative adversarial networks.
Ranked #4 on Image Generation on LSUN Horse 256 x 256 (Clean-FID (trainfull) metric)
no code implementations • SIGGRAPH 2017 • Tero Karras, Timo Aila, Samuli Laine, Antti Herva, Jaakko Lehtinen
Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone.
8 code implementations • 19 Nov 2016 • Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz
We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters.
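A hedged sketch of one reading of such a first-order Taylor criterion for channel pruning; the exact averaging and layer-wise normalization details vary in practice, and the names here are ours:

```python
def taylor_importance(activation, grad):
    """First-order Taylor estimate of the cost change from removing a channel.

    `activation` and `grad` are a feature map and its gradient w.r.t. the
    loss, both shaped (batch, channels, height, width); the score is the
    absolute value of their product averaged over everything but channels.
    """
    return (activation * grad).mean(dim=(0, 2, 3)).abs()

# Typical use: register hooks to capture activations and gradients,
# accumulate scores over a few batches, then prune the lowest-scoring
# channels. Layer-wise L2 normalization of the scores is a common refinement.
```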
1 code implementation • 21 Sep 2016 • Samuli Laine, Tero Karras, Timo Aila, Antti Herva, Shunsuke Saito, Ronald Yu, Hao Li, Jaakko Lehtinen
We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video.