no code implementations • 22 Dec 2024 • Quan Dao, Hao Phung, Trung Dao, Dimitris Metaxas, Anh Tran
Flow matching has emerged as a promising framework for training generative models, demonstrating impressive empirical performance while offering relative ease of training compared to diffusion-based models.
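The flow matching objective mentioned above can be sketched as a minimal toy version (this is the standard conditional flow matching / rectified flow recipe with a linear path, not the authors' code; all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_targets(x0, x1, t):
    """Linear interpolation path between noise x0 and data x1,
    and the constant velocity that moves along it.

    t: per-sample times in [0, 1].
    """
    xt = (1.0 - t[:, None]) * x0 + t[:, None] * x1  # point on the path at time t
    v_target = x1 - x0                              # velocity of the linear path
    return xt, v_target

def fm_loss(v_pred, v_target):
    """Regression loss: MSE between predicted and target velocities."""
    return np.mean((v_pred - v_target) ** 2)

# toy batch of 2-D samples standing in for images
x0 = rng.standard_normal((4, 2))  # noise
x1 = rng.standard_normal((4, 2))  # data
t = rng.uniform(size=4)
xt, v = flow_matching_targets(x0, x1, t)
# a perfect velocity predictor would achieve zero loss
assert fm_loss(v, v) == 0.0
```

In practice `v_pred` comes from a neural network evaluated at `(xt, t)`; training reduces to this simple regression, which is part of why flow matching is comparatively easy to train.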
1 code implementation • 13 Dec 2024 • Yair Schiff, Subham Sekhar Sahoo, Hao Phung, Guanghan Wang, Sam Boshar, Hugo Dalla-torre, Bernardo P. de Almeida, Alexander Rush, Thomas Pierrot, Volodymyr Kuleshov
Diffusion models for continuous data have gained widespread adoption owing to their high-quality generation and control mechanisms.
1 code implementation • 6 Nov 2024 • Hao Phung, Quan Dao, Trung Dao, Hoang Phan, Dimitris Metaxas, Anh Tran
We introduce a novel state-space architecture for diffusion models, effectively harnessing spatial and frequency information to enhance the inductive bias towards local features in input images for image generation tasks.
1 code implementation • 17 Jul 2023 • Quan Dao, Hao Phung, Binh Nguyen, Anh Tran
In this work, we propose to apply flow matching in the latent spaces of pretrained autoencoders, which offers improved computational efficiency and scalability for high-resolution image synthesis.
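The idea of moving flow matching into an autoencoder's latent space can be sketched as follows (a toy illustration, not the paper's implementation: the "encoder" here is a hypothetical fixed linear projection standing in for a pretrained autoencoder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained encoder: a fixed linear map
# from a 16-dim "pixel" space down to a 4-dim latent space.
W = rng.standard_normal((16, 4)) / 4.0

def encode(x):
    """Project samples into the (much smaller) latent space."""
    return x @ W

# Flow matching is then set up entirely in latent space:
x_data = rng.standard_normal((8, 16))   # batch of "images"
z1 = encode(x_data)                     # latent codes of the data
z0 = rng.standard_normal(z1.shape)      # latent Gaussian noise
t = rng.uniform(size=len(z1))
zt = (1 - t[:, None]) * z0 + t[:, None] * z1  # point on the latent path
v_target = z1 - z0                      # regression target for the velocity net
```

Because the velocity network now operates on 4-dim latents instead of 16-dim inputs (and, for real images, on a latent grid far smaller than the pixel grid), each training and sampling step is cheaper, which is the computational-efficiency argument the snippet makes.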
Ranked #4 on Image Generation on CelebA-HQ 256x256
1 code implementation • ICCV 2023 • Thanh Van Le, Hao Phung, Thuan Hoang Nguyen, Quan Dao, Ngoc Tran, Anh Tran
Despite the complicated formulation of DreamBooth and Diffusion-based text-to-image models, our methods effectively defend users from the malicious use of those models.
1 code implementation • CVPR 2023 • Hao Phung, Quan Dao, Anh Tran
Diffusion models are emerging as a powerful solution for high-fidelity image generation, exceeding GANs in quality in many circumstances.
Ranked #1 on Image Generation on CelebA-HQ 512x512