1 code implementation • 9 Oct 2023 • Gulcin Baykal, Melih Kandemir, Gozde Unal
We highlight the significance of evidentially obtaining the probability distribution over the codebook embeddings, in contrast to the usual softmax usage.
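A minimal sketch of the contrast the abstract draws: a softmax gives only a point estimate of the assignment probabilities over codebook embeddings, whereas an evidential treatment (here, interpreting non-negative evidence as Dirichlet concentration parameters) yields a distribution over probability vectors together with an uncertainty mass. The logits, the exp-as-evidence choice, and the codebook size below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical encoder logits for a codebook of 4 embeddings (illustrative values).
logits = np.array([2.0, 0.5, -1.0, 0.1])

# Softmax: a single point estimate of the assignment probabilities.
softmax_probs = np.exp(logits) / np.exp(logits).sum()

# Evidential sketch: treat non-negative evidence as Dirichlet concentration
# parameters, giving a distribution over probability vectors rather than a point.
evidence = np.exp(logits)                 # one common choice: exp of logits as evidence
alpha = evidence + 1.0                    # Dirichlet concentration parameters
expected_probs = alpha / alpha.sum()      # mean of the Dirichlet
uncertainty = len(alpha) / alpha.sum()    # total uncertainty mass (in (0, 1))

print(expected_probs.round(3), round(uncertainty, 3))
```

Both views agree on the most likely embedding, but the evidential form additionally quantifies how much total uncertainty remains about the assignment.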
1 code implementation • 4 Jul 2023 • Gulcin Baykal, Halil Faruk Karagoz, Taha Binhuraib, Gozde Unal
Diffusion models are generative models that offer significant advantages over alternative generative approaches, including higher generation quality and more stable training.
no code implementations • 2 Apr 2023 • Halil Faruk Karagoz, Gulcin Baykal, Irem Arikan Eksi, Gozde Unal
The fine-tuned diffusion model is trained on this newly created dataset, and its results are compared with those of the baseline models both visually and quantitatively.
1 code implementation • 3 Nov 2020 • Gulcin Baykal, Furkan Ozcelik, Gozde Unal
Lastly, we show the contribution of the self-supervision tasks to GAN training on the loss landscape and demonstrate that the effects of these tasks may not be cooperative with adversarial training in some settings.
1 code implementation • 15 Jun 2020 • Gulcin Baykal, Gozde Unal
Generative Adversarial Networks (GANs) triggered an increased interest in the problem of image generation due to their improved output image quality and their versatility as a basis for new methods.