no code implementations • 2 Oct 2024 • Willem Diepeveen, Georgios Batzolis, Zakhar Shumaylov, Carola-Bibiane Schönlieb
Data-driven Riemannian geometry has emerged as a powerful tool for interpretable representation learning, offering improved efficiency in downstream tasks.
no code implementations • 24 Apr 2023 • Georgios Batzolis, Jan Stanczuk, Carola-Bibiane Schönlieb
This issue stems from the unrealistic assumption that the conditional data distribution, $p(\textbf{x} | \textbf{z})$, can be approximated as an isotropic Gaussian.
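For context, the isotropic-Gaussian assumption referred to here is the standard VAE-style decoder likelihood. A minimal sketch of that likelihood (the function name, shapes and fixed variance are illustrative assumptions, not the paper's code):

```python
import math
import torch

def isotropic_gaussian_nll(x, x_hat, sigma=1.0):
    """Negative log-likelihood under p(x | z) = N(x; x_hat(z), sigma^2 * I).

    This is the assumption criticised above: every dimension of x is treated
    as independent Gaussian noise around the decoder output x_hat(z), with
    one shared, fixed variance.
    """
    d = x[0].numel()                                 # dimensionality per sample
    sq_err = ((x - x_hat) ** 2).flatten(1).sum(-1)   # ||x - x_hat(z)||^2 per sample
    return 0.5 * (sq_err / sigma**2 + d * math.log(2 * math.pi * sigma**2))
```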
no code implementations • 23 Dec 2022 • Jan Stanczuk, Georgios Batzolis, Teo Deveney, Carola-Bibiane Schönlieb
A diffusion model approximates the score function, i.e. the gradient of the log density of a noise-corrupted version of the target distribution, for varying levels of corruption.
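A minimal denoising score-matching sketch of that definition, in which a network $s_\theta(\textbf{x}, \sigma)$ is trained to approximate $\nabla_{\textbf{x}} \log p_\sigma(\textbf{x})$ across noise levels $\sigma$ (the network interface, noise schedule and $\sigma^2$ loss weighting are illustrative assumptions, not the paper's implementation):

```python
import torch

def dsm_loss(score_net, x, sigmas):
    """sigma^2-weighted denoising score matching over a set of noise levels.

    For Gaussian corruption x_t = x + sigma * eps, the conditional score is
    grad_{x_t} log p_sigma(x_t | x) = -eps / sigma, so the sigma-scaled
    regression target is simply -eps.
    """
    idx = torch.randint(0, len(sigmas), (x.shape[0],), device=x.device)
    sigma = sigmas[idx].view(-1, *([1] * (x.dim() - 1)))   # broadcastable per-sample sigma
    eps = torch.randn_like(x)
    x_t = x + sigma * eps                                   # noise-corrupted sample
    pred = sigma * score_net(x_t, sigma.flatten())          # sigma * s_theta(x_t, sigma)
    return ((pred + eps) ** 2).flatten(1).sum(-1).mean()
```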
no code implementations • 20 Jul 2022 • Georgios Batzolis, Jan Stanczuk, Carola-Bibiane Schönlieb, Christian Etmann
We show that non-uniform diffusion leads to multi-scale diffusion models whose structure is similar to that of multi-scale normalizing flows.
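A rough sketch of what non-uniform (per-scale) corruption could look like, with coarse image content noised more gently than fine detail so that the reverse process generates coarse-to-fine; the decomposition and schedules below are our illustrative assumptions, not the paper's construction:

```python
import torch
import torch.nn.functional as F

def nonuniform_corrupt(x, t, sigma_coarse_max=5.0, sigma_fine_max=50.0):
    """Corrupt coarse and fine components of an image batch at different rates.

    x: (B, C, H, W) images, t: scalar in [0, 1]. Returns the two noisy
    components; a multi-scale score model would denoise each at its own scale.
    """
    coarse = F.interpolate(F.avg_pool2d(x, 2), scale_factor=2, mode="nearest")
    fine = x - coarse                                   # high-frequency residual
    noisy_coarse = coarse + t * sigma_coarse_max * torch.randn_like(coarse)
    noisy_fine = fine + t * sigma_fine_max * torch.randn_like(fine)
    return noisy_coarse, noisy_fine
```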
1 code implementation • 26 Nov 2021 • Georgios Batzolis, Jan Stanczuk, Carola-Bibiane Schönlieb, Christian Etmann
Score-based diffusion models have emerged as one of the most promising frameworks for deep generative modelling.
no code implementations • 4 Jun 2021 • Georgios Batzolis, Marcello Carioni, Christian Etmann, Soroosh Afyouni, Zoe Kourtzi, Carola-Bibiane Schönlieb
We model the conditional distribution of the latent encodings auto-regressively with an efficient multi-scale normalizing flow, in which each conditioning factor affects image synthesis at its respective resolution scale.
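For concreteness, one way to write the auto-regressive factorisation described above, with $S$ resolution scales, latent encodings $\textbf{z}_1, \dots, \textbf{z}_S$ (coarsest to finest) and per-scale conditioning factors $\textbf{c}_s$, is (our notation, not the paper's):

$$p(\textbf{z}_1, \dots, \textbf{z}_S \mid \textbf{c}) = \prod_{s=1}^{S} p\big(\textbf{z}_s \mid \textbf{z}_{<s}, \textbf{c}_s\big),$$

where each factor is realised by an invertible flow operating at scale $s$.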
no code implementations • 15 Mar 2021 • Alexandru Cioba, Michael Bromberg, Qian Wang, Ritwik Niyogi, Georgios Batzolis, Jezabel Garcia, Da-Shan Shiu, Alberto Bernacchia
We show that: 1) If tasks are homogeneous, there is a uniform optimal allocation, whereby all tasks get the same amount of data; 2) At fixed budget, there is a trade-off between the number of tasks and the number of data points per task, with a unique optimal solution; 3) When trained separately, harder tasks should get more data, at the cost of a smaller number of tasks; 4) When training on a mixture of easy and hard tasks, more data should be allocated to easy tasks.
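A toy illustration of the fixed-budget trade-off in point 2): with a total budget $B$ of examples, choosing more data points per task necessarily means fewer tasks (the numbers below are made up for illustration, not results from the paper):

```python
# Hypothetical total budget of labelled examples shared across tasks.
B = 1200
for n in (5, 10, 20, 50, 100):   # candidate data points per task
    T = B // n                   # number of tasks the budget can support
    print(f"{n:>3} points/task -> {T:>4} tasks (budget used: {T * n})")
```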
no code implementations • 1 Jan 2021 • Georgios Batzolis, Alberto Bernacchia, Da-Shan Shiu, Michael Bromberg, Alexandru Cioba
Meta-learning models are tested on benchmarks with a fixed number of data points per training task, and this number is usually arbitrary, for example, 5 instances per class in few-shot classification.