Search Results for author: Georgios Batzolis

Found 7 papers, 1 paper with code

Variational Diffusion Auto-encoder: Latent Space Extraction from Pre-trained Diffusion Models

no code implementations • 24 Apr 2023 • Georgios Batzolis, Jan Stanczuk, Carola-Bibiane Schönlieb

This issue stems from the unrealistic assumption of approximating the conditional data distribution, $p(\textbf{x} | \textbf{z})$, as an isotropic Gaussian.
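For context, the isotropic Gaussian assumption mentioned in the snippet above takes the decoder density $p(\textbf{x} | \textbf{z})$ to be $\mathcal{N}(\hat{\textbf{x}}(\textbf{z}), \sigma^2 I)$, so its log-likelihood reduces to a scaled mean-squared error plus a constant. A minimal sketch of that reduction (the function name and toy data are ours, not from the paper):

```python
import numpy as np

def isotropic_gaussian_log_likelihood(x, x_hat, sigma=1.0):
    """Log-density of x under N(x_hat, sigma^2 * I).

    Up to an additive constant this is a scaled negative squared error,
    which is why the isotropic Gaussian decoder assumption reduces
    reconstruction to an L2 penalty.
    """
    d = x.size
    squared_error = np.sum((x - x_hat) ** 2)
    return -0.5 * squared_error / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)

# Toy check: the log-likelihood differs from the negative MSE only by a scale and a constant.
x = np.random.randn(16)                 # a "data" vector
x_hat = x + 0.1 * np.random.randn(16)   # a hypothetical decoder reconstruction
print(isotropic_gaussian_log_likelihood(x, x_hat, sigma=0.5))
```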

Your diffusion model secretly knows the dimension of the data manifold

no code implementations • 23 Dec 2022 • Jan Stanczuk, Georgios Batzolis, Teo Deveney, Carola-Bibiane Schönlieb

A diffusion model approximates the score function, i.e. the gradient of the log density of a noise-corrupted version of the target distribution, for varying levels of corruption.
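To make that definition concrete: if the target is a one-dimensional standard normal and the corruption adds independent $\mathcal{N}(0, \sigma^2)$ noise, the corrupted marginal is $\mathcal{N}(0, 1 + \sigma^2)$ and its score is available in closed form. A small self-contained sketch (a toy example of ours, not code from the paper):

```python
import numpy as np

def corrupted_log_density(x, sigma):
    """Log-density of a standard normal target after adding N(0, sigma^2) noise."""
    var = 1.0 + sigma**2
    return -0.5 * x**2 / var - 0.5 * np.log(2 * np.pi * var)

def analytic_score(x, sigma):
    """Score of the corrupted marginal, i.e. d/dx of its log-density."""
    return -x / (1.0 + sigma**2)

# The score changes with the corruption level sigma; a diffusion model's score
# network is trained to approximate this quantity across noise levels.
x, eps = 0.7, 1e-5
for sigma in (0.1, 1.0, 3.0):
    numerical = (corrupted_log_density(x + eps, sigma) - corrupted_log_density(x - eps, sigma)) / (2 * eps)
    print(sigma, analytic_score(x, sigma), numerical)
```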

Non-Uniform Diffusion Models

no code implementations • 20 Jul 2022 • Georgios Batzolis, Jan Stanczuk, Carola-Bibiane Schönlieb, Christian Etmann

We show that non-uniform diffusion leads to multi-scale diffusion models which have a structure similar to that of multi-scale normalizing flows.

Denoising
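One way to picture non-uniform diffusion (our reading for illustration, not necessarily the paper's exact formulation) is a forward process in which different parts of the input, for example coarse and fine scales, are corrupted at different rates, so fine detail is destroyed before coarse structure. A hypothetical numpy sketch:

```python
import numpy as np

def non_uniform_forward_noise(x_coarse, x_fine, t, rate_coarse=0.3, rate_fine=1.0):
    """Toy forward corruption where two 'scales' diffuse at different speeds.

    Each component keeps a fraction exp(-rate * t) of its signal and gains
    matching Gaussian noise (a variance-preserving-style schedule). With
    rate_fine > rate_coarse the fine scale is destroyed first, which is the
    coarse-to-fine ordering reminiscent of multi-scale normalizing flows.
    """
    def corrupt(x, rate):
        keep = np.exp(-rate * t)
        noise_std = np.sqrt(1.0 - keep**2)
        return keep * x + noise_std * np.random.randn(*x.shape)

    return corrupt(x_coarse, rate_coarse), corrupt(x_fine, rate_fine)

# Hypothetical usage on random stand-ins for the coarse and fine parts of an image.
coarse, fine = np.random.randn(8, 8), np.random.randn(32, 32)
noisy_coarse, noisy_fine = non_uniform_forward_noise(coarse, fine, t=1.5)
```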

Conditional Image Generation with Score-Based Diffusion Models

1 code implementation • 26 Nov 2021 • Georgios Batzolis, Jan Stanczuk, Carola-Bibiane Schönlieb, Christian Etmann

Score-based diffusion models have emerged as one of the most promising frameworks for deep generative modelling.

Conditional Image Generation

CAFLOW: Conditional Autoregressive Flows

no code implementations • 4 Jun 2021 • Georgios Batzolis, Marcello Carioni, Christian Etmann, Soroosh Afyouni, Zoe Kourtzi, Carola Bibiane Schönlieb

We model the conditional distribution of the latent encodings autoregressively with an efficient multi-scale normalizing flow, where each conditioning factor affects image synthesis at its respective resolution scale.

Image-to-Image Translation • Translation
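The autoregressive factorisation described above can be written as $p(\textbf{z}_1, \dots, \textbf{z}_K | \textbf{y}) = \prod_k p(\textbf{z}_k | \textbf{z}_{<k}, \textbf{y})$, with one factor per resolution scale. The sketch below only illustrates that chain rule; the Gaussian stand-ins and all names are ours, used in place of the paper's conditional normalizing flows:

```python
import numpy as np

def autoregressive_log_prob(latents, conditioning, scale_models):
    """Toy chain rule p(z_1..z_K | y) = prod_k p(z_k | z_<k, y) over scales."""
    log_prob, previous = 0.0, []
    for z_k, model_k in zip(latents, scale_models):
        log_prob += model_k(z_k, previous, conditioning)   # one factor per scale
        previous.append(z_k)                               # coarser latents feed the next scale
    return log_prob

def toy_gaussian_scale_model(z_k, previous, conditioning):
    """Stand-in conditional: unit Gaussian centred at the mean of its context."""
    context = np.concatenate([conditioning.ravel()] + [p.ravel() for p in previous])
    mean = context.mean()
    return float(np.sum(-0.5 * (z_k - mean) ** 2 - 0.5 * np.log(2 * np.pi)))

latents = [np.random.randn(4), np.random.randn(16), np.random.randn(64)]  # coarse -> fine
conditioning_image = np.random.randn(8, 8)
print(autoregressive_log_prob(latents, conditioning_image, [toy_gaussian_scale_model] * 3))
```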

How to distribute data across tasks for meta-learning?

no code implementations • 15 Mar 2021 • Alexandru Cioba, Michael Bromberg, Qian Wang, Ritwik Niyogi, Georgios Batzolis, Jezabel Garcia, Da-Shan Shiu, Alberto Bernacchia

We show that: 1) If tasks are homogeneous, there is a uniform optimal allocation, whereby all tasks get the same amount of data; 2) At a fixed budget, there is a trade-off between the number of tasks and the number of data points per task, with a unique solution for the optimum; 3) When trained separately, harder tasks should get more data, at the cost of a smaller number of tasks; 4) When training on a mixture of easy and hard tasks, more data should be allocated to easy tasks.

Few-Shot Image Classification • Meta-Learning
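To make the budget trade-off in point 2) explicit: with a total budget of $B$ labelled examples and $n$ data points per task, at most $B / n$ tasks can be used, so more tasks necessarily means fewer points per task. A toy enumeration (the budget value is ours, purely illustrative):

```python
# Fixed-budget trade-off: using more tasks forces fewer points per task.
BUDGET = 240  # hypothetical total number of labelled examples

for num_tasks in (2, 4, 8, 16, 48, 240):
    points_per_task = BUDGET // num_tasks
    print(f"{num_tasks:3d} tasks x {points_per_task:3d} points/task = {num_tasks * points_per_task} examples")
```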

Optimal allocation of data across training tasks in meta-learning

no code implementations • 1 Jan 2021 • Georgios Batzolis, Alberto Bernacchia, Da-Shan Shiu, Michael Bromberg, Alexandru Cioba

Meta-learning models are tested on benchmarks with a fixed number of data points for each training task, and this number is usually arbitrary, for example, 5 instances per class in few-shot classification.

Few-Shot Image Classification • Meta-Learning +1
