Search Results for author: Marcello Carioni

Found 7 papers, 3 papers with code

Unsupervised approaches based on optimal transport and convex analysis for inverse problems in imaging

no code implementations • 15 Nov 2023 • Marcello Carioni, Subhadip Mukherjee, Hong Ye Tan, Junqi Tang

Together with a detailed survey, we provide an overview of the key mathematical results that underlie the methods reviewed in the chapter to keep our discussion self-contained.

CAFLOW: Conditional Autoregressive Flows

no code implementations • 4 Jun 2021 • Georgios Batzolis, Marcello Carioni, Christian Etmann, Soroosh Afyouni, Zoe Kourtzi, Carola Bibiane Schönlieb

We model the conditional distribution of the latent encodings by modeling the auto-regressive distributions with an efficient multi-scale normalizing flow, where each conditioning factor affects image synthesis at its respective resolution scale.

Image-to-Image Translation • Translation
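A minimal sketch of the kind of conditional building block such a multi-scale flow would stack per resolution is given below. This is not the CAFLOW architecture: the single conditional affine coupling layer, the tiny untrained MLP, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(inp, w1, b1, w2, b2):
    """Tiny two-layer network producing scale/shift parameters."""
    h = np.tanh(inp @ w1 + b1)
    return h @ w2 + b2

# Dimensions: x is the latent to transform, c is the conditioning encoding.
d, d_cond, d_hidden = 8, 4, 16
half = d // 2

# Random (untrained) parameters, for illustration only.
w1 = rng.normal(size=(half + d_cond, d_hidden)) * 0.1
b1 = np.zeros(d_hidden)
w2 = rng.normal(size=(d_hidden, 2 * half)) * 0.1
b2 = np.zeros(2 * half)

def coupling_forward(x, c):
    """Conditional affine coupling: transform x2 given (x1, c)."""
    x1, x2 = x[:half], x[half:]
    params = mlp(np.concatenate([x1, c]), w1, b1, w2, b2)
    log_s, t = params[:half], params[half:]
    y2 = x2 * np.exp(log_s) + t
    log_det = log_s.sum()            # log |det Jacobian| of the transform
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y, c):
    """Exact inverse, used for synthesis given the condition c."""
    y1, y2 = y[:half], y[half:]
    params = mlp(np.concatenate([y1, c]), w1, b1, w2, b2)
    log_s, t = params[:half], params[half:]
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2])

x = rng.normal(size=d)
c = rng.normal(size=d_cond)
y, log_det = coupling_forward(x, c)
x_rec = coupling_inverse(y, c)
print("reconstruction error:", np.abs(x - x_rec).max())
```

In a multi-scale flow, layers of this kind would be stacked per resolution, with the conditioning encoding injected at the matching scale.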

A generalized conditional gradient method for dynamic inverse problems with optimal transport regularization

1 code implementation • 21 Dec 2020 • Kristian Bredies, Marcello Carioni, Silvio Fanzon, Francisco Romero

We develop a dynamic generalized conditional gradient method (DGCG) for dynamic inverse problems with optimal transport regularization.

Numerical Analysis • Optimization and Control • 65K10, 65J20, 90C49, 28A33, 35F05
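The DGCG name refers to a conditional gradient (Frank-Wolfe) iteration lifted to dynamic, measure-valued unknowns. As a hedged illustration of the underlying principle only, the toy sketch below runs the classical Frank-Wolfe iteration on a finite-dimensional least-squares problem over the probability simplex; the simplex constraint, the matrix A, and the step-size rule are illustrative assumptions, and none of this is the paper's measure-space algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: min_x 0.5 * ||A x - b||^2  subject to x in the probability simplex.
m, n = 20, 50
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = [0.5, 0.3, 0.2]
b = A @ x_true

x = np.ones(n) / n                      # feasible starting point
for k in range(200):
    grad = A.T @ (A @ x - b)            # gradient of the smooth objective
    s = np.zeros(n)
    s[np.argmin(grad)] = 1.0            # linear minimization oracle over the simplex
    gamma = 2.0 / (k + 2)               # standard conditional gradient step size
    x = (1 - gamma) * x + gamma * s     # convex combination update

print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

The two ingredients on display, a linear minimization oracle and a convex-combination update, are the core of any conditional gradient scheme; the paper generalizes them to trajectories of measures with an optimal transport regularizer.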

Sinkhorn AutoEncoders

2 code implementations • ICLR 2019 • Giorgio Patrini, Rianne van den Berg, Patrick Forré, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, Frank Nielsen

We show that minimizing the p-Wasserstein distance between the generator and the true data distribution is equivalent to the unconstrained min-min optimization of the p-Wasserstein distance between the encoder aggregated posterior and the prior in latent space, plus a reconstruction error.

Probabilistic Programming
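The equivalence above suggests training by matching the encoder's aggregated posterior to the prior in latent space while penalizing reconstruction error. The sketch below is a minimal illustration of that objective shape, assuming placeholder random linear encoder/decoder maps, uniform batch weights, and a plain (non-log-domain) Sinkhorn iteration; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sinkhorn_cost(X, Y, eps=1.0, iters=200):
    """Entropic OT cost between two point clouds with uniform weights.
    A fairly large eps keeps the kernel stable in this non-log-domain version."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
    K = np.exp(-C / eps)
    n, m = len(X), len(Y)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                      # approximate OT plan
    return (P * C).sum()

# Placeholder encoder/decoder (random linear maps); assumptions, not the paper's model.
d_x, d_z, batch = 16, 2, 64
W_enc = rng.normal(size=(d_x, d_z)) * 0.3
W_dec = rng.normal(size=(d_z, d_x)) * 0.3

x = rng.normal(size=(batch, d_x))            # a minibatch standing in for data
z = x @ W_enc                                # aggregated posterior samples (encodings)
x_rec = z @ W_dec                            # reconstructions
z_prior = rng.normal(size=(batch, d_z))      # samples from the latent prior

latent_ot = sinkhorn_cost(z, z_prior)        # match encodings to the prior
recon = ((x - x_rec) ** 2).mean()            # reconstruction error
loss = recon + latent_ot                     # Sinkhorn-autoencoder-style objective
print(f"recon={recon:.3f}  latent OT={latent_ot:.3f}  total={loss:.3f}")
```

Replacing the latent Wasserstein term by its entropic (Sinkhorn) approximation is what makes the matching step tractable on minibatches.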

Loss factorization, weakly supervised learning and label noise robustness

no code implementations • 8 Feb 2016 • Giorgio Patrini, Frank Nielsen, Richard Nock, Marcello Carioni

We prove that the empirical risk of most well-known loss functions factors into a linear term aggregating all labels with a term that is label free, and can further be expressed by sums of the loss.

Generalization Bounds • Weakly-supervised Learning
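As a worked instance of the factorization claim (an illustration for one loss only; the paper proves the result for a broad family of losses), take the square loss with binary labels y in {-1, +1} and a linear model:

```latex
% Square loss, y \in \{-1, +1\}, so y^2 = 1.
\[
  \ell\big(y, f(x)\big) = \big(1 - y f(x)\big)^2
  = \underbrace{\big(1 + f(x)^2\big)}_{\text{label free}}
  \;-\; \underbrace{2\, y f(x)}_{\text{linear in the label}} .
\]
% Empirical risk for a linear model f(x) = <w, x>.
\[
  \frac{1}{n}\sum_{i=1}^{n} \ell\big(y_i, f(x_i)\big)
  = \frac{1}{n}\sum_{i=1}^{n} \big(1 + f(x_i)^2\big)
  \;-\; 2\,\Big\langle w,\; \underbrace{\tfrac{1}{n}\sum_{i=1}^{n} y_i x_i}_{\text{mean operator}} \Big\rangle .
\]
```

In this example all label information enters the empirical risk only through the mean operator, which is the structural fact the factorization result isolates.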
