no code implementations • 27 May 2025 • Bao Pham, Gabriel Raya, Matteo Negri, Mohammed J. Zaki, Luca Ambrogioni, Dmitry Krotov
In the small-data regime, the diffusion model exhibits a strong memorization phase, in which the network creates distinct basins of attraction around each training sample, akin to the Hopfield model below its critical memory load.
no code implementations • 18 Apr 2025 • Dejan Stancevic, Luca Ambrogioni
In particular, we found that the image quality of pretrained EDM2 models, as evaluated by FID and FD-DINO scores, can be substantially improved by the rescaled entropic-time reparameterization without increasing the number of function evaluations, with the largest gains in the low-NFE regime.
1 code implementation • 25 Feb 2025 • Gianluigi Silvestri, Luca Ambrogioni, Chieh-Hsin Lai, Yuhta Takida, Yuki Mitsufuji
Consistency Training (CT) has recently emerged as a promising alternative to diffusion models, achieving competitive performance in image generation tasks.
1 code implementation • 18 Oct 2024 • Felix Koulischer, Johannes Deleu, Gabriel Raya, Thomas Demeester, Luca Ambrogioni
Negative Prompting (NP) is widely utilized in diffusion models, particularly in text-to-image applications, to prevent the generation of undesired features.
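In the usual recipe, the negative-prompt prediction replaces the unconditional branch of classifier-free guidance, so each denoising step is pushed away from the undesired features. A minimal sketch of that combination rule (the denoiser outputs and the guidance scale below are placeholders, not the paper's implementation):

```python
import numpy as np

def guided_noise(eps_pos: np.ndarray, eps_neg: np.ndarray, w: float = 7.5) -> np.ndarray:
    """Classifier-free guidance with a negative prompt: the negative-prompt
    prediction takes the place of the unconditional branch, so the update
    is pushed towards the positive prompt and away from the negative one."""
    return eps_neg + w * (eps_pos - eps_neg)

# eps_pos / eps_neg would come from two denoiser passes conditioned on the
# positive and negative text embeddings (shapes here are placeholders).
eps_pos = np.random.randn(4, 64, 64)
eps_neg = np.random.randn(4, 64, 64)
eps = guided_noise(eps_pos, eps_neg)
```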
no code implementations • 11 Oct 2024 • Beatrice Achilli, Enrico Ventura, Gianluigi Silvestri, Bao Pham, Gabriel Raya, Dmitry Krotov, Carlo Lucibello, Luca Ambrogioni
Generative diffusion processes are state-of-the-art machine learning models deeply connected with fundamental concepts in statistical physics.
no code implementations • 8 Oct 2024 • Enrico Ventura, Beatrice Achilli, Gianluigi Silvestri, Carlo Lucibello, Luca Ambrogioni
In this paper, we investigate the latent geometry of generative diffusion models under the manifold hypothesis.
no code implementations • 4 Jun 2024 • Louis Rouillard, Luca Ambrogioni, Demian Wassermann
The estimation of directed couplings between the nodes of a network from indirect measurements is a central methodological challenge in scientific fields such as neuroscience, systems biology and economics.
no code implementations • 26 Oct 2023 • Luca Ambrogioni
Generative diffusion models have achieved spectacular performance in many areas of machine learning and generative modeling.
no code implementations • 4 Oct 2023 • Luca Ambrogioni
The main contribution of the paper is the introduction of a large family of smooth non-reverting covariance functions that closely resemble the kernels commonly used in the GP literature (e.g., the squared exponential and Matérn classes).
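For reference, these are the textbook kernels that the non-reverting covariance functions are designed to resemble (a minimal numpy sketch of the standard formulas, not the paper's new kernel family):

```python
import numpy as np

def squared_exponential(x1, x2, lengthscale=1.0, variance=1.0):
    """k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2)) for 1D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def matern32(x1, x2, lengthscale=1.0, variance=1.0):
    """Matérn-3/2: k(r) = s^2 (1 + sqrt(3) r / l) exp(-sqrt(3) r / l)."""
    r = np.abs(x1[:, None] - x2[None, :])
    a = np.sqrt(3.0) * r / lengthscale
    return variance * (1.0 + a) * np.exp(-a)
```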
no code implementations • 29 Sep 2023 • Luca Ambrogioni
In this work we show that generative diffusion models can be interpreted as energy-based models and that, when trained on discrete patterns, their energy function is (asymptotically) identical to that of modern Hopfield networks.
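For comparison, the modern (continuous-state) Hopfield energy over stored patterns has the familiar log-sum-exp form; a minimal sketch, with additive constants omitted:

```python
import numpy as np
from scipy.special import logsumexp

def modern_hopfield_energy(x, patterns, beta=1.0):
    """Modern (continuous-state) Hopfield energy over stored patterns xi_i
    (rows of `patterns`), up to additive constants:
        E(x) = -(1/beta) log sum_i exp(beta <xi_i, x>) + 0.5 <x, x>
    Gradient descent on E retrieves the stored pattern nearest to x."""
    return -logsumexp(beta * patterns @ x) / beta + 0.5 * x @ x
```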
1 code implementation • NeurIPS 2023 • Gabriel Raya, Luca Ambrogioni
In this paper, we show that the dynamics of these models exhibit a spontaneous symmetry breaking that divides the generative dynamics into two distinct phases: 1) a linear steady-state dynamics around a central fixed point, and 2) an attractor dynamics directed towards the data manifold.
1 code implementation • 19 May 2022 • Gianluigi Silvestri, Daan Roos, Luca Ambrogioni
In this work, we provide a deterministic alternative to the stochastic variational training of generative autoencoders.
1 code implementation • ICLR 2022 • Gianluigi Silvestri, Emily Fertig, Dave Moore, Luca Ambrogioni
We also introduce gated structured layers, which allow bypassing the parts of the models that fail to capture the statistics of the data.
no code implementations • 17 Sep 2021 • Luca Ambrogioni
There is a strong link between the general concept of intelligence and the ability to collect and use information.
no code implementations • 23 Feb 2021 • Sander Dalm, Nasir Ahmad, Luca Ambrogioni, Marcel van Gerven
Many of the recent advances in the field of artificial intelligence have been fueled by the highly successful backpropagation of error (BP) algorithm, which efficiently solves the credit assignment problem in artificial neural networks.
no code implementations • 9 Feb 2021 • Luca Ambrogioni, Gianluigi Silvestri, Marcel van Gerven
We evaluate the performance of the new variational programs in a series of structured inference problems.
no code implementations • 1 Jan 2021 • Thirza Dado, Yağmur Güçlütürk, Luca Ambrogioni, Gabrielle Ras, Sander E. Bosch, Marcel van Gerven, Umut Güçlü
We introduce a new framework for hyperrealistic reconstruction of perceived naturalistic stimuli from brain recordings.
no code implementations • 1 Jan 2021 • Gabrielle Ras, Luca Ambrogioni, Pim Haselager, Marcel van Gerven, Umut Güçlü
In a 3TConv, the 3D convolutional filter is obtained by learning a 2D filter and a set of temporal transformation parameters, resulting in a sparse filter that requires fewer parameters.
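As a rough illustration of the parameter saving (the actual transformation family is learned in the paper; per-frame scaling and integer translation below are assumptions made only for this sketch):

```python
import numpy as np

def build_3t_filter(filter_2d, scales, shifts):
    """Assemble a 3D temporal filter from a single shared 2D filter plus
    per-frame transformation parameters: frame t is a scaled, translated
    copy of the 2D filter. Parameter count: k*k + 3*T instead of k*k*T."""
    frames = [s * np.roll(np.roll(filter_2d, dy, axis=0), dx, axis=1)
              for s, (dy, dx) in zip(scales, shifts)]
    return np.stack(frames, axis=0)  # shape (T, k, k)

filt2d = np.random.randn(3, 3)  # one learned 2D filter
filt3d = build_3t_filter(filt2d,
                         scales=[1.0, 0.8, 0.6],
                         shifts=[(0, 0), (1, 0), (2, 0)])
```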
no code implementations • 29 Jun 2020 • Gabriëlle Ras, Luca Ambrogioni, Pim Haselager, Marcel A. J. van Gerven, Umut Güçlü
Finally, we implicitly demonstrate that, in popular ConvNets, the 2DConv can be replaced with a 3TConv and that the weights can be transferred to yield pretrained 3TConvs.
1 code implementation • NeurIPS 2020 • Nasir Ahmad, Marcel A. J. van Gerven, Luca Ambrogioni
An alternative called target propagation proposes to solve this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into plausible, layer-wise 'targets' for every unit.
1 code implementation • 9 Mar 2020 • Nasir Ahmad, Luca Ambrogioni, Marcel A. J. van Gerven
We propose a solution to the weight transport problem, which questions the biological plausibility of the backpropagation algorithm.
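To see the problem concretely: exact backpropagation requires the feedback pathway to use the transpose of the forward weights. The sketch below contrasts that step with feedback alignment (Lillicrap et al.), a well-known workaround shown only for illustration; it is not the weight-inference mechanism proposed here.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4)) * 0.1   # forward weights, layer 1
W2 = rng.normal(size=(2, 8)) * 0.1   # forward weights, layer 2
B2 = rng.normal(size=(8, 2)) * 0.1   # fixed random feedback matrix

x, y = rng.normal(size=4), rng.normal(size=2)
h = np.tanh(W1 @ x)
e = W2 @ h - y                        # output error (squared-loss gradient)

grad_W2 = np.outer(e, h)
# Exact backprop would propagate e through W2.T -- the "weight transport"
# step that is considered biologically implausible. Feedback alignment
# substitutes the fixed random matrix B2 instead.
delta_h = (B2 @ e) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
grad_W1 = np.outer(delta_h, x)
```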
2 code implementations • 3 Feb 2020 • Luca Ambrogioni, Kate Lin, Emily Fertig, Sharad Vikram, Max Hinne, Dave Moore, Marcel van Gerven
However, the performance of the variational approach depends on the choice of an appropriate variational family.
no code implementations • 29 Jan 2020 • Patrick Dallaire, Luca Ambrogioni, Ludovic Trottier, Umut Güçlü, Max Hinne, Philippe Giguère, Brahim Chaib-Draa, Marcel van Gerven, Francois Laviolette
This paper introduces the Indian Chefs Process (ICP), a Bayesian nonparametric prior on the joint space of infinite directed acyclic graphs (DAGs) and orders that generalizes the Indian Buffet Process.
no code implementations • 20 Dec 2019 • Gabriëlle Ras, Ron Dotsch, Luca Ambrogioni, Umut Güçlü, Marcel A. J. van Gerven
It is important that we understand the driving factors behind the predictions, in humans and in deep neural networks.
no code implementations • 9 Dec 2019 • Gabriëlle Ras, Luca Ambrogioni, Umut Güçlü, Marcel A. J. van Gerven
3D convolutional neural networks are difficult to train because they are parameter-expensive and data-hungry.
1 code implementation • 15 Nov 2019 • Max Hinne, David Leeftink, Marcel A. J. van Gerven, Luca Ambrogioni
Quasi-experimental research designs, such as regression discontinuity and interrupted time series, allow for causal inference in the absence of a randomized controlled trial, at the cost of additional assumptions.
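For orientation, the classical sharp regression-discontinuity estimate that such Bayesian treatments generalize can be written in a few lines (a minimal numpy sketch, not the paper's model):

```python
import numpy as np

def sharp_rd_effect(x, y, cutoff=0.0):
    """Classical sharp regression-discontinuity estimate: fit a separate
    line on each side of the cutoff and report the jump between the two
    fitted values at the cutoff."""
    left, right = x < cutoff, x >= cutoff
    fit_l = np.polyfit(x[left], y[left], 1)
    fit_r = np.polyfit(x[right], y[right], 1)
    return np.polyval(fit_r, cutoff) - np.polyval(fit_l, cutoff)
```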
no code implementations • 9 Jul 2019 • Luca Ambrogioni, Umut Güçlü, Marcel van Gerven
A possible way of dealing with this problem is to use an ensemble of GANs, where (ideally) each network models a single mode.
no code implementations • 31 Mar 2019 • Luca Ambrogioni, Marcel A. J. van Gerven
Furthermore, we introduce a family of variance reduction techniques that can be applied to other gradient estimators.
no code implementations • 7 Nov 2018 • Luca Ambrogioni, Umut Güçlü, Marcel van Gerven
The solution of the resulting optimal transport problem provides both a particle approximation and a set of optimal transportation densities that map each particle to a segment of the posterior distribution.
no code implementations • NeurIPS 2018 • Luca Ambrogioni, Umut Güçlü, Yağmur Güçlütürk, Max Hinne, Eric Maris, Marcel A. J. van Gerven
This paper introduces Wasserstein variational inference, a new form of approximate Bayesian inference based on optimal transport theory.
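The optimal-transport ingredient is easiest to see in one dimension, where the Wasserstein distance between two equal-size samples has a closed form via sorting (a sketch of the general concept, not the paper's estimator):

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """p-Wasserstein distance between two equal-size 1D samples. In 1D the
    optimal coupling is monotone, so sorting both samples solves the
    optimal transport problem exactly."""
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)
```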
no code implementations • 29 May 2018 • Luca Ambrogioni, Umut Güçlü, Julia Berezutskaya, Eva W. P. van den Borne, Yağmur Güçlütürk, Max Hinne, Eric Maris, Marcel A. J. van Gerven
In this paper, we introduce a new form of amortized variational inference by using the forward KL divergence in a joint-contrastive variational loss.
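A useful property of the forward KL is that minimizing it over the variational parameters reduces to maximum-likelihood fitting of q(z|x) on samples from the joint p(x, z). A toy linear-Gaussian sketch of this reduction (the model and numbers are illustrative):

```python
import numpy as np

# Toy linear-Gaussian model: p(z) = N(0, 1), p(x|z) = N(z, 0.25).
rng = np.random.default_rng(0)
z = rng.normal(size=10_000)
x = z + 0.5 * rng.normal(size=10_000)

# Minimizing the forward KL, E_p(x)[KL(p(z|x) || q(z|x))], reduces to
# maximum-likelihood fitting of q(z|x) = N(a x + b, s^2) on joint samples,
# i.e. ordinary least-squares regression of z on x.
a, b = np.polyfit(x, z, 1)
s2 = np.mean((z - (a * x + b)) ** 2)
print(a, b, s2)  # approaches the exact posterior N(0.8 x, 0.2)
```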
1 code implementation • 19 May 2017 • Luca Ambrogioni, Umut Güçlü, Marcel A. J. van Gerven, Eric Maris
In the Bayesian filtering example, we show that the method can be used to filter complex nonlinear and non-Gaussian signals defined on manifolds.
no code implementations • NeurIPS 2017 • Luca Ambrogioni, Max Hinne, Marcel van Gerven, Eric Maris
Here we propose to model this causal interaction using integro-differential equations and causal kernels that allow for a rich analysis of effective connectivity.
no code implementations • 10 Apr 2017 • Luca Ambrogioni, Eric Maris
This is possible because the posterior expectation of Gaussian process regression maps a finite set of samples to a function defined on the whole real line, expressed as a linear combination of covariance functions.
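The mechanism referred to here is the standard GP posterior mean, which interpolates a finite sample as a weighted sum of covariance functions centered at the training inputs; a minimal sketch with an assumed RBF kernel:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / lengthscale) ** 2)

def gp_posterior_mean(x_star, x_train, y_train, noise=1e-2):
    """GP regression posterior mean,
        m(x*) = k(x*, X) (K + s^2 I)^{-1} y,
    a linear combination of covariance functions centered at the training
    inputs, defined for every point on the real line."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_star, x_train) @ alpha
```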
no code implementations • 17 Feb 2017 • Luca Ambrogioni, Umut Güçlü, Eric Maris, Marcel van Gerven
Estimating the state of a dynamical system from a series of noise-corrupted observations is fundamental in many areas of science and engineering.
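The textbook baseline for this task is the linear-Gaussian Kalman filter; a minimal 1D sketch, shown for context rather than as the method of the paper:

```python
import numpy as np

def kalman_filter_1d(ys, a=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Textbook 1D Kalman filter for the linear-Gaussian model
    z_t = a z_{t-1} + N(0, q),  y_t = z_t + N(0, r)."""
    m, p, means = m0, p0, []
    for y in ys:
        m, p = a * m, a * a * p + q           # predict
        k = p / (p + r)                       # Kalman gain
        m, p = m + k * (y - m), (1 - k) * p   # update
        means.append(m)
    return np.array(means)
```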
no code implementations • 30 Nov 2016 • Luca Ambrogioni, Eric Maris
Furthermore, complex-valued Gaussian process regression allows prior information about the structure of the signal and noise to be incorporated, thereby tailoring the analysis to the features of the signal.
no code implementations • 31 Oct 2016 • Luca Ambrogioni, Eric Maris
In this paper, we introduce a new framework for analyzing nonstationary time series using locally stationary Gaussian process analysis with parameters that are coupled through a hidden Markov model.
no code implementations • 9 May 2016 • Luca Ambrogioni, Marcel A. J. van Gerven, Eric Maris
Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks.