no code implementations • 26 Oct 2023 • Luca Ambrogioni
While the fundamental ideas behind these models come from non-equilibrium physics, variational inference and stochastic calculus, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics.
no code implementations • 4 Oct 2023 • Luca Ambrogioni
Stationary covariance functions are a favorite in machine learning applications.
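As a quick illustration of the stationarity property this entry refers to, here is a minimal NumPy sketch; the RBF kernel is used purely as an example of a stationary covariance, not necessarily the family analyzed in the paper. A stationary covariance depends only on the difference between its inputs, so translating both inputs by the same amount leaves it unchanged:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    """Stationary (RBF) covariance: the value depends only on the lag x1 - x2."""
    return np.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

# Stationarity: shifting both inputs by the same amount leaves k unchanged.
k_original = rbf_kernel(0.3, 1.1)
k_shifted = rbf_kernel(0.3 + 5.0, 1.1 + 5.0)
```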
no code implementations • 29 Sep 2023 • Luca Ambrogioni
In this work we show that generative diffusion models can be interpreted as energy-based models and that, when trained on discrete patterns, their energy function is (asymptotically) identical to that of modern Hopfield networks.
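The connection to modern Hopfield networks can be made concrete with a minimal sketch of the continuous modern Hopfield retrieval update (q ← Xᵀ softmax(βXq), in the style of Ramsauer et al.); the function names and parameter values here are illustrative and not taken from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hopfield_retrieve(patterns, query, beta=8.0, steps=5):
    """Continuous modern Hopfield update: q <- X^T softmax(beta * X q),
    where the rows of `patterns` (X) are the stored memories."""
    q = query.copy()
    for _ in range(steps):
        q = patterns.T @ softmax(beta * (patterns @ q))
    return q

rng = np.random.default_rng(0)
patterns = rng.standard_normal((4, 16))            # four stored patterns
noisy = patterns[0] + 0.1 * rng.standard_normal(16)
retrieved = hopfield_retrieve(patterns, noisy)     # should fall into pattern 0's basin
```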
1 code implementation • 19 May 2022 • Gianluigi Silvestri, Daan Roos, Luca Ambrogioni
In this work, we provide a deterministic alternative to the stochastic variational training of generative autoencoders.
1 code implementation • ICLR 2022 • Gianluigi Silvestri, Emily Fertig, Dave Moore, Luca Ambrogioni
We also introduce gated structured layers, which allow bypassing the parts of the models that fail to capture the statistics of the data.
no code implementations • 17 Sep 2021 • Luca Ambrogioni
There is a strong link between the general concept of intelligence and the ability to collect and use information.
no code implementations • 23 Feb 2021 • Sander Dalm, Nasir Ahmad, Luca Ambrogioni, Marcel van Gerven
Many of the recent advances in the field of artificial intelligence have been fueled by the highly successful backpropagation of error (BP) algorithm, which efficiently solves the credit assignment problem in artificial neural networks.
no code implementations • 9 Feb 2021 • Luca Ambrogioni, Gianluigi Silvestri, Marcel van Gerven
We evaluate the performance of the new variational programs in a series of structured inference problems.
no code implementations • 1 Jan 2021 • Thirza Dado, Yağmur Güçlütürk, Luca Ambrogioni, Gabrielle Ras, Sander E. Bosch, Marcel van Gerven, Umut Güçlü
We introduce a new framework for hyperrealistic reconstruction of perceived naturalistic stimuli from brain recordings.
no code implementations • 1 Jan 2021 • Gabrielle Ras, Luca Ambrogioni, Pim Haselager, Marcel van Gerven, Umut Güçlü
In a 3TConv, the 3D convolutional filter is obtained by learning a 2D filter and a set of temporal transformation parameters, resulting in a sparse filter that requires fewer parameters.
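A minimal sketch of the idea, assuming per-frame scale and integer-shift transformations (the actual transformation family in the paper may differ): a T×H×W filter is generated from a single 2D filter plus a handful of per-frame parameters, so the parameter count is H·W plus a few per time step, instead of T·H·W for a full 3D filter.

```python
import numpy as np

def make_3t_filter(filter_2d, scales, shifts):
    """Build a T x H x W filter from one learned 2D filter and per-frame
    temporal transformation parameters (here: a scale and an integer
    spatial shift per time step -- an illustrative choice)."""
    frames = []
    for s, (dy, dx) in zip(scales, shifts):
        frame = s * np.roll(np.roll(filter_2d, dy, axis=0), dx, axis=1)
        frames.append(frame)
    return np.stack(frames)

base = np.arange(9, dtype=float).reshape(3, 3)
filt = make_3t_filter(base, scales=[1.0, 0.5], shifts=[(0, 0), (1, 0)])
```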
no code implementations • 29 Jun 2020 • Gabriëlle Ras, Luca Ambrogioni, Pim Haselager, Marcel A. J. van Gerven, Umut Güçlü
Finally, we implicitly demonstrate that, in popular ConvNets, the 2DConv can be replaced with a 3TConv and that the weights can be transferred to yield pretrained 3TConvs.
1 code implementation • NeurIPS 2020 • Nasir Ahmad, Marcel A. J. van Gerven, Luca Ambrogioni
An alternative called target propagation proposes to solve this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise and plausible 'targets' for every unit.
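The target-propagation idea can be sketched for a linear two-layer network, assuming a perfect top-down inverse for clarity (in practice, the top-down model is itself learned):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer linear network; the pseudoinverse stands in for a learned
# top-down model of neural activity (assumed perfect here for clarity).
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
W2_inv = np.linalg.pinv(W2)

x = rng.standard_normal(3)
h = W1 @ x                                # hidden-layer activity
y = W2 @ h                                # network output
y_target = y - 0.1 * (y - np.ones(2))     # nudge the output toward a goal

# Target propagation: convert the output target into a layer-wise target
# for the hidden units via the top-down model, instead of backpropagating
# gradients through W2.
h_target = W2_inv @ y_target
```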
1 code implementation • 9 Mar 2020 • Nasir Ahmad, Luca Ambrogioni, Marcel A. J. van Gerven
We propose a solution to the weight transport problem, a key objection to the biological plausibility of the backpropagation algorithm.
2 code implementations • 3 Feb 2020 • Luca Ambrogioni, Kate Lin, Emily Fertig, Sharad Vikram, Max Hinne, Dave Moore, Marcel van Gerven
However, the performance of the variational approach depends on the choice of an appropriate variational family.
no code implementations • 29 Jan 2020 • Patrick Dallaire, Luca Ambrogioni, Ludovic Trottier, Umut Güçlü, Max Hinne, Philippe Giguère, Brahim Chaib-Draa, Marcel van Gerven, Francois Laviolette
This paper introduces the Indian Chefs Process (ICP), a Bayesian nonparametric prior on the joint space of infinite directed acyclic graphs (DAGs) and orders that generalizes Indian Buffet Processes.
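For context, here is a minimal sketch of the plain Indian Buffet Process generative scheme that the ICP generalizes (`sample_ibp` and its parameters are illustrative): customer i takes each existing dish with probability proportional to its popularity, then samples a Poisson number of new dishes.

```python
import numpy as np

def sample_ibp(n_customers, alpha=2.0, seed=0):
    """Draw a binary feature matrix from the Indian Buffet Process prior:
    customer i takes each existing dish with probability count/i, then
    samples Poisson(alpha/i) brand-new dishes."""
    rng = np.random.default_rng(seed)
    counts = []                       # how often each dish has been taken
    rows = []
    for i in range(1, n_customers + 1):
        row = [int(rng.random() < c / i) for c in counts]
        for j, taken in enumerate(row):
            counts[j] += taken
        n_new = rng.poisson(alpha / i)
        row.extend([1] * n_new)
        counts.extend([1] * n_new)
        rows.append(row)
    Z = np.zeros((n_customers, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(10)
```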
no code implementations • 20 Dec 2019 • Gabriëlle Ras, Ron Dotsch, Luca Ambrogioni, Umut Güçlü, Marcel A. J. van Gerven
It is important that we understand the driving factors behind the predictions, in humans and in deep neural networks.
no code implementations • 9 Dec 2019 • Gabriëlle Ras, Luca Ambrogioni, Umut Güçlü, Marcel A. J. van Gerven
3D convolutional neural networks are difficult to train because they are parameter-expensive and data-hungry.
1 code implementation • 15 Nov 2019 • Max Hinne, David Leeftink, Marcel A. J. van Gerven, Luca Ambrogioni
Quasi-experimental research designs, such as regression discontinuity and interrupted time series, allow for causal inference in the absence of a randomized controlled trial, at the cost of additional assumptions.
no code implementations • 9 Jul 2019 • Luca Ambrogioni, Umut Güçlü, Marcel van Gerven
A possible way of dealing with this problem is to use an ensemble of GANs, where (ideally) each network models a single mode.
no code implementations • 31 Mar 2019 • Luca Ambrogioni, Marcel A. J. van Gerven
Furthermore, we introduce a family of variance reduction techniques that can be applied to other gradient estimators.
no code implementations • 7 Nov 2018 • Luca Ambrogioni, Umut Güçlü, Marcel van Gerven
The solution of the resulting optimal transport problem provides both a particle approximation and a set of optimal transportation densities that map each particle to a segment of the posterior distribution.
no code implementations • NeurIPS 2018 • Luca Ambrogioni, Umut Güçlü, Yağmur Güçlütürk, Max Hinne, Eric Maris, Marcel A. J. van Gerven
This paper introduces Wasserstein variational inference, a new form of approximate Bayesian inference based on optimal transport theory.
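In one dimension, optimal transport has a closed form that makes the core quantity concrete: the optimal plan simply matches sorted samples, giving the empirical Wasserstein distance. This is an illustrative sketch, not the paper's training objective:

```python
import numpy as np

def wasserstein_1d(samples_p, samples_q, p=1):
    """Empirical p-Wasserstein distance between two equally sized 1-D
    sample sets: the optimal transport plan matches sorted samples."""
    sp = np.sort(samples_p)
    sq = np.sort(samples_q)
    return np.mean(np.abs(sp - sq) ** p) ** (1 / p)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=2000)
b = rng.normal(0.5, 1.0, size=2000)   # same shape, shifted by 0.5
w = wasserstein_1d(a, b)              # should be close to the shift, 0.5
```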
no code implementations • 29 May 2018 • Luca Ambrogioni, Umut Güçlü, Julia Berezutskaya, Eva W. P. van den Borne, Yağmur Güçlütürk, Max Hinne, Eric Maris, Marcel A. J. van Gerven
In this paper, we introduce a new form of amortized variational inference by using the forward KL divergence in a joint-contrastive variational loss.
1 code implementation • 19 May 2017 • Luca Ambrogioni, Umut Güçlü, Marcel A. J. van Gerven, Eric Maris
In the Bayesian filtering example, we show that the method can be used to filter complex nonlinear and non-Gaussian signals defined on manifolds.
no code implementations • NeurIPS 2017 • Luca Ambrogioni, Max Hinne, Marcel van Gerven, Eric Maris
Here we propose to model this causal interaction using integro-differential equations and causal kernels that allow for a rich analysis of effective connectivity.
no code implementations • 10 Apr 2017 • Luca Ambrogioni, Eric Maris
This is possible because the posterior expectation of Gaussian process regression maps a finite set of samples to a function defined on the whole real line, expressed as a linear combination of covariance functions.
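A minimal NumPy sketch of this property, using an illustrative squared-exponential covariance: the posterior expectation at any point on the real line is a weighted sum of covariance functions centred on the observed inputs.

```python
import numpy as np

def k(x1, x2, ell=0.2):
    """Squared-exponential covariance function (illustrative choice)."""
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ell) ** 2)

# A finite set of (nearly noiseless) samples of an underlying function
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x)
sigma2 = 1e-6

# Weights of the linear combination: alpha = (K + sigma^2 I)^{-1} y
alpha = np.linalg.solve(k(x, x) + sigma2 * np.eye(len(x)), y)

def posterior_mean(x_star):
    """Posterior expectation at any x*: a linear combination of
    covariance functions centred on the training inputs."""
    return k(np.atleast_1d(np.asarray(x_star, dtype=float)), x) @ alpha
```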
no code implementations • 17 Feb 2017 • Luca Ambrogioni, Umut Güçlü, Eric Maris, Marcel van Gerven
Estimating the state of a dynamical system from a series of noise-corrupted observations is fundamental in many areas of science and engineering.
no code implementations • 30 Nov 2016 • Luca Ambrogioni, Eric Maris
Furthermore, complex-valued Gaussian process regression makes it possible to incorporate prior information about the structure of the signal and noise, thereby tailoring the analysis to the features of the signal.
no code implementations • 31 Oct 2016 • Luca Ambrogioni, Eric Maris
In this paper, we introduce a new framework for analyzing nonstationary time series using locally stationary Gaussian process analysis with parameters that are coupled through a hidden Markov model.
no code implementations • 9 May 2016 • Luca Ambrogioni, Marcel A. J. van Gerven, Eric Maris
Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks.