no code implementations • 1 Sep 2023 • Marcel Hirt, Domenico Campolo, Victoria Leong, Juan-Pablo Ortega
To encode latent variables from different modality subsets, Product-of-Experts (PoE) or Mixture-of-Experts (MoE) aggregation schemes have been routinely used and shown to yield different trade-offs, for instance, regarding their generative quality or consistency across multiple modalities.
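The contrast between the two aggregation schemes can be sketched in a few lines. The snippet below is a generic illustration (not the paper's specific model): a PoE combines Gaussian experts by adding precisions, while an MoE samples from one modality's expert at a time.

```python
import numpy as np

def poe_gaussian(mus, sigmas):
    """Product of Gaussian experts: precisions add, and the combined
    mean is the precision-weighted average of the expert means."""
    precisions = 1.0 / np.square(sigmas)
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (precisions * mus).sum(axis=0)
    return mu, np.sqrt(var)

def moe_sample(mus, sigmas, rng):
    """Mixture of Gaussian experts (uniform weights): pick one
    modality's expert at random and sample from it."""
    k = rng.integers(len(mus))
    return mus[k] + sigmas[k] * rng.standard_normal(mus[k].shape)
```

For two unit-variance experts with means 0 and 2, the PoE posterior has mean 1 and variance 0.5, i.e. it is sharper than either expert, whereas the MoE stays bimodal.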
1 code implementation • 26 Aug 2023 • Marcel Hirt, Vasileios Kreouzis, Petros Dellaportas
Variational autoencoders (VAEs) are popular likelihood-based generative models which can be efficiently trained by maximizing an Evidence Lower Bound (ELBO).
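As a reference point, a single-sample Monte Carlo ELBO estimate for a standard Gaussian VAE looks as follows; this is the textbook objective, not the paper's specific variant, and `decode` is a hypothetical decoder function.

```python
import numpy as np

def gaussian_elbo(x, enc_mu, enc_logvar, decode, rng):
    """One-sample ELBO estimate for a VAE with a standard-normal prior
    and a unit-variance Gaussian decoder (constants dropped)."""
    eps = rng.standard_normal(enc_mu.shape)
    z = enc_mu + np.exp(0.5 * enc_logvar) * eps   # reparameterization trick
    x_hat = decode(z)
    # Gaussian reconstruction log-likelihood, up to an additive constant
    log_lik = -0.5 * np.sum((x - x_hat) ** 2)
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kl = -0.5 * np.sum(1.0 + enc_logvar - enc_mu ** 2 - np.exp(enc_logvar))
    return log_lik - kl
```

When the approximate posterior equals the prior (zero mean, unit variance) the KL term vanishes, so a perfect reconstruction yields an ELBO of zero under this parameterization.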
1 code implementation • 1 Nov 2022 • Giovanni Ballarin, Petros Dellaportas, Lyudmila Grigoryeva, Marcel Hirt, Sophie van Huellen, Juan-Pablo Ortega
Macroeconomic forecasting has recently started embracing techniques that can deal with large-scale datasets and series with unequal release periods.
1 code implementation • NeurIPS 2021 • Marcel Hirt, Michalis K. Titsias, Petros Dellaportas
Hamiltonian Monte Carlo (HMC) is a popular Markov Chain Monte Carlo (MCMC) algorithm to sample from an unnormalized probability distribution.
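For context, a minimal textbook HMC transition (leapfrog integration plus a Metropolis correction) can be written as below; this is the standard algorithm the paper builds on, not the paper's proposed sampler.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size, n_leapfrog, rng):
    """One HMC transition targeting exp(log_prob) via leapfrog
    integration of Hamiltonian dynamics and a Metropolis test."""
    p = rng.standard_normal(x.shape)                 # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(x_new)  # initial half step
    for _ in range(n_leapfrog):
        x_new += step_size * p_new                   # full position step
        p_new += step_size * grad_log_prob(x_new)    # full momentum step
    p_new -= 0.5 * step_size * grad_log_prob(x_new)  # undo extra half step
    # Accept or reject based on the change in total energy
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) \
               - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x
```

Run on a one-dimensional standard normal, the chain's sample mean and variance should settle near 0 and 1 respectively.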
1 code implementation • NeurIPS 2019 • Marcel Hirt, Petros Dellaportas, Alain Durmus
This family is based on new copula-like densities on the hypercube with non-uniform marginals that can be sampled efficiently, i.e., with a complexity linear in the dimension of the state space.
no code implementations • 23 May 2018 • Marcel Hirt, Petros Dellaportas
We present a scalable approach to performing approximate fully Bayesian inference in generic state space models.