1 code implementation • 8 Dec 2024 • Anton Baumann, Rui Li, Marcus Klasson, Santeri Mentu, Shyamgopal Karthik, Zeynep Akata, Arno Solin, Martin Trapp
Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks.
no code implementations • 27 Nov 2024 • Rui Li, Marcus Klasson, Arno Solin, Martin Trapp
The rising interest in Bayesian deep learning (BDL) has led to a plethora of methods for estimating the posterior distribution.
no code implementations • 8 Aug 2024 • Oliver Broadrick, William Cao, Benjie Wang, Martin Trapp, Guy Van Den Broeck
We show that for distributions over binary random variables these representations (PMF and CDF) are essentially equivalent, in the sense that one can be transformed to the other in polynomial time.
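To make the equivalence concrete, here is a brute-force sketch (hypothetical helper names, exponential enumeration purely for illustration) showing that Möbius inversion recovers the PMF from the CDF for binary variables; the paper's result is that this conversion can be carried out in polynomial time on the circuit representation itself.

```python
from itertools import product

# Toy demonstration of the PMF <-> CDF correspondence for binary
# random vectors. This brute-force version enumerates all 2^n states;
# the paper shows the same transformation can be done in polynomial
# time directly on the circuit representation.

def pmf_to_cdf(pmf, n):
    """CDF(x) = sum of pmf(y) over all y with y_i <= x_i for all i."""
    return {x: sum(p for y, p in pmf.items()
                   if all(yi <= xi for yi, xi in zip(y, x)))
            for x in product((0, 1), repeat=n)}

def cdf_to_pmf(cdf, n):
    """Moebius inversion: pmf(x) = sum over y <= x of (-1)^(|x|-|y|) CDF(y)."""
    pmf = {}
    for x in product((0, 1), repeat=n):
        total = 0.0
        for y in product((0, 1), repeat=n):
            if all(yi <= xi for yi, xi in zip(y, x)):
                total += (-1) ** (sum(x) - sum(y)) * cdf[y]
        pmf[x] = total
    return pmf

n = 2
pmf = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
cdf = pmf_to_cdf(pmf, n)
recovered = cdf_to_pmf(cdf, n)
assert all(abs(pmf[x] - recovered[x]) < 1e-12 for x in pmf)
```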
no code implementations • 22 May 2024 • Lingyun Yao, Martin Trapp, Jelin Leslin, Gaurav Singh, Peng Zhang, Karthekeyan Periasamy, Martin Andraud
Hence, hardware-efficient computation of PCs is highly interesting for edge computing applications.
no code implementations • 11 Apr 2024 • Rui Li, Martin Trapp, Marcus Klasson, Arno Solin
Deployment of deep neural networks in real-world settings typically requires adaptation to new tasks with few examples.
1 code implementation • NeurIPS 2023 • Zhongjie Yu, Martin Trapp, Kristian Kersting
In many real-world scenarios, it is crucial to be able to reliably and efficiently reason under uncertainty while capturing complex relationships in data.
2 code implementations • 1 Oct 2023 • Lorenzo Loconte, Aleksanteri M. Sladek, Stefan Mengel, Martin Trapp, Arno Solin, Nicolas Gillis, Antonio Vergari
Mixture models are traditionally represented and learned by adding several distributions as components.
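As a point of reference for the additive convention the paper departs from, the sketch below contrasts a standard additive mixture with a squared ("subtractive") combination that admits negative coefficients; the grid-based normalization is a stand-in for the closed-form normalization that squared circuits make tractable.

```python
import numpy as np

# Additive mixture (non-negative weights) vs. a "subtractive" mixture
# obtained by squaring a linear combination with negative coefficients.
# Normalization here is numeric on a 1-D grid for illustration only.

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]

# Conventional additive mixture: weights are non-negative and sum to 1.
additive = 0.5 * gauss(x, -1.5, 1.0) + 0.5 * gauss(x, 1.5, 1.0)

# Squared mixture: a negative coefficient carves mass out of the middle;
# squaring keeps the density non-negative, then we renormalize.
raw = 1.0 * gauss(x, 0.0, 2.0) - 0.6 * gauss(x, 0.0, 0.5)
squared = raw ** 2
squared /= np.sum(squared) * dx  # numeric normalization (grid approximation)

print(np.sum(additive) * dx, np.sum(squared) * dx)  # both ~1.0
```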
no code implementations • 27 Sep 2023 • Xuanlong Yu, Yi Zuo, Zitao Wang, Xiaowen Zhang, Jiaxuan Zhao, Yuting Yang, Licheng Jiao, Rui Peng, Xinyi Wang, Junpei Zhang, Kexin Zhang, Fang Liu, Roberto Alcover-Couso, Juan C. SanMiguel, Marcos Escudero-Viñolo, Hanlin Tian, Kenta Matsui, Tianhao Wang, Fahmy Adan, Zhitong Gao, Xuming He, Quentin Bouniot, Hossein Moghaddam, Shyam Nandan Rai, Fabio Cermelli, Carlo Masone, Andrea Pilzer, Elisa Ricci, Andrei Bursuc, Arno Solin, Martin Trapp, Rui Li, Angela Yao, Wenlong Chen, Ivor Simpson, Neill D. F. Campbell, Gianni Franchi
This paper outlines the winning solutions employed in addressing the MUAD uncertainty quantification challenge held at ICCV 2023.
1 code implementation • 13 Feb 2023 • Lassi Meronen, Martin Trapp, Andrea Pilzer, Le Yang, Arno Solin
Dynamic neural networks are a recent technique that promises a remedy for the increasing size of modern deep learning models by dynamically adapting their computational cost to the difficulty of the inputs.
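Early-exit architectures are one common instantiation of dynamic networks; the schematic sketch below (random placeholder weights, hypothetical confidence threshold) shows the confidence-gated exits such models use, not the paper's trained architecture.

```python
import numpy as np

# Schematic early-exit network: intermediate classifier heads let easy
# inputs leave the network early. Weights are random placeholders; in
# practice each block and exit head is trained.

rng = np.random.default_rng(0)
blocks = [rng.normal(size=(16, 16)) * 0.1 for _ in range(4)]
heads = [rng.normal(size=(16, 3)) * 0.1 for _ in range(4)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x, threshold=0.9):
    """Run blocks in sequence; exit as soon as an intermediate head is confident."""
    h = x
    for depth, (W, H) in enumerate(zip(blocks, heads)):
        h = np.tanh(h @ W)
        probs = softmax(h @ H)
        if probs.max() >= threshold or depth == len(blocks) - 1:
            return probs, depth  # confidence gate triggers an early exit

probs, depth = predict(rng.normal(size=16))
print(f"exited after block {depth}, confidence {probs.max():.2f}")
```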
1 code implementation • 31 Jan 2023 • Ella Tamir, Martin Trapp, Arno Solin
We integrate Bayesian filtering and optimal control into learning the diffusion process, enabling the generation of constrained stochastic processes governed by sparse observations at intermediate stages and terminal constraints.
1 code implementation • 16 Aug 2022 • Subhankar Roy, Martin Trapp, Andrea Pilzer, Juho Kannala, Nicu Sebe, Elisa Ricci, Arno Solin
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
no code implementations • 17 Jun 2022 • Ari Heljakka, Martin Trapp, Juho Kannala, Arno Solin
This observed 'predictive' multiplicity (PM) also implies elusive differences in the internals of the models, their 'representational' multiplicity (RM).
2 code implementations • NeurIPS 2021 • Lassi Meronen, Martin Trapp, Arno Solin
Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret.
1 code implementation • 16 Jun 2021 • Zhongjie Yu, Mingye Zhu, Martin Trapp, Arseny Skryagin, Kristian Kersting
Inspired by recent advances in the field of expert-based approximations of Gaussian processes (GPs), we present an expert-based approach to large-scale multi-output regression using single-output GP experts.
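For context, a standard expert-based building block is the product-of-experts aggregation of independent Gaussian predictions sketched below (toy numbers); the paper's contribution is a different, learned way of combining single-output experts, not this plain product.

```python
import numpy as np

# Generic product-of-experts (PoE) aggregation of Gaussian predictions
# from independent GP experts: precisions add, means are
# precision-weighted. A standard baseline, shown for orientation only.

def poe_combine(means, variances):
    """Product of Gaussians at one test point."""
    means, variances = np.asarray(means), np.asarray(variances)
    precision = np.sum(1.0 / variances)
    mean = np.sum(means / variances) / precision
    return mean, 1.0 / precision

# Three experts' predictive distributions at the same test point:
mu, var = poe_combine([1.0, 1.2, 0.8], [0.5, 0.3, 0.4])
print(f"combined prediction: {mu:.3f} +/- {np.sqrt(var):.3f}")
```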
1 code implementation • 22 Jun 2020 • Perttu Hämäläinen, Martin Trapp, Tuure Saloheimo, Arno Solin
We propose Deep Residual Mixture Models (DRMMs), a novel deep generative model architecture.
2 code implementations • 4 May 2020 • Tomas Pevny, Vasek Smidl, Martin Trapp, Ondrej Polacek, Tomas Oberhuber
In this work, we propose Sum-Product-Transform Networks (SPTN), an extension of sum-product networks that uses invertible transformations as additional internal nodes.
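A minimal sketch of the node types involved, assuming toy hand-picked parameters: sum nodes mix child densities, and a transform node applies an invertible affine map with the usual change-of-variables correction (a product node would simply add children's log-densities over disjoint scopes).

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch of SPTN-style nodes: a Gaussian leaf, an affine transform node
# with a log-determinant correction, and a sum node mixing children in
# log space. Structure and parameters are illustrative toys.

def leaf_logpdf(x):
    return multivariate_normal(mean=np.zeros(2), cov=np.eye(2)).logpdf(x)

def transform_logpdf(x, A, b, child_logpdf):
    """log p(x) = log p_child(Ax + b) + log|det A|  (affine change of variables)."""
    sign, logdet = np.linalg.slogdet(A)
    return child_logpdf(A @ x + b) + logdet

def sum_logpdf(x, weights, children):
    """Weighted mixture of child densities, computed stably in log space."""
    logs = np.array([np.log(w) + c(x) for w, c in zip(weights, children)])
    m = logs.max()
    return m + np.log(np.exp(logs - m).sum())

A = np.array([[2.0, 0.3], [0.0, 0.5]])
b = np.array([0.1, -0.2])
child1 = lambda x: transform_logpdf(x, A, b, leaf_logpdf)
child2 = leaf_logpdf
print(sum_logpdf(np.array([0.2, 0.4]), [0.3, 0.7], [child1, child2]))
```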
1 code implementation • ICML 2020 • Robert Peharz, Steven Lang, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Guy Van Den Broeck, Kristian Kersting, Zoubin Ghahramani
Probabilistic circuits (PCs) are a promising avenue for probabilistic modeling, as they permit a wide range of exact and efficient inference routines.
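A hand-built toy circuit illustrates the kind of exact inference meant here: marginalizing a variable amounts to replacing its leaves by 1 and evaluating the circuit bottom-up once (this is a generic illustration, not the EinsumNetworks API from the paper).

```python
import math

# Minimal probabilistic circuit over two binary variables X1, X2.
# Exact marginal inference: set a marginalized variable's leaves to 1
# and evaluate the circuit in a single feed-forward pass.

def bernoulli_leaf(var, p):
    """Leaf over one variable; None in the evidence marginalizes it (outputs 1)."""
    def f(ev):
        if ev[var] is None:
            return 1.0
        return p if ev[var] == 1 else 1.0 - p
    return f

def product(*children):
    return lambda ev: math.prod(c(ev) for c in children)

def weighted_sum(weights, children):
    return lambda ev: sum(w * c(ev) for w, c in zip(weights, children))

# p(x1, x2) = 0.4 * Bern(x1; .9) Bern(x2; .2) + 0.6 * Bern(x1; .1) Bern(x2; .7)
pc = weighted_sum(
    [0.4, 0.6],
    [product(bernoulli_leaf("x1", 0.9), bernoulli_leaf("x2", 0.2)),
     product(bernoulli_leaf("x1", 0.1), bernoulli_leaf("x2", 0.7))],
)

print(pc({"x1": 1, "x2": 0}))     # joint probability p(x1=1, x2=0)
print(pc({"x1": 1, "x2": None}))  # exact marginal p(x1=1) in one pass: 0.42
```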
2 code implementations • 7 Feb 2020 • Mohamed Tarek, Kai Xu, Martin Trapp, Hong Ge, Zoubin Ghahramani
Since DynamicPPL is a modular, stand-alone library, any probabilistic programming system written in Julia, such as Turing.jl, can use it to specify models and trace their model parameters.
no code implementations • AABI Symposium 2019 • Philipp Gabler, Martin Trapp, Hong Ge, Franz Pernkopf
Many modern machine learning algorithms, such as automatic differentiation (AD) and versions of approximate Bayesian inference, can be understood as a particular case of message passing on some computation graph.
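As an illustration of this message-passing view, the toy reverse-mode AD below propagates adjoint "messages" backwards along the edges of a computation graph (hypothetical Node class, not the paper's framework).

```python
# Tiny reverse-mode AD written explicitly as message passing: forward
# messages are values, backward messages are adjoints accumulated along
# the edges of the computation graph. Illustrative only.

class Node:
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents          # edges of the computation graph
        self.local_grads = local_grads  # d(output)/d(parent) on each edge
        self.adjoint = 0.0

def mul(a, b):
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def add(a, b):
    return Node(a.value + b.value, (a, b), (1.0, 1.0))

def backward(output):
    """Send adjoint messages from the output back along every edge."""
    # Topologically order the graph so each node's adjoint is complete
    # before it sends messages to its parents.
    order, seen = [], set()
    def visit(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for p in node.parents:
            visit(p)
        order.append(node)
    visit(output)
    output.adjoint = 1.0
    for node in reversed(order):
        for parent, g in zip(node.parents, node.local_grads):
            parent.adjoint += node.adjoint * g  # backward message on this edge

x, y = Node(3.0), Node(4.0)
f = add(mul(x, y), y)        # f = x*y + y
backward(f)
print(x.adjoint, y.adjoint)  # 4.0, 4.0  (df/dx = y, df/dy = x + 1)
```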
1 code implementation • AABI Symposium 2019 • Kai Xu, Hong Ge, Will Tebbutt, Mohamed Tarek, Martin Trapp, Zoubin Ghahramani
Stan's Hamiltonian Monte Carlo (HMC) has demonstrated remarkable sampling robustness and efficiency in a wide range of Bayesian inference problems through carefully crafted adaptation schemes for the celebrated No-U-Turn sampler (NUTS) algorithm.
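For readers unfamiliar with the mechanics being adapted, a minimal HMC step with a leapfrog integrator and Metropolis correction is sketched below, targeting a standard Gaussian; step-size and mass-matrix adaptation and the NUTS trajectory criterion, which are AdvancedHMC's focus, are omitted.

```python
import numpy as np

# Minimal Hamiltonian Monte Carlo step: resample momentum, simulate
# Hamiltonian dynamics with a leapfrog integrator, then accept or
# reject with a Metropolis correction. Tuning parameters are toys.

def neg_logp(q):           # potential energy U(q) = -log p(q)
    return 0.5 * q @ q     # standard Gaussian target

def grad_neg_logp(q):
    return q

def hmc_step(q, rng, step_size=0.1, n_leapfrog=20):
    p = rng.normal(size=q.shape)                     # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step_size * grad_neg_logp(q_new)  # half momentum step
    for _ in range(n_leapfrog - 1):
        q_new += step_size * p_new
        p_new -= step_size * grad_neg_logp(q_new)
    q_new += step_size * p_new
    p_new -= 0.5 * step_size * grad_neg_logp(q_new)  # final half step
    h_old = neg_logp(q) + 0.5 * p @ p
    h_new = neg_logp(q_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(h_old - h_new):         # Metropolis accept
        return q_new
    return q

rng = np.random.default_rng(0)
q, samples = np.zeros(2), []
for _ in range(1000):
    q = hmc_step(q, rng)
    samples.append(q)
print(np.mean(samples, axis=0), np.std(samples, axis=0))  # roughly 0 and 1
```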
1 code implementation • 10 Oct 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf, Carl E. Rasmussen
Gaussian Processes (GPs) are powerful non-parametric Bayesian regression models that allow exact posterior inference, but exhibit high computational and memory costs.
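The cost statement can be read directly off the standard exact-GP posterior equations; in the sketch below (toy 1-D data, assumed RBF kernel and noise level), the Cholesky factorization of the n×n kernel matrix is the O(n³) bottleneck the paper targets.

```python
import numpy as np

# Exact GP regression posterior. The Cholesky factorization is O(n^3)
# time and the kernel matrix O(n^2) memory, which is what makes exact
# GPs expensive at scale.

def rbf(a, b, lengthscale=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=50)
y = np.sin(X) + 0.1 * rng.normal(size=50)
Xs = np.linspace(-3, 3, 5)          # test inputs

noise = 0.1**2
K = rbf(X, X) + noise * np.eye(len(X))  # n x n kernel matrix
L = np.linalg.cholesky(K)               # O(n^3): the bottleneck
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Ks = rbf(X, Xs)
mean = Ks.T @ alpha                      # posterior mean at test points
v = np.linalg.solve(L, Ks)
var = rbf(Xs, Xs).diagonal() - np.sum(v * v, axis=0)
print(mean, var)
```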
1 code implementation • NeurIPS 2019 • Martin Trapp, Robert Peharz, Hong Ge, Franz Pernkopf, Zoubin Ghahramani
While parameter learning in SPNs is well developed, structure learning leaves something to be desired: Even though there is a plethora of SPN structure learners, most of them are somewhat ad-hoc and based on intuition rather than a clear learning principle.
1 code implementation • 20 May 2019 • Martin Trapp, Robert Peharz, Franz Pernkopf
It seems to be a pearl of conventional wisdom that parameter learning in deep sum-product networks is surprisingly fast compared to shallow mixture models.
1 code implementation • 12 Sep 2018 • Martin Trapp, Robert Peharz, Carl E. Rasmussen, Franz Pernkopf
In this paper, we introduce a natural and expressive way to tackle these problems, by incorporating GPs in sum-product networks (SPNs), a recently proposed tractable probabilistic model allowing exact and efficient inference.
no code implementations • 5 Jun 2018 • Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Kristian Kersting, Zoubin Ghahramani
The need for consistent treatment of uncertainty has recently triggered increased interest in probabilistic deep learning methods.
1 code implementation • 10 Oct 2017 • Martin Trapp, Tamas Madl, Robert Peharz, Franz Pernkopf, Robert Trappl
In several domains obtaining class annotations is expensive while at the same time unlabelled data are abundant.