1 code implementation • 14 Aug 2024 • Stas Syrota, Pablo Moreno-Muñoz, Søren Hauberg
Latent space geometry provides a rigorous and empirically valuable framework for interacting with the latent variables of deep generative models.
1 code implementation • 7 Jun 2024 • Miguel González-Duque, Richard Michael, Simon Bartels, Yevgen Zainchkovskyy, Søren Hauberg, Wouter Boomsma
Optimizing discrete black-box functions is key in several domains, e.g., protein engineering and drug design.
no code implementations • 5 Jun 2024 • Hrittik Roy, Marco Miani, Carl Henrik Ek, Philipp Hennig, Marvin Pförtner, Lukas Tatzel, Søren Hauberg
Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e., BNNs assign different posterior densities to different parametrizations of identical functions.
1 code implementation • 27 May 2024 • Nicholas Krämer, Pablo Moreno-Muñoz, Hrittik Roy, Søren Hauberg
Tuning scientific and probabilistic machine learning models -- for example, partial differential equations, Gaussian processes, or Bayesian neural networks -- often relies on evaluating functions of matrices whose size grows with the data set or the number of parameters.
1 code implementation • 26 Apr 2024 • Richard Michael, Simon Bartels, Miguel González-Duque, Yevgen Zainchkovskyy, Jes Frellsen, Søren Hauberg, Wouter Boomsma
Optimizing efficiently over discrete data with only a few available target observations is a challenge in Bayesian optimization.
no code implementations • 4 Mar 2024 • Cong Geng, Tian Han, Peng-Tao Jiang, Hao Zhang, Jinwei Chen, Søren Hauberg, Bo Li
Generative models have shown strong generation ability, while efficient likelihood estimation is less explored.
no code implementations • 17 Jan 2024 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Nadia Figueroa, Gerhard Neumann, Leonel Rozo
Stability guarantees are crucial when ensuring a fully autonomous robot does not take undesirable or potentially harmful actions.
1 code implementation • NeurIPS 2023 • Thoranna Bender, Simon Moe Sørensen, Alireza Kashani, K. Eldjarn Hjorleifsson, Grethe Hyldig, Søren Hauberg, Serge Belongie, Frederik Warburg
We demonstrate that this shared concept embedding space improves upon separate embedding spaces for coarse flavor classification (alcohol percentage, country, grape, price, rating) and aligns with the intricate human perception of flavor.
1 code implementation • 20 Jul 2023 • Johan Ziruo Ye, Thomas Ørkild, Peter Lempel Søndergaard, Søren Hauberg
Digital dentistry has made significant advancements, yet numerous challenges remain.
1 code implementation • 23 Mar 2023 • Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Frederik Warburg, Aasa Feragen
Image segmentation relies heavily on neural networks which are known to be overconfident, especially when making predictions on out-of-distribution (OOD) images.
1 code implementation • 20 Dec 2022 • Alison Pouplin, David Eklund, Carl Henrik Ek, Søren Hauberg
Generative models are often stochastic, causing the data space, the Riemannian metric, and the geodesics, to be stochastic as well.
no code implementations • 10 Nov 2022 • Yevgen Zainchkovskyy, Jesper Ferkinghoff-Borg, Anja Bennett, Thomas Egebjerg, Nikolai Lorenzen, Per Jr. Greisen, Søren Hauberg, Carsten Stahlhut
Pre-trained protein language models have demonstrated significant applicability in different protein engineering tasks.
1 code implementation • 10 Sep 2022 • Pablo Moreno-Muñoz, Cilie W Feldager, Søren Hauberg
Decoders built on Gaussian processes (GPs) are enticing due to the marginalisation over the non-linear function space.
1 code implementation • 30 Jun 2022 • Marco Miani, Frederik Warburg, Pablo Moreno-Muñoz, Nicki Skafte Detlefsen, Søren Hauberg
In this work, we present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower-bound of the autoencoder evidence.
1 code implementation • 3 Jun 2022 • Helene Hauschultz, Rasmus Berg Palm, Pablo Moreno-Muñoz, Nicki Skafte Detlefsen, Andrew Allan du Plessis, Søren Hauberg
The encoder network of an autoencoder is an approximation of the nearest point projection onto the manifold spanned by the decoder.
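This view of the encoder can be made concrete with a toy stand-in: instead of a trained encoder, search directly for the latent code whose decoding is closest to the input. The decoder and candidate grid below are hypothetical illustrations, not the paper's setup.

```python
def nearest_point_projection(decoder, x, candidates):
    """Brute-force nearest-point projection onto the decoder manifold.

    A trained encoder approximates this map; here we simply pick, from a
    grid of candidate latent codes, the one whose decoding is closest to x.
    """
    def dist2(z):
        d = decoder(z)
        return sum((di - xi) ** 2 for di, xi in zip(d, x))
    return min(candidates, key=dist2)
```

For a decoder tracing the parabola z ↦ (z, z²), projecting the point (1, 1), which lies on the manifold, recovers z = 1.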
no code implementations • 31 May 2022 • Miguel González-Duque, Rasmus Berg Palm, Søren Hauberg, Sebastian Risi
Deep generative models can automatically create content of diverse types.
no code implementations • 17 Mar 2022 • Andri Bergsson, Søren Hauberg
Faithful visualizations of data residing on manifolds must take the underlying geometry into account when producing a flat planar view of the data.
no code implementations • 15 Mar 2022 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo
We argue that Riemannian manifolds may be learned via human demonstrations in which geodesics are natural motion skills.
no code implementations • 2 Mar 2022 • Federico Bergamin, Pierre-Alexandre Mattei, Jakob D. Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Søren Hauberg, Jes Frellsen
These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model.
2 code implementations • 22 Feb 2022 • Simon Bartels, Kristoffer Stensbo-Smidt, Pablo Moreno-Muñoz, Wouter Boomsma, Jes Frellsen, Søren Hauberg
We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data.
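The underlying computation is standard exact GP regression applied to the retained subset: the posterior mean at a test point is k*ᵀ(K + σ²I)⁻¹y. A minimal sketch, assuming an RBF kernel and a two-point subset so the matrix inverse can be written in closed form (this illustrates the formula, not the paper's subset-selection rule):

```python
import math

def rbf(x, y, ls=1.0):
    # Squared-exponential kernel.
    return math.exp(-0.5 * (x - y) ** 2 / ls ** 2)

def gp_posterior_mean(x_train, y_train, x_star, noise=1e-2):
    """GP posterior mean k*^T (K + noise*I)^{-1} y on two training points."""
    a = rbf(x_train[0], x_train[0]) + noise
    b = rbf(x_train[0], x_train[1])
    d = rbf(x_train[1], x_train[1]) + noise
    det = a * d - b * b  # closed-form 2x2 inverse
    inv = [[d / det, -b / det], [-b / det, a / det]]
    alpha = [inv[0][0] * y_train[0] + inv[0][1] * y_train[1],
             inv[1][0] * y_train[0] + inv[1][1] * y_train[1]]
    k_star = [rbf(x_star, x_train[0]), rbf(x_star, x_train[1])]
    return k_star[0] * alpha[0] + k_star[1] * alpha[1]
```

With small observation noise, the posterior mean at a training input nearly interpolates the corresponding target.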
1 code implementation • 22 Feb 2022 • Jakob D. Havtorn, Lasse Borgholt, Søren Hauberg, Jes Frellsen, Lars Maaløe
Stochastic latent variable models (LVMs) achieve state-of-the-art performance on natural image generation but are still inferior to deterministic models on speech.
no code implementations • 3 Feb 2022 • Andrea Vallone, Frederik Warburg, Hans Hansen, Søren Hauberg, Javier Civera
Place recognition and visual localization are particularly challenging in wide baseline configurations.
no code implementations • NeurIPS 2021 • Pierre Segonne, Yevgen Zainchkovskyy, Søren Hauberg
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space, and show how to leverage these in a practical Bayesian framework that casts a prior distribution over the model uncertainty.
no code implementations • NeurIPS 2021 • Cong Geng, Jia Wang, Zhiyong Gao, Jes Frellsen, Søren Hauberg
Energy-based models (EBMs) provide an elegant framework for density estimation, but they are notoriously difficult to train.
no code implementations • 29 Sep 2021 • Jakob Drachmann Havtorn, Lasse Borgholt, Jes Frellsen, Søren Hauberg, Lars Maaløe
While stochastic latent variable models (LVMs) now achieve state-of-the-art performance on natural image generation, they are still inferior to deterministic models on speech.
1 code implementation • 9 Jun 2021 • Georgios Arvanitidis, Miguel González-Duque, Alison Pouplin, Dimitris Kalatzis, Søren Hauberg
Latent space geometry has shown itself to provide a rich and rigorous framework for interacting with the latent variables of deep generative models.
no code implementations • 8 Jun 2021 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo
For robots to work alongside humans and perform in unstructured environments, they must learn new motion skills and adapt them to unseen situations on the fly.
no code implementations • 7 Jun 2021 • Dimitris Kalatzis, Johan Ziruo Ye, Alison Pouplin, Jesper Wohlert, Søren Hauberg
We present a framework for learning probability distributions on topologically non-trivial manifolds, utilizing normalizing flows.
4 code implementations • 16 Feb 2021 • Jakob D. Havtorn, Jes Frellsen, Søren Hauberg, Lars Maaløe
Deep generative models have been demonstrated as state-of-the-art density estimators.
1 code implementation • 28 Nov 2020 • Nicki Skafte Detlefsen, Søren Hauberg, Wouter Boomsma
How we choose to represent our data has a fundamental impact on our ability to subsequently extract information from them.
no code implementations • ICCV 2021 • Frederik Warburg, Martin Jørgensen, Javier Civera, Søren Hauberg
Uncertainty quantification in image retrieval is crucial for downstream decisions, yet it remains a challenging and largely unexplored problem.
no code implementations • 12 Aug 2020 • Martin Jørgensen, Søren Hauberg
This study investigates one such invariant: the causal relationship between X and Y is invariant to the marginal distributions of X and Y.
no code implementations • 2 Aug 2020 • Georgios Arvanitidis, Søren Hauberg, Bernhard Schölkopf
A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space.
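Under this immersion assumption, the Euclidean metric of the ambient space pulls back to a Riemannian metric M = JᵀJ on the latent space, where J is the generator's Jacobian. A sketch using a finite-difference Jacobian and a hypothetical generator function (the construction is standard in this line of work; the specific generator is made up for illustration):

```python
def pullback_metric(generator, z, eps=1e-5):
    """Pullback metric M = J^T J induced on latent space by `generator`.

    The Jacobian J is approximated with forward finite differences.
    """
    n = len(z)
    f0 = generator(z)
    m = len(f0)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        zp = list(z)
        zp[j] += eps
        fp = generator(zp)
        for i in range(m):
            J[i][j] = (fp[i] - f0[i]) / eps
    # M[j][k] = sum_i J[i][j] * J[i][k]
    return [[sum(J[i][j] * J[i][k] for i in range(m)) for k in range(n)]
            for j in range(n)]
```

For a linear generator z ↦ (2z₀, 3z₁) the pullback metric is the constant diagonal matrix diag(4, 9).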
no code implementations • 21 Jun 2020 • Martin Jørgensen, Søren Hauberg
We present a probabilistic model where the latent variable respects both the distances and the topology of the modeled data.
1 code implementation • 7 Apr 2020 • Pola Schwöbel, Frederik Warburg, Martin Jørgensen, Kristoffer H. Madsen, Søren Hauberg
Spatial Transformer Networks (STNs) estimate image transformations that can improve downstream tasks by 'zooming in' on relevant regions in an image.
no code implementations • ICML 2020 • Dimitris Kalatzis, David Eklund, Georgios Arvanitidis, Søren Hauberg
Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean.
no code implementations • 20 Aug 2019 • David Eklund, Søren Hauberg
Manifold learning seeks a low-dimensional representation that faithfully captures the essence of data.
1 code implementation • NeurIPS 2019 • Nicki Skafte Detlefsen, Søren Hauberg
Disentangled representation learning finds compact, independent and easy-to-interpret factors of the data.
2 code implementations • NeurIPS 2019 • Nicki S. Detlefsen, Martin Jørgensen, Søren Hauberg
We propose and investigate new complementary methodologies for estimating predictive variance networks in regression neural networks.
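A variance network is typically trained with the Gaussian negative log-likelihood, which penalizes both the squared error and the predicted variance. A minimal per-sample loss (the training objective is standard; the paper's contributions concern how the variance network is estimated, which this sketch does not cover):

```python
import math

def gaussian_nll(y, mean, var):
    """Per-sample Gaussian negative log-likelihood for a network that
    predicts both a mean and a predictive variance."""
    return 0.5 * (math.log(2 * math.pi * var) + (y - mean) ** 2 / var)
```

At a perfect mean prediction with unit variance, the loss reduces to the constant 0.5·log(2π).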
no code implementations • 22 Jan 2019 • Georgios Arvanitidis, Søren Hauberg, Philipp Hennig, Michael Schober
We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data.
no code implementations • 13 Sep 2018 • Tao Yang, Georgios Arvanitidis, Dongmei Fu, Xiaogang Li, Søren Hauberg
Deep generative models are tremendously successful in learning low-dimensional latent representations that well-describe the data.
no code implementations • 13 Jun 2018 • Søren Hauberg
We investigate learning of the differential geometric structure of a data manifold embedded in a high-dimensional Euclidean space.
no code implementations • 23 May 2018 • Anton Mallasto, Søren Hauberg, Aasa Feragen
Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an \emph{ambient} Euclidean space.
1 code implementation • ICLR 2018 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg
Deep generative models provide a systematic way to learn nonlinear data distributions, through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space.
no code implementations • 3 Feb 2017 • Rudrasis Chakraborty, Søren Hauberg, Baba C. Vemuri
In this paper, we present a geometric framework for computing the principal linear subspaces in both situations as well as for the robust PCA case, that amounts to computing the intrinsic average on the space of all subspaces: the Grassmann manifold.
no code implementations • NeurIPS 2016 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg
The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric.
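Concretely, the density depends on x only through the Mahalanobis distance, so any two points at the same distance from the mean receive the same density. A 2D sketch with the precision (inverse covariance) matrix passed explicitly:

```python
import math

def mahalanobis2(x, mean, prec):
    """Squared Mahalanobis distance (x-mean)^T P (x-mean), P = precision."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    return (dx[0] * (prec[0][0] * dx[0] + prec[0][1] * dx[1])
            + dx[1] * (prec[1][0] * dx[0] + prec[1][1] * dx[1]))

def normal_density(x, mean, prec, det_cov):
    """2D normal density; a monotonically decreasing function of the
    Mahalanobis distance, hence the ellipsoidal level sets."""
    d2 = mahalanobis2(x, mean, prec)
    return math.exp(-0.5 * d2) / (2 * math.pi * math.sqrt(det_cov))
```

With identity covariance, the points (1, 0) and (0, 1) are equidistant from the origin and therefore equally likely.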
no code implementations • 9 Oct 2015 • Søren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John W. Fisher III, Lars Kai Hansen
We then learn class-specific probabilistic generative models of the transformations in a Riemannian submanifold of the Lie group of diffeomorphisms.
no code implementations • 27 Nov 2014 • Alessandra Tosi, Søren Hauberg, Alfredo Vellido, Neil D. Lawrence
We investigate the geometrical structure of probabilistic generative dimensionality reduction models using the tools of Riemannian geometry.
no code implementations • CVPR 2015 • Aasa Feragen, Francois Lauze, Søren Hauberg
However, we show that for spaces with conditionally negative definite distances the geodesic Laplacian kernel can be generalized while retaining positive definiteness.
no code implementations • 3 Jun 2013 • Philipp Hennig, Søren Hauberg
We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution.
no code implementations • NeurIPS 2012 • Søren Hauberg, Oren Freifeld, Michael J. Black
We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics.