Search Results for author: Søren Hauberg

Found 51 papers, 20 papers with code

Decoder ensembling for learned latent geometries

1 code implementation • 14 Aug 2024 • Stas Syrota, Pablo Moreno-Muñoz, Søren Hauberg

Latent space geometry provides a rigorous and empirically valuable framework for interacting with the latent variables of deep generative models.
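The central object in this line of work is the metric that the decoder pulls back into latent space, G(z) = J(z)ᵀJ(z), where J is the decoder Jacobian. A minimal sketch, using a hypothetical toy decoder and a finite-difference Jacobian in place of the paper's ensembled decoders:

```python
import numpy as np

def decoder(z):
    # hypothetical toy decoder: 2-D latent code -> 3-D data point (a paraboloid)
    return np.array([z[0], z[1], z[0] ** 2 + z[1] ** 2])

def pullback_metric(f, z, eps=1e-6):
    # latent-space Riemannian metric G(z) = J(z)^T J(z), with the decoder
    # Jacobian J approximated column-by-column via central differences
    d = len(z)
    cols = [(f(z + eps * np.eye(d)[i]) - f(z - eps * np.eye(d)[i])) / (2 * eps)
            for i in range(d)]
    J = np.stack(cols, axis=1)
    return J.T @ J

G = pullback_metric(decoder, np.array([0.5, -0.3]))  # 2x2 symmetric PD matrix
```

Lengths and angles of latent curves measured under G then reflect distances in data space rather than in the raw latent coordinates.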

Decoder

Reparameterization invariance in approximate Bayesian inference

no code implementations • 5 Jun 2024 • Hrittik Roy, Marco Miani, Carl Henrik Ek, Philipp Hennig, Marvin Pförtner, Lukas Tatzel, Søren Hauberg

Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e., BNNs assign different posterior densities to different parametrizations of identical functions.
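The failure is easy to reproduce: a ReLU network is invariant to rescaling one layer by a > 0 and the next by 1/a, yet a Gaussian density over the weights is not. A small numerical illustration (the one-unit network and isotropic Gaussian are hypothetical choices, not the paper's setup):

```python
import numpy as np

def net(x, w1, w2):
    # toy one-hidden-unit ReLU network (hypothetical)
    return w2 * np.maximum(0.0, w1 * x)

w1, w2, a = 1.2, -0.7, 3.0
x = np.linspace(-2.0, 2.0, 5)

# rescaling the first weight by a and the second by 1/a leaves the
# network function unchanged...
same = np.allclose(net(x, w1, w2), net(x, a * w1, w2 / a))

# ...but an isotropic Gaussian log-density over the weights disagrees
log_p = lambda u, v: -0.5 * (u ** 2 + v ** 2)
different = not np.isclose(log_p(w1, w2), log_p(a * w1, w2 / a))
```

Here `same` is True and `different` is True: two parametrizations of one function receive different posterior densities, which is the invariance failure the abstract describes.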

Bayesian Inference

Gradients of Functions of Large Matrices

1 code implementation • 27 May 2024 • Nicholas Krämer, Pablo Moreno-Muñoz, Hrittik Roy, Søren Hauberg

Tuning scientific and probabilistic machine learning models -- for example, partial differential equations, Gaussian processes, or Bayesian neural networks -- often relies on evaluating functions of matrices whose size grows with the data set or the number of parameters.
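A standard instance is the log-determinant of a kernel matrix and its hyperparameter gradient via the trace identity d log|K|/dθ = tr(K⁻¹ dK/dθ). A dense-matrix sketch of that identity (the paper targets the regime where such O(n³) evaluations become infeasible; the RBF kernel, lengthscale, and noise level here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))

def kernel(lengthscale):
    # RBF kernel Gram matrix plus a small noise term on the diagonal
    sq = np.sum((X[:, None] - X[None]) ** 2, axis=-1)
    return np.exp(-0.5 * sq / lengthscale ** 2) + 1e-2 * np.eye(len(X))

def logdet(lengthscale):
    # log|K| via Cholesky: 2 * sum(log diag(L))
    L = np.linalg.cholesky(kernel(lengthscale))
    return 2.0 * np.sum(np.log(np.diag(L)))

theta, eps = 1.5, 1e-5
# trace identity: d log|K| / d theta = tr(K^{-1} dK/dtheta),
# with dK/dtheta obtained here by central differences for brevity
dK = (kernel(theta + eps) - kernel(theta - eps)) / (2 * eps)
grad = np.trace(np.linalg.solve(kernel(theta), dK))

# finite-difference check of the same gradient
fd = (logdet(theta + eps) - logdet(theta - eps)) / (2 * eps)
```

Both routes agree; the cost of the dense factorization and solve is what grows cubically with the data set or parameter count.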

Gaussian Processes

A Continuous Relaxation for Discrete Bayesian Optimization

1 code implementation • 26 Apr 2024 • Richard Michael, Simon Bartels, Miguel González-Duque, Yevgen Zainchkovskyy, Jes Frellsen, Søren Hauberg, Wouter Boomsma

Optimizing efficiently over discrete data with only a few available target observations is a challenge in Bayesian optimization.

Bayesian Optimization

Improving Adversarial Energy-Based Model via Diffusion Process

no code implementations • 4 Mar 2024 • Cong Geng, Tian Han, Peng-Tao Jiang, Hao Zhang, Jinwei Chen, Søren Hauberg, Bo Li

Generative models have shown strong generation ability, while efficient likelihood estimation remains less explored.

Denoising, Density Estimation

Neural Contractive Dynamical Systems

no code implementations • 17 Jan 2024 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Nadia Figueroa, Gerhard Neumann, Leonel Rozo

Stability guarantees are crucial when ensuring a fully autonomous robot does not take undesirable or potentially harmful actions.

Learning to Taste: A Multimodal Wine Dataset

1 code implementation • NeurIPS 2023 • Thoranna Bender, Simon Moe Sørensen, Alireza Kashani, K. Eldjarn Hjorleifsson, Grethe Hyldig, Søren Hauberg, Serge Belongie, Frederik Warburg

We demonstrate that this shared concept embedding space improves upon separate embedding spaces for coarse flavor classification (alcohol percentage, country, grape, price, rating) and aligns with the intricate human perception of flavor.

Laplacian Segmentation Networks Improve Epistemic Uncertainty Quantification

1 code implementation • 23 Mar 2023 • Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Frederik Warburg, Aasa Feragen

Image segmentation relies heavily on neural networks which are known to be overconfident, especially when making predictions on out-of-distribution (OOD) images.

Image Segmentation, Segmentation +2

Identifying latent distances with Finslerian geometry

1 code implementation • 20 Dec 2022 • Alison Pouplin, David Eklund, Carl Henrik Ek, Søren Hauberg

Generative models are often stochastic, causing the data space, the Riemannian metric, and the geodesics to be stochastic as well.

Navigate

Revisiting Active Sets for Gaussian Process Decoders

1 code implementation • 10 Sep 2022 • Pablo Moreno-Muñoz, Cilie W Feldager, Søren Hauberg

Decoders built on Gaussian processes (GPs) are enticing due to the marginalisation over the non-linear function space.

Decoder, Gaussian Processes +1

Laplacian Autoencoders for Learning Stochastic Representations

1 code implementation • 30 Jun 2022 • Marco Miani, Frederik Warburg, Pablo Moreno-Muñoz, Nicki Skafte Detlefsen, Søren Hauberg

In this work, we present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower-bound of the autoencoder evidence.

Bayesian Inference, Out-of-Distribution Detection +1

Is an encoder within reach?

1 code implementation • 3 Jun 2022 • Helene Hauschultz, Rasmus Berg Palm, Pablo Moreno-Muñoz, Nicki Skafte Detlefsen, Andrew Allan du Plessis, Søren Hauberg

The encoder network of an autoencoder is an approximation of the nearest point projection onto the manifold spanned by the decoder.
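On a toy one-dimensional manifold this nearest-point projection can be computed exactly, which makes the claim concrete. A sketch with a hypothetical circle-valued decoder and a brute-force projection standing in for the learned encoder:

```python
import numpy as np

def decoder(z):
    # toy decoder whose image is the unit circle in R^2 (hypothetical)
    return np.array([np.cos(z), np.sin(z)])

def project(x, grid=np.linspace(-np.pi, np.pi, 10001)):
    # nearest-point projection onto the decoded manifold:
    # argmin_z || decoder(z) - x ||, here by brute-force grid search
    dists = [np.linalg.norm(decoder(z) - x) for z in grid]
    return grid[int(np.argmin(dists))]

x = np.array([2.0, 2.0])
z_star = project(x)  # for the circle this equals atan2(x[1], x[0])
```

A trained encoder approximates this `project` map with a single forward pass; the paper's question is when such an approximation is attainable.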

Decoder

Visualizing Riemannian data with Rie-SNE

no code implementations • 17 Mar 2022 • Andri Bergsson, Søren Hauberg

Faithful visualizations of data residing on manifolds must take the underlying geometry into account when producing a flat planar view of the data.

Reactive Motion Generation on Learned Riemannian Manifolds

no code implementations • 15 Mar 2022 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo

We argue that Riemannian manifolds may be learned via human demonstrations in which geodesics are natural motion skills.

Motion Generation

Model-agnostic out-of-distribution detection using combined statistical tests

no code implementations • 2 Mar 2022 • Federico Bergamin, Pierre-Alexandre Mattei, Jakob D. Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Søren Hauberg, Jes Frellsen

These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model.

Out-of-Distribution Detection

Adaptive Cholesky Gaussian Processes

2 code implementations • 22 Feb 2022 • Simon Bartels, Kristoffer Stensbo-Smidt, Pablo Moreno-Muñoz, Wouter Boomsma, Jes Frellsen, Søren Hauberg

We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data.
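The baseline idea, without the paper's adaptive criterion for deciding how much data to consume, can be sketched as exact GP regression on a random subset (the kernel, noise level, and subset size below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(2000, 1))           # "large" dataset
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)

idx = rng.choice(len(X), size=200, replace=False)  # random data subset
Xs, ys = X[idx], y[idx]

def rbf(A, B):
    # squared-exponential kernel with unit lengthscale
    return np.exp(-0.5 * (A[:, None, 0] - B[None, :, 0]) ** 2)

# exact GP regression on the subset only: O(m^3) instead of O(n^3)
K = rbf(Xs, Xs) + 0.1 ** 2 * np.eye(len(Xs))
alpha = np.linalg.solve(K, ys)

Xq = np.array([[0.0], [1.5]])
mean = rbf(Xq, Xs) @ alpha                       # posterior predictive mean
```

The subset predictions already track the underlying function closely here; the paper's contribution is deciding, during the Cholesky factorization itself, when the subset is large enough.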

Gaussian Processes

Benchmarking Generative Latent Variable Models for Speech

1 code implementation • 22 Feb 2022 • Jakob D. Havtorn, Lasse Borgholt, Søren Hauberg, Jes Frellsen, Lars Maaløe

Stochastic latent variable models (LVMs) achieve state-of-the-art performance on natural image generation but are still inferior to deterministic models on speech.

Benchmarking, Image Generation +1

Robust uncertainty estimates with out-of-distribution pseudo-inputs training

no code implementations • NeurIPS 2021 • Pierre Segonne, Yevgen Zainchkovskyy, Søren Hauberg

As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space, and show how to leverage these in a practical Bayesian framework that casts a prior distribution over the model uncertainty.

Bounds all around: training energy-based models with bidirectional bounds

no code implementations • NeurIPS 2021 • Cong Geng, Jia Wang, Zhiyong Gao, Jes Frellsen, Søren Hauberg

Energy-based models (EBMs) provide an elegant framework for density estimation, but they are notoriously difficult to train.

Density Estimation

Towards Generative Latent Variable Models for Speech

no code implementations • 29 Sep 2021 • Jakob Drachmann Havtorn, Lasse Borgholt, Jes Frellsen, Søren Hauberg, Lars Maaløe

While stochastic latent variable models (LVMs) now achieve state-of-the-art performance on natural image generation, they are still inferior to deterministic models on speech.

Image Generation, Video Generation

Pulling back information geometry

1 code implementation • 9 Jun 2021 • Georgios Arvanitidis, Miguel González-Duque, Alison Pouplin, Dimitris Kalatzis, Søren Hauberg

Latent space geometry has shown itself to provide a rich and rigorous framework for interacting with the latent variables of deep generative models.

Decoder

Learning Riemannian Manifolds for Geodesic Motion Skills

no code implementations • 8 Jun 2021 • Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann, Leonel Rozo

For robots to work alongside humans and perform in unstructured environments, they must learn new motion skills and adapt them to unseen situations on the fly.

Density estimation on smooth manifolds with normalizing flows

no code implementations • 7 Jun 2021 • Dimitris Kalatzis, Johan Ziruo Ye, Alison Pouplin, Jesper Wohlert, Søren Hauberg

We present a framework for learning probability distributions on topologically non-trivial manifolds, utilizing normalizing flows.

Density Estimation

What is a meaningful representation of protein sequences?

1 code implementation • 28 Nov 2020 • Nicki Skafte Detlefsen, Søren Hauberg, Wouter Boomsma

How we choose to represent our data has a fundamental impact on our ability to subsequently extract information from them.

BIG-bench Machine Learning, Transfer Learning

Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval

no code implementations • ICCV 2021 • Frederik Warburg, Martin Jørgensen, Javier Civera, Søren Hauberg

Uncertainty quantification in image retrieval is crucial for downstream decisions, yet it remains a challenging and largely unexplored problem.

Computational Efficiency, Image Retrieval +3

Reparametrization Invariance in non-parametric Causal Discovery

no code implementations • 12 Aug 2020 • Martin Jørgensen, Søren Hauberg

This study investigates one such invariant: the causal relationship between X and Y is invariant to the marginal distributions of X and Y.

Causal Discovery

Geometrically Enriched Latent Spaces

no code implementations • 2 Aug 2020 • Georgios Arvanitidis, Søren Hauberg, Bernhard Schölkopf

A common assumption in generative models is that the generator immerses the latent space into a Euclidean ambient space.

Isometric Gaussian Process Latent Variable Model for Dissimilarity Data

no code implementations • 21 Jun 2020 • Martin Jørgensen, Søren Hauberg

We present a probabilistic model where the latent variable respects both the distances and the topology of the modeled data.

Variational Inference

Probabilistic Spatial Transformer Networks

1 code implementation • 7 Apr 2020 • Pola Schwöbel, Frederik Warburg, Martin Jørgensen, Kristoffer H. Madsen, Søren Hauberg

Spatial Transformer Networks (STNs) estimate image transformations that can improve downstream tasks by 'zooming in' on relevant regions in an image.

Data Augmentation, Time Series +2

Variational Autoencoders with Riemannian Brownian Motion Priors

no code implementations • ICML 2020 • Dimitris Kalatzis, David Eklund, Georgios Arvanitidis, Søren Hauberg

Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean.

Expected path length on random manifolds

no code implementations • 20 Aug 2019 • David Eklund, Søren Hauberg

Manifold learning seeks a low dimensional representation that faithfully captures the essence of data.

Representation Learning

Reliable training and estimation of variance networks

2 code implementations • NeurIPS 2019 • Nicki S. Detlefsen, Martin Jørgensen, Søren Hauberg

We propose and investigate new complementary methodologies for estimating predictive variance networks in regression neural networks.

Active Learning, Gaussian Processes +1

Fast and Robust Shortest Paths on Manifolds Learned from Data

no code implementations • 22 Jan 2019 • Georgios Arvanitidis, Søren Hauberg, Philipp Hennig, Michael Schober

We propose a fast, simple and robust algorithm for computing shortest paths and distances on Riemannian manifolds learned from data.

Metric Learning

Geodesic Clustering in Deep Generative Models

no code implementations • 13 Sep 2018 • Tao Yang, Georgios Arvanitidis, Dongmei Fu, Xiaogang Li, Søren Hauberg

Deep generative models are tremendously successful in learning low-dimensional latent representations that well-describe the data.

Clustering

Only Bayes should learn a manifold (on the estimation of differential geometric structure from data)

no code implementations • 13 Jun 2018 • Søren Hauberg

We investigate learning of the differential geometric structure of a data manifold embedded in a high-dimensional Euclidean space.

Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models

no code implementations • 23 May 2018 • Anton Mallasto, Søren Hauberg, Aasa Feragen

Latent variable models (LVMs) learn probabilistic models of data manifolds lying in an \emph{ambient} Euclidean space.

Uncertainty Quantification

Latent Space Oddity: on the Curvature of Deep Generative Models

1 code implementation • ICLR 2018 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg

Deep generative models provide a systematic way to learn nonlinear data distributions, through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space.

Clustering

Intrinsic Grassmann Averages for Online Linear, Robust and Nonlinear Subspace Learning

no code implementations • 3 Feb 2017 • Rudrasis Chakraborty, Søren Hauberg, Baba C. Vemuri

In this paper, we present a geometric framework for computing the principal linear subspaces in both situations as well as for the robust PCA case, that amounts to computing the intrinsic average on the space of all subspaces: the Grassmann manifold.

Dimensionality Reduction

A Locally Adaptive Normal Distribution

no code implementations • NeurIPS 2016 • Georgios Arvanitidis, Lars Kai Hansen, Søren Hauberg

The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric.
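The statement can be checked directly: two points at equal Mahalanobis distance from the mean have equal density even when their Euclidean distances to the mean differ. A short numerical check (the specific mean and covariance are arbitrary choices):

```python
import numpy as np

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])

def density(x):
    # N(x; mu, Sigma) depends on x only through the Mahalanobis distance
    diff = x - mu
    maha = diff @ np.linalg.solve(Sigma, diff)
    norm = np.sqrt((2 * np.pi) ** len(mu) * np.linalg.det(Sigma))
    return np.exp(-0.5 * maha) / norm

# two points at Mahalanobis distance 1 from the mean
L = np.linalg.cholesky(Sigma)
x1 = mu + L @ np.array([1.0, 0.0])
x2 = mu + L @ np.array([0.0, 1.0])

p1, p2 = density(x1), density(x2)  # equal densities, unequal Euclidean radii
```

Replacing this Euclidean/Mahalanobis metric with a locally adaptive one is what changes the level sets from ellipsoids to shapes that follow the data.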

EEG

Dreaming More Data: Class-dependent Distributions over Diffeomorphisms for Learned Data Augmentation

no code implementations • 9 Oct 2015 • Søren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John W. Fisher III, Lars Kai Hansen

We then learn class-specific probabilistic generative models of the transformations in a Riemannian submanifold of the Lie group of diffeomorphisms.

Data Augmentation, Feature Engineering

Metrics for Probabilistic Geometries

no code implementations • 27 Nov 2014 • Alessandra Tosi, Søren Hauberg, Alfredo Vellido, Neil D. Lawrence

We investigate the geometrical structure of probabilistic generative dimensionality reduction models using the tools of Riemannian geometry.

Dimensionality Reduction

Geodesic Exponential Kernels: When Curvature and Linearity Conflict

no code implementations • CVPR 2015 • Aasa Feragen, Francois Lauze, Søren Hauberg

However, we show that for spaces with conditionally negative definite distances the geodesic Laplacian kernel can be generalized while retaining positive definiteness.
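For the Euclidean distance, which is conditionally negative definite, the geodesic Laplacian kernel k(x, y) = exp(-λ d(x, y)) yields a positive semi-definite Gram matrix for every λ > 0. A quick numerical check (the random points and λ are arbitrary):

```python
import numpy as np

# Laplacian kernel on a CND distance: here d is the Euclidean distance,
# which is conditionally negative definite, so K should be PSD.
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 3))
D = np.linalg.norm(X[:, None] - X[None], axis=-1)  # pairwise distances
K = np.exp(-0.7 * D)

eigs = np.linalg.eigvalsh(K)  # all eigenvalues (numerically) non-negative
```

The contrast drawn in the paper is that the geodesic *Gaussian* kernel, exp(-λ d²), generally loses positive definiteness on curved spaces.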

Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics

no code implementations • 3 Jun 2013 • Philipp Hennig, Søren Hauberg

We study a probabilistic numerical method for the solution of both boundary and initial value problems that returns a joint Gaussian process posterior over the solution.

A Geometric take on Metric Learning

no code implementations • NeurIPS 2012 • Søren Hauberg, Oren Freifeld, Michael J. Black

We then show that this structure gives us a principled way to perform dimensionality reduction and regression according to the learned metrics.

Dimensionality Reduction, Metric Learning +1
