1 code implementation • 25 Nov 2024 • Maxim Khomiakov, Michael Riis Andersen, Jes Frellsen
In remote sensing, there is a common need to learn scale-invariant shapes of objects such as buildings.
no code implementations • 10 Nov 2024 • Raul Ortega Ochoa, Tejs Vegge, Jes Frellsen
Enforcing chemical rules in the stories guarantees the chemical validity of the generated molecules, the discrete sequential steps of a molecular story make the process transparent and improve interpretability, and the autoregressive nature of the approach allows the size of the molecule to be a decision of the model.
no code implementations • 4 Oct 2024 • Johannes Kruse, Kasper Lindskow, Saikishore Kalloori, Marco Polignano, Claudio Pomo, Abhishek Srivastava, Anshuk Uppal, Michael Riis Andersen, Jes Frellsen
Personalized content recommendations have been pivotal to the content experience in digital media from video streaming to social networks.
no code implementations • 30 Sep 2024 • Johannes Kruse, Kasper Lindskow, Saikishore Kalloori, Marco Polignano, Claudio Pomo, Abhishek Srivastava, Anshuk Uppal, Michael Riis Andersen, Jes Frellsen
Additionally, the challenge embraces normative complexities, investigating the effects of recommender systems on news flow and their alignment with editorial values.
no code implementations • 22 Aug 2024 • Paul Jeha, Will Grathwohl, Michael Riis Andersen, Carl Henrik Ek, Jes Frellsen
Score-based models, trained with denoising score matching, are remarkably effective at generating high-dimensional data.
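As a minimal illustration of the training objective mentioned here (a sketch, not the paper's method; `score_net` and the noise scale `sigma` are assumed placeholders), denoising score matching perturbs data with Gaussian noise and regresses a network onto the score of the perturbed conditional:

```python
import torch

def dsm_loss(score_net, x, sigma=0.1):
    """Denoising score matching: perturb x with Gaussian noise and train
    score_net to predict the score of q(x_tilde | x) = N(x, sigma^2 I),
    which equals -(x_tilde - x) / sigma^2."""
    noise = torch.randn_like(x) * sigma
    x_tilde = x + noise
    target = -noise / sigma**2          # score of the Gaussian perturbation kernel
    pred = score_net(x_tilde)
    return 0.5 * ((pred - target) ** 2).sum(dim=-1).mean()
```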
no code implementations • 23 Jul 2024 • Kilian Zepf, Jes Frellsen, Aasa Feragen
We address the selection and evaluation of uncertain segmentation methods in medical imaging and present two case studies: prostate segmentation, illustrating that, for minimal annotator variation, simple deterministic models can suffice; and lung lesion segmentation, highlighting the limitations of the Generalized Energy Distance (GED) in model selection.
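For reference, the Generalized Energy Distance compares model samples $S, S'$ with annotator segmentations $Y, Y'$ under a distance $d$ (commonly $1 - \mathrm{IoU}$):

$$D^2_{\mathrm{GED}} = 2\,\mathbb{E}[d(S, Y)] - \mathbb{E}[d(S, S')] - \mathbb{E}[d(Y, Y')].$$

The $-\mathbb{E}[d(S, S')]$ term rewards sample diversity regardless of sample quality, which is one source of the limitations discussed.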
no code implementations • 19 Jun 2024 • Yarden Cohen, Alexandre Khae Wu Navarro, Jes Frellsen, Richard E. Turner, Raziel Riemer, Ari Pakman
The need for regression models to predict circular values arises in many scientific fields.
1 code implementation • 7 May 2024 • Berian James, Stefan Pollok, Ignacio Peis, Jes Frellsen, Rasmus Bjørk
We present a generative model that amortises computation for the field around, e.g., gravitational or magnetic sources.
1 code implementation • 26 Apr 2024 • Richard Michael, Simon Bartels, Miguel González-Duque, Yevgen Zainchkovskyy, Jes Frellsen, Søren Hauberg, Wouter Boomsma
Optimizing efficiently over discrete data with only a few available target observations is a challenge in Bayesian optimization.
no code implementations • 8 Apr 2023 • Maxim Khomiakov, Michael Riis Andersen, Jes Frellsen
In geospatial planning, it is often essential to represent objects in a vectorized format, as this format easily translates to downstream tasks such as web development, graphics, or design.
no code implementations • 28 Mar 2023 • Kilian Zepf, Eike Petersen, Jes Frellsen, Aasa Feragen
Segmentation uncertainty models predict a distribution over plausible segmentations for a given input, which they learn from the annotator variation in the training set.
1 code implementation • 23 Mar 2023 • Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Frederik Warburg, Aasa Feragen
Image segmentation relies heavily on neural networks which are known to be overconfident, especially when making predictions on out-of-distribution (OOD) images.
no code implementations • 20 Mar 2023 • Maxim Khomiakov, Alejandro Valverde Mahou, Alba Reinders Sánchez, Jes Frellsen, Michael Riis Andersen
We present a novel pipeline for learning the conditional distribution of a building roof mesh given pixels from an aerial image, under the assumption that roof geometry follows a set of regular patterns.
no code implementations • 27 Feb 2023 • Marloes Arts, Jes Frellsen, Wouter Boomsma
After the recent ground-breaking advances in protein structure prediction, one of the remaining challenges in protein machine learning is to reliably predict distributions of structural states.
no code implementations • 6 Dec 2022 • Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei
The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem.
no code implementations • 2 Dec 2022 • Maxim Khomiakov, Julius Holbech Radzikowski, Carl Anton Schmidt, Mathias Bonde Sørensen, Mads Andersen, Michael Riis Andersen, Jes Frellsen
The body of research on classification of solar panel arrays from aerial imagery is growing, yet public benchmark datasets remain scarce.
1 code implementation • 20 Oct 2022 • Dennis Ulmer, Jes Frellsen, Christian Hardmeier
We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages.
1 code implementation • 14 Apr 2022 • Dennis Ulmer, Christian Hardmeier, Jes Frellsen
Much Machine Learning (ML) and Deep Learning (DL) research is empirical in nature.
no code implementations • 2 Mar 2022 • Federico Bergamin, Pierre-Alexandre Mattei, Jakob D. Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Søren Hauberg, Jes Frellsen
These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model.
2 code implementations • 22 Feb 2022 • Simon Bartels, Kristoffer Stensbo-Smidt, Pablo Moreno-Muñoz, Wouter Boomsma, Jes Frellsen, Søren Hauberg
We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data.
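A naive subset-of-data baseline conveys the idea (a sketch only; the paper's contribution is a principled way to choose the subset, which this random selection does not show, and `kernel` is an assumed covariance-function callable):

```python
import numpy as np

def sod_gp_predict(X, y, X_star, kernel, m=500, noise=1e-2, rng=None):
    """Subset-of-data GP regression: fit an exact GP on m randomly
    chosen points instead of the full dataset."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    Xs, ys = X[idx], y[idx]
    K = kernel(Xs, Xs) + noise * np.eye(len(Xs))       # subset Gram matrix
    K_star = kernel(X_star, Xs)                        # test-subset covariances
    mean = K_star @ np.linalg.solve(K, ys)
    cov = kernel(X_star, X_star) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov
```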
1 code implementation • 22 Feb 2022 • Jakob D. Havtorn, Lasse Borgholt, Søren Hauberg, Jes Frellsen, Lars Maaløe
Stochastic latent variable models (LVMs) achieve state-of-the-art performance on natural image generation but are still inferior to deterministic models on speech.
no code implementations • 26 Jan 2022 • Pierre-Alexandre Mattei, Jes Frellsen
Inspired by this simple monotonicity theorem, we present a series of non-asymptotic results that link properties of Monte Carlo estimates to the tightness of Monte Carlo objectives (MCOs).
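In the importance-weighted autoencoder case, the monotonicity theorem referred to is the standard ordering of the bounds in the number of samples $K$:

$$\mathcal{L}_K = \mathbb{E}_{z_{1:K} \sim q(\cdot \mid x)}\!\left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p(x, z_k)}{q(z_k \mid x)}\right], \qquad \mathcal{L}_1 \le \mathcal{L}_2 \le \cdots \le \log p(x).$$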
no code implementations • NeurIPS 2021 • Cong Geng, Jia Wang, Zhiyong Gao, Jes Frellsen, Søren Hauberg
Energy-based models (EBMs) provide an elegant framework for density estimation, but they are notoriously difficult to train.
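Concretely, an EBM defines a density only up to an intractable normaliser, and handling $Z_\theta$ is what makes training difficult:

$$p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z_\theta}, \qquad Z_\theta = \int \exp(-E_\theta(x))\,\mathrm{d}x.$$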
no code implementations • 6 Oct 2021 • Dennis Ulmer, Christian Hardmeier, Jes Frellsen
Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov Chain sampling, ensembling, or Monte Carlo dropout.
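One of the approaches mentioned, Monte Carlo dropout, is easy to sketch (illustrative code; `model` is any classifier containing dropout layers):

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: keep dropout active at test time and average
    softmax outputs over stochastic forward passes; the spread across
    samples serves as a proxy for predictive uncertainty.
    Caveat: .train() also flips batch-norm layers to training mode."""
    model.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)
```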
no code implementations • ICLR 2022 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen
To address supervised deep learning with missing values, we propose to marginalize over missing values in a joint model of covariates and outcomes.
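In one equation, writing $x = (x_{\mathrm{obs}}, x_{\mathrm{mis}})$, the idea is to work with the observed-data likelihood of the joint model:

$$p(y, x_{\mathrm{obs}}) = \int p(y \mid x_{\mathrm{obs}}, x_{\mathrm{mis}})\, p(x_{\mathrm{obs}}, x_{\mathrm{mis}})\, \mathrm{d}x_{\mathrm{mis}}.$$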
no code implementations • 29 Sep 2021 • Jakob Drachmann Havtorn, Lasse Borgholt, Jes Frellsen, Søren Hauberg, Lars Maaløe
While stochastic latent variable models (LVMs) now achieve state-of-the-art performance on natural image generation, they are still inferior to deterministic models on speech.
4 code implementations • 16 Feb 2021 • Jakob D. Havtorn, Jes Frellsen, Søren Hauberg, Lars Maaløe
Deep generative models have been demonstrated to be state-of-the-art density estimators.
1 code implementation • 12 Feb 2021 • Samuel Wiqvist, Jes Frellsen, Umberto Picchini
We introduce the sequential neural posterior and likelihood approximation (SNPLA) algorithm.
1 code implementation • ICLR 2021 • Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen
When a missing process depends on the missing values themselves, it needs to be explicitly modelled and taken into account when performing likelihood-based inference.
1 code implementation • 10 Feb 2019 • Anton Mallasto, Jes Frellsen, Wouter Boomsma, Aasa Feragen
We contribute to the WGAN literature by introducing the family of $(q, p)$-Wasserstein GANs, which allow the use of more general $p$-Wasserstein metrics for $p\geq 1$ in the GAN learning procedure.
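For reference, the $p$-Wasserstein distance under the $q$-norm ground metric is

$$W_p(\mu, \nu) = \Big( \inf_{\gamma \in \Gamma(\mu, \nu)} \mathbb{E}_{(x, y) \sim \gamma}\big[\lVert x - y \rVert_q^p\big] \Big)^{1/p},$$

where $\Gamma(\mu, \nu)$ is the set of couplings of $\mu$ and $\nu$; the usual WGAN uses $p = 1$, for which the Kantorovich-Rubinstein duality applies.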
1 code implementation • 29 Jan 2019 • Samuel Wiqvist, Pierre-Alexandre Mattei, Umberto Picchini, Jes Frellsen
We present a novel family of deep neural architectures, named partially exchangeable networks (PENs), that leverage probabilistic symmetries.
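As a hedged illustration of the simpler, fully exchangeable special case (PENs themselves target the weaker, Markov-type exchangeability, which this sum-pooling sketch does not capture):

```python
import torch
import torch.nn as nn

class ExchangeableNet(nn.Module):
    """Sum-pooling network invariant to permutations of its inputs,
    i.e. the fully exchangeable special case of the symmetries PENs exploit."""
    def __init__(self, d_in, d_hidden=64, d_out=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_hidden))
        self.rho = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_out))

    def forward(self, x):                 # x: (batch, set_size, d_in)
        return self.rho(self.phi(x).sum(dim=1))   # pool over the set dimension
```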
no code implementations • 6 Dec 2018 • Pierre-Alexandre Mattei, Jes Frellsen
Our approach, called MIWAE, is based on the importance-weighted autoencoder (IWAE), and maximises a potentially tight lower bound of the log-likelihood of the observed data.
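A minimal sketch of the kind of bound MIWAE maximises (illustrative only; it assumes `encoder`, `decoder`, and `prior` return `torch.distributions` objects and that the decoder likelihood is evaluated only on observed entries):

```python
import math
import torch

def iwae_bound(x_obs, encoder, decoder, prior, K=50):
    """Importance-weighted lower bound on log p(x_obs):
    logsumexp over K samples from q(z | x_obs), minus log K."""
    q = encoder(x_obs)                      # q(z | x_obs)
    z = q.rsample((K,))                     # (K, batch, d_z)
    log_w = (decoder(z).log_prob(x_obs)     # log p(x_obs | z)
             + prior.log_prob(z)            # log p(z)
             - q.log_prob(z))               # log q(z | x_obs)
    return (torch.logsumexp(log_w, dim=0) - math.log(K)).mean()
```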
no code implementations • NeurIPS 2018 • Pierre-Alexandre Mattei, Jes Frellsen
Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model.
no code implementations • NeurIPS 2017 • Wouter Boomsma, Jes Frellsen
We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting.
1 code implementation • 13 Jul 2017 • Thomas Brouwer, Jes Frellsen, Pietro Liò
In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values and for finding patterns in data.
1 code implementation • 26 Oct 2016 • Thomas Brouwer, Jes Frellsen, Pietro Liò
We present a fast variational Bayesian algorithm for performing non-negative matrix factorisation and tri-factorisation.
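For orientation, the point-estimate counterpart of this factorisation is classical multiplicative-update NMF (Lee & Seung); the sketch below shows that baseline, not the paper's variational Bayesian algorithm:

```python
import numpy as np

def nmf_multiplicative(R, K=10, n_iter=200, eps=1e-9, rng=None):
    """Multiplicative-update NMF minimising ||R - U @ V||_F^2,
    with non-negativity preserved by the ratio updates."""
    rng = np.random.default_rng(rng)
    n, m = R.shape
    U = rng.random((n, K))
    V = rng.random((K, m))
    for _ in range(n_iter):
        V *= (U.T @ R) / (U.T @ U @ V + eps)   # update V with U fixed
        U *= (R @ V.T) / (U @ V @ V.T + eps)   # update U with V fixed
    return U, V
```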
no code implementations • 16 Feb 2016 • Alexandre K. W. Navarro, Jes Frellsen, Richard E. Turner
First, we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution.
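For reference, the univariate Generalised von Mises density (order 2) that the mGvM extends to many dimensions is

$$p(\theta \mid \mu_1, \mu_2, \kappa_1, \kappa_2) \propto \exp\!\big(\kappa_1 \cos(\theta - \mu_1) + \kappa_2 \cos 2(\theta - \mu_2)\big), \qquad \theta \in [0, 2\pi).$$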