no code implementations • 2 Nov 2024 • Rafał Karczewski, Markus Heinonen, Vikas Garg
We investigate what kind of images lie in the high-density regions of diffusion models.
no code implementations • 15 Oct 2024 • Severi Rissanen, Markus Heinonen, Arno Solin
The covariance for clean data given a noisy observation is an important quantity in many conditional generation methods for diffusion models.
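For intuition, this posterior covariance can be checked in closed form in a 1-D conjugate-Gaussian toy model (a made-up setup for illustration, not the paper's estimator), using the second-order Tweedie identity Cov[x0 | x_t] = σ²(1 + σ² ∂² log p(x_t)):

```python
import numpy as np

# Toy setup (assumption): clean data x0 ~ N(0, s^2), noisy
# observation x_t = x0 + sigma * eps with eps ~ N(0, 1).
s2, sigma2 = 4.0, 1.0          # prior and noise variances
marg = s2 + sigma2             # marginal variance of x_t

# Hessian of the (Gaussian) marginal log-density of x_t.
score_hess = -1.0 / marg       # d^2/dx^2 log p(x_t)

# Second-order Tweedie formula:
#   Cov[x0 | x_t] = sigma^2 * (1 + sigma^2 * d^2 log p / dx^2)
cov_tweedie = sigma2 * (1.0 + sigma2 * score_hess)

# Exact Bayesian posterior variance for conjugate Gaussians.
cov_exact = 1.0 / (1.0 / s2 + 1.0 / sigma2)

print(cov_tweedie, cov_exact)  # both 0.8
assert abs(cov_tweedie - cov_exact) < 1e-12
```

In the Gaussian case both routes give the same number, which is what makes score-Hessian-based covariance estimates plausible in the general case.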
no code implementations • 12 Aug 2024 • Rafał Karczewski, Samuel Kaski, Markus Heinonen, Vikas Garg
Several generative models with elaborate training and sampling procedures have been proposed recently to accelerate structure-based drug design (SBDD); however, perplexingly, their empirical performance turns out to be suboptimal.
1 code implementation • 24 Jun 2024 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski
Deep neural networks (DNNs) excel on clean images but struggle with corrupted ones.
no code implementations • 3 Jun 2024 • Markus Heinonen, Ba-Hien Tran, Michael Kampffmeyer, Maurizio Filippone
Introducing training-time augmentations is a key technique to enhance generalization and prepare deep neural networks against test-time corruptions.
no code implementations • 28 May 2024 • Severi Rissanen, Markus Heinonen, Arno Solin
In the domains of image and audio, diffusion models have shown impressive performance.
no code implementations • 27 May 2024 • Najwa Laabid, Severi Rissanen, Markus Heinonen, Arno Solin, Vikas Garg
Retrosynthesis, the task of identifying precursors for a given molecule, can be naturally framed as a conditional graph generation task.
1 code implementation • 15 Apr 2024 • Yogesh Verma, Markus Heinonen, Vikas Garg
Climate and weather prediction traditionally relies on complex numerical simulations of atmospheric physics.
no code implementations • 24 Feb 2024 • Alexandru Dumitrescu, Dani Korpela, Markus Heinonen, Yogesh Verma, Valerii Iakovlev, Vikas Garg, Harri Lähdesmäki
This work introduces FMG, a field-based model for drug-like molecule generation.
1 code implementation • 17 Oct 2023 • Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski
In the last decade, we have witnessed the introduction of several novel deep neural network (DNN) architectures exhibiting ever-increasing performance across diverse tasks.
1 code implementation • 9 Jul 2023 • Valerii Iakovlev, Markus Heinonen, Harri Lähdesmäki
We introduce a novel grid-independent model for learning partial differential equations (PDEs) from noisy and partial observations on irregular spatiotemporal grids.
1 code implementation • 5 Jun 2023 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski
To sidestep these difficulties, we propose First-order Repulsive Deep Ensemble (FoRDE), an ensemble learning method based on particle-based variational inference (ParVI) that performs repulsion in the space of first-order input gradients.
no code implementations • 31 May 2023 • Yogesh Verma, Markus Heinonen, Vikas Garg
Antibodies are Y-shaped proteins that neutralize pathogens and constitute the core of our adaptive immune system.
1 code implementation • 12 May 2023 • Oliver Struckmeier, Ievgen Redko, Anton Mallasto, Karol Arndt, Markus Heinonen, Ville Kyrki
Optimal transport (OT) is a powerful geometric tool used to compare and align probability measures following the least effort principle.
1 code implementation • 10 Apr 2023 • Aarne Talman, Hande Celikkanat, Sami Virpioja, Markus Heinonen, Jörg Tiedemann
This paper introduces Bayesian uncertainty modeling using Stochastic Weight Averaging-Gaussian (SWAG) in Natural Language Understanding (NLU) tasks.
1 code implementation • 3 Mar 2023 • Magnus Ross, Markus Heinonen
Hamiltonian mechanics is one of the cornerstones of natural sciences.
1 code implementation • NeurIPS 2023 • Giulio Franzese, Giulio Corallo, Simone Rossi, Markus Heinonen, Maurizio Filippone, Pietro Michiardi
We introduce Functional Diffusion Processes (FDPs), which generalize score-based diffusion models to infinite-dimensional function spaces.
Ranked #26 on Image Generation on CelebA 64x64
no code implementations • 12 Oct 2022 • Yogesh Verma, Samuel Kaski, Markus Heinonen, Vikas Garg
Generating new molecules is fundamental to advancing critical applications such as drug discovery and material synthesis.
1 code implementation • 7 Oct 2022 • Valerii Iakovlev, Cagatay Yildiz, Markus Heinonen, Harri Lähdesmäki
Training dynamic models, such as neural ODEs, on long trajectories is a hard problem that requires using various tricks, such as trajectory splitting, to make model training work in practice.
1 code implementation • 4 Jul 2022 • Vishnu Raj, Tianyu Cui, Markus Heinonen, Pekka Marttinen
We present a simple approach to incorporate prior knowledge in BNNs based on external summary information about the predicted classification probabilities for a given dataset.
1 code implementation • 21 Jun 2022 • Severi Rissanen, Markus Heinonen, Arno Solin
While diffusion models have shown great success in image generation, their noise-inverting generative process does not explicitly consider the structure of images, such as their inherent multi-scale nature.
1 code implementation • 6 Jun 2022 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski
In this paper, we interpret these latent noise variables as implicit representations of simple and domain-agnostic data perturbations during training, producing BNNs that perform well under covariate shift due to input corruptions.
no code implementations • NeurIPS 2021 • Zheyang Shen, Markus Heinonen, Samuel Kaski
Parallel to Langevin dynamics (LD), Stein variational gradient descent (SVGD) similarly minimizes the KL divergence, albeit endowed with a novel Stein-Wasserstein distance, by deterministically transporting a set of particle samples, thus de-randomizing the stochastic diffusion process.
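The deterministic particle transport that SVGD performs can be sketched on a 1-D toy target (a generic textbook sketch, not this paper's contribution; the kernel, bandwidth heuristic, step size, and target are all illustrative choices):

```python
import numpy as np

def svgd_step(x, grad_logp, step=0.1):
    """One SVGD update with an RBF kernel and a median-type
    bandwidth heuristic."""
    n = len(x)
    diff = x[:, None] - x[None, :]            # pairwise x_i - x_j
    h2 = np.median(diff ** 2) / np.log(n + 1) + 1e-8
    k = np.exp(-diff ** 2 / h2)               # kernel matrix
    grad_k = 2.0 * diff / h2 * k              # d k(x_j, x_i) / d x_j
    # Driving term transports particles toward high density;
    # the kernel-gradient term repels them from each other.
    phi = (k @ grad_logp(x) + grad_k.sum(axis=1)) / n
    return x + step * phi

rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=50)             # particles start off-target
grad_logp = lambda x: -x                      # target: standard normal
for _ in range(1000):
    x = svgd_step(x, grad_logp)

print(x.mean(), x.std())                      # near 0 and near 1
```

No randomness enters the update itself; the only stochasticity is in the particle initialization, which is the de-randomization the excerpt refers to.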
no code implementations • 29 Sep 2021 • Valerii Iakovlev, Markus Heinonen, Harri Lähdesmäki
Data-driven neural network models have recently shown great success in modelling and learning complex PDE systems.
1 code implementation • 29 Jun 2021 • David Blanco-Mulero, Markus Heinonen, Ville Kyrki
Graph Gaussian Processes (GGPs) provide a data-efficient solution on graph-structured domains.
1 code implementation • 21 Jun 2021 • Pashupati Hegde, Çağatay Yıldız, Harri Lähdesmäki, Samuel Kaski, Markus Heinonen
Recent machine learning advances have proposed black-box estimation of unknown continuous-time system dynamics directly from data.
no code implementations • 25 May 2021 • Anton Mallasto, Karol Arndt, Markus Heinonen, Samuel Kaski, Ville Kyrki
In this paper, we present affine transport -- a variant of optimal transport, which models the mapping between the state transition distributions of the source and target domains with an affine transformation.
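In the Gaussian case an affine map is in fact the optimal transport (Monge) map, which gives a minimal sketch of the idea (a textbook construction, not the paper's estimator; the means and variances below are made up):

```python
import numpy as np

def gaussian_affine_map(mu_s, var_s, mu_t, var_t):
    """Affine map T(x) = a * (x - mu_s) + mu_t pushing
    N(mu_s, var_s) forward onto N(mu_t, var_t); in 1-D this
    is exactly the optimal transport (Monge) map."""
    a = np.sqrt(var_t / var_s)
    return lambda x: a * (x - mu_s) + mu_t

rng = np.random.default_rng(0)
src = rng.normal(2.0, 3.0, size=100_000)       # source: N(2, 9)
T = gaussian_affine_map(2.0, 9.0, -1.0, 0.25)  # target: N(-1, 0.25)
out = T(src)

print(out.mean(), out.std())                   # close to -1 and 0.5
```

Mapping samples rather than densities is what makes such transports usable for transferring state-transition data between domains.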
1 code implementation • 9 Feb 2021 • Çağatay Yıldız, Markus Heinonen, Harri Lähdesmäki
Model-based reinforcement learning (MBRL) approaches rely on discrete-time state transition models, whereas physical systems and the vast majority of control tasks operate in continuous time.
no code implementations • 2 Nov 2020 • Charles Gadd, Markus Heinonen, Harri Lähdesmäki, Samuel Kaski
In model-based reinforcement learning, efficiency is improved by learning to simulate the world dynamics.
1 code implementation • 26 Oct 2020 • Trung Trinh, Samuel Kaski, Markus Heinonen
We introduce implicit Bayesian neural networks, a simple and scalable approach for uncertainty representation in deep learning.
no code implementations • 19 Oct 2020 • Anton Mallasto, Markus Heinonen, Samuel Kaski
In machine learning and computer vision, optimal transport has had significant success in learning generative models and defining metric distances between structured and stochastic data objects that can be cast as probability measures.
1 code implementation • 18 Jun 2020 • Alexander Aushev, Henri Pesonen, Markus Heinonen, Jukka Corander, Samuel Kaski
In recent years, surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
1 code implementation • ICLR 2021 • Valerii Iakovlev, Markus Heinonen, Harri Lähdesmäki
We demonstrate the model's ability to work with unstructured grids, arbitrary time steps, and noisy observations.
no code implementations • 6 Mar 2020 • Simone Rossi, Markus Heinonen, Edwin V. Bonilla, Zheyang Shen, Maurizio Filippone
Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation in Gaussian process (GP) models.
1 code implementation • NeurIPS 2019 • Çağatay Yıldız, Markus Heinonen, Harri Lähdesmäki
We present Ordinary Differential Equation Variational Auto-Encoder (ODE$^2$VAE), a latent second order ODE model for high-dimensional sequential data.
Ranked #1 on Video Prediction on CMU Mocap-1
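The backbone idea of a second-order latent ODE — rewriting acceleration dynamics as a coupled first-order system over position and velocity — can be sketched on a harmonic oscillator (the dynamics and integrator here are illustrative stand-ins for the model's learned neural dynamics):

```python
import numpy as np

def f(state):
    """First-order form of x'' = -x: state = (position, velocity)."""
    x, v = state
    return np.array([v, -x])       # x' = v, v' = -x (acceleration field)

def rk4(state, dt):
    """One classical Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.0])       # start at x=1 with zero velocity
n_steps = 1000
dt = 2 * np.pi / n_steps           # integrate exactly one period
for _ in range(n_steps):
    state = rk4(state, dt)

print(state)                       # back near [1, 0] after one period
assert np.allclose(state, [1.0, 0.0], atol=1e-6)
```

In the actual model the acceleration field is a neural network and the state lives in a learned latent space, but the integration mechanics are the same.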
no code implementations • 23 May 2019 • Zheyang Shen, Markus Heinonen, Samuel Kaski
We introduce the convolutional spectral kernel (CSK), a novel family of non-stationary, nonparametric covariance kernels for Gaussian process (GP) models, derived from the convolution between two imaginary radial basis functions.
1 code implementation • 27 Nov 2018 • Sami Remes, Markus Heinonen, Samuel Kaski
Spectral mixture kernels have been proposed as general-purpose, flexible kernels for learning and discovering more complicated patterns in the data.
no code implementations • 10 Oct 2018 • Zheyang Shen, Markus Heinonen, Samuel Kaski
The expressive power of Gaussian processes depends heavily on the choice of kernel.
1 code implementation • 9 Oct 2018 • Pashupati Hegde, Markus Heinonen, Harri Lähdesmäki, Samuel Kaski
We propose a novel deep learning paradigm of differential flows that learn stochastic differential equation transformations of inputs prior to a standard classification or regression function.
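A minimal Euler-Maruyama sketch of the idea, flowing inputs through an SDE before a downstream predictor; the drift, diffusion, and read-out below are toy stand-ins, not the learned Gaussian-process components of the paper:

```python
import numpy as np

def sde_flow(x, drift, diffusion, t_end=1.0, n_steps=100, rng=None):
    """Euler-Maruyama: x_{k+1} = x_k + f(x_k) dt + g(x_k) sqrt(dt) eps."""
    rng = rng or np.random.default_rng(0)
    dt = t_end / n_steps
    for _ in range(n_steps):
        eps = rng.standard_normal(x.shape)
        x = x + drift(x) * dt + diffusion(x) * np.sqrt(dt) * eps
    return x

# Toy components (assumptions): the drift pushes points away from
# the origin along the first coordinate; diffusion is small, constant.
drift = lambda x: np.sign(x[:, :1]) * np.array([2.0, 0.0])
diffusion = lambda x: 0.05

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(200, 2))   # raw inputs
z = sde_flow(x, drift, diffusion)         # flowed representation
pred = (z[:, 0] > 0).astype(int)          # trivial linear read-out
```

The flow acts as a learned, stochastic feature transformation: after it, even a trivial linear read-out sees better-separated inputs.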
1 code implementation • 6 Oct 2018 • Kenneth Blomqvist, Samuel Kaski, Markus Heinonen
We propose deep convolutional Gaussian processes, a deep Gaussian process architecture with convolutional structure.
1 code implementation • 16 Jul 2018 • Cagatay Yildiz, Markus Heinonen, Jukka Intosalmi, Henrik Mannerström, Harri Lähdesmäki
We introduce a novel paradigm for learning non-parametric drift and diffusion functions for stochastic differential equations (SDEs).
1 code implementation • 18 Apr 2018 • Markus Heinonen, Maria Osmala, Henrik Mannerström, Janne Wallenius, Samuel Kaski, Juho Rousu, Harri Lähdesmäki
Flux analysis methods commonly place unrealistic assumptions on fluxes due to the convenience of formulating the problem as a linear programming model, and most methods ignore the notable uncertainty in flux estimates.
1 code implementation • 13 Mar 2018 • Pashupati Hegde, Markus Heinonen, Samuel Kaski
We propose a novel model family of zero-inflated Gaussian processes (ZiGP) for such zero-inflated datasets: sparse kernels are produced by learning a latent probit Gaussian process that can zero out kernel rows and columns whenever the signal is absent.
2 code implementations • ICML 2018 • Markus Heinonen, Cagatay Yildiz, Henrik Mannerström, Jukka Intosalmi, Harri Lähdesmäki
In conventional ODE modelling, the coefficients of an equation driving the system state forward in time are estimated.
1 code implementation • 8 Feb 2018 • Emmi Jokinen, Markus Heinonen, Harri Lähdesmäki
We introduce a Bayesian data fusion model that re-calibrates the experimental and in silico data sources and then learns a predictive GP model from the combined data.
1 code implementation • NeurIPS 2017 • Sami Remes, Markus Heinonen, Samuel Kaski
We propose non-stationary spectral kernels for Gaussian process regression.
1 code implementation • 27 Feb 2017 • Sami Remes, Markus Heinonen, Samuel Kaski
We introduce a novel kernel that models input-dependent couplings across multiple latent processes.
no code implementations • 9 May 2016 • Romain Brault, Florence d'Alché-Buc, Markus Heinonen
Devoted to multi-task learning and structured output learning, operator-valued kernels provide a flexible tool to build vector-valued functions in the context of Reproducing Kernel Hilbert Spaces.
1 code implementation • 18 Aug 2015 • Markus Heinonen, Henrik Mannerström, Juho Rousu, Samuel Kaski, Harri Lähdesmäki
We present a novel approach for fully non-stationary Gaussian process regression (GPR), where all three key parameters -- noise variance, signal variance and lengthscale -- can be simultaneously input-dependent.
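The input-dependent lengthscale component can be illustrated with the Gibbs kernel, a non-stationary generalization of the RBF kernel that remains a valid covariance for any positive lengthscale function; the lengthscale function below is an arbitrary choice for illustration:

```python
import numpy as np

def gibbs_kernel(x1, x2, ell):
    """Non-stationary RBF with input-dependent lengthscale ell(x):
    k(x,x') = sqrt(2 l l' / (l^2 + l'^2)) * exp(-(x-x')^2 / (l^2 + l'^2))."""
    l1, l2 = ell(x1)[:, None], ell(x2)[None, :]
    s = l1 ** 2 + l2 ** 2
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.sqrt(2.0 * l1 * l2 / s) * np.exp(-d2 / s)

ell = lambda x: 0.2 + 0.5 * np.abs(np.sin(x))   # illustrative lengthscale
x = np.linspace(0.0, 5.0, 60)
K = gibbs_kernel(x, x, ell)

# Any positive lengthscale function yields a valid covariance matrix:
eigs = np.linalg.eigvalsh(K)
print(eigs.min())                               # >= 0 up to round-off
assert eigs.min() > -1e-8 and np.allclose(np.diag(K), 1.0)
```

The normalizing prefactor is what keeps the kernel positive definite; dropping it and naively plugging a varying lengthscale into the RBF form does not.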
no code implementations • 19 Nov 2014 • Markus Heinonen, Florence d'Alché-Buc
Modeling dynamical systems with ordinary differential equations implies a mechanistic view of the process underlying the dynamics.