
no code implementations • 17 Oct 2023 • Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski

The remarkable success of deep neural networks (DNN) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity.

1 code implementation • 9 Jul 2023 • Valerii Iakovlev, Markus Heinonen, Harri Lähdesmäki

We introduce a novel grid-independent model for learning partial differential equations (PDEs) from noisy and partial observations on irregular spatiotemporal grids.

no code implementations • 5 Jun 2023 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski

Deep Ensembles (DEs) demonstrate improved accuracy, calibration and robustness to perturbations over single neural networks partly due to their functional diversity.
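
As a sketch of the general mechanism (not this paper's contribution), a deep ensemble's prediction is simply the average of its members' predictive distributions; here random linear classifiers stand in for independently trained networks:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_members, n_classes, n_features = 5, 3, 4

# stand-ins for independently trained networks: random linear classifiers
members = [rng.normal(size=(n_features, n_classes)) for _ in range(n_members)]

x = rng.normal(size=(1, n_features))
probs = np.stack([softmax(x @ W) for W in members])  # (M, 1, C)

# ensemble prediction: average the members' predictive distributions
ensemble = probs.mean(axis=0)
```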

no code implementations • 31 May 2023 • Yogesh Verma, Markus Heinonen, Vikas Garg

Antibodies are Y-shaped proteins that neutralize pathogens and constitute the core of our adaptive immune system.

1 code implementation • 12 May 2023 • Oliver Struckmeier, Ievgen Redko, Anton Mallasto, Karol Arndt, Markus Heinonen, Ville Kyrki

Optimal transport (OT) is a powerful geometric tool used to compare and align probability measures following the least effort principle.
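
For illustration only, the entropy-regularized variant of OT can be solved with Sinkhorn iterations; the histograms, cost matrix, and `eps` below are toy assumptions, not taken from the paper:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropy-regularized optimal transport between histograms a and b
    with cost matrix C, solved by alternating Sinkhorn scaling updates."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # the transport plan

a = np.array([0.5, 0.5])                 # source histogram
b = np.array([0.5, 0.5])                 # target histogram
C = np.array([[0.0, 1.0], [1.0, 0.0]])   # moving mass across costs 1
P = sinkhorn(a, b, C)
# least-effort principle: the plan keeps mass on the cheap (diagonal) routes
```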

1 code implementation • 10 Apr 2023 • Aarne Talman, Hande Celikkanat, Sami Virpioja, Markus Heinonen, Jörg Tiedemann

This paper introduces Bayesian uncertainty modeling using Stochastic Weight Averaging-Gaussian (SWAG) in Natural Language Understanding (NLU) tasks.
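
A minimal sketch of the diagonal variant of SWAG (not the paper's NLU setup): collect weight iterates along training, fit a Gaussian from their first two moments, and sample weights from it. The iterates here are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical SGD weight iterates collected along a training run
iterates = [rng.normal(loc=1.0, scale=0.1, size=10) for _ in range(30)]

# SWAG-diagonal: first and second moments of the iterates
mean = np.mean(iterates, axis=0)
sq_mean = np.mean([w**2 for w in iterates], axis=0)
var = np.clip(sq_mean - mean**2, 1e-12, None)

# approximate posterior samples: w ~ N(mean, diag(var))
samples = mean + np.sqrt(var) * rng.normal(size=(100, 10))
```

Predictions would then be averaged over networks instantiated with each sampled weight vector.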

1 code implementation • 3 Mar 2023 • Magnus Ross, Markus Heinonen

Hamiltonian mechanics is one of the cornerstones of natural sciences.
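
As background to why Hamiltonian structure matters (a generic sketch, not the paper's learning method), Hamilton's equations can be integrated with the symplectic semi-implicit Euler scheme, which approximately conserves energy:

```python
import numpy as np

def symplectic_euler(q, p, dH_dq, dH_dp, dt, steps):
    """Integrate Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq
    with the symplectic semi-implicit Euler method."""
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * dH_dq(q)   # momentum update first
        q = q + dt * dH_dp(p)   # then position, using the new momentum
        traj.append((q, p))
    return np.array(traj)

# harmonic oscillator: H(q, p) = p^2/2 + q^2/2
traj = symplectic_euler(q=1.0, p=0.0,
                        dH_dq=lambda q: q, dH_dp=lambda p: p,
                        dt=0.01, steps=1000)
energy = 0.5 * traj[:, 1]**2 + 0.5 * traj[:, 0]**2  # stays near 0.5
```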

no code implementations • 12 Oct 2022 • Yogesh Verma, Samuel Kaski, Markus Heinonen, Vikas Garg

Generating new molecules is fundamental to advancing critical applications such as drug discovery and material synthesis.

1 code implementation • 7 Oct 2022 • Valerii Iakovlev, Çağatay Yıldız, Markus Heinonen, Harri Lähdesmäki

Training dynamic models, such as neural ODEs, on long trajectories is a hard problem that requires using various tricks, such as trajectory splitting, to make model training work in practice.
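
Trajectory splitting itself is simple to illustrate; a hypothetical helper (not from the paper) that cuts a long trajectory into short overlapping training segments:

```python
import numpy as np

def split_trajectory(traj, seg_len):
    """Split a long trajectory of shape (T, d) into overlapping segments
    of length seg_len. Consecutive segments share their endpoint, so each
    segment's first state serves as its initial condition and the model
    only has to fit short rollouts."""
    T = len(traj)
    starts = range(0, T - seg_len + 1, seg_len - 1)
    return np.stack([traj[s:s + seg_len] for s in starts])

traj = np.arange(9, dtype=float)[:, None]  # toy trajectory, T=9, d=1
segs = split_trajectory(traj, seg_len=3)   # 4 segments of length 3
```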

1 code implementation • 4 Jul 2022 • Vishnu Raj, Tianyu Cui, Markus Heinonen, Pekka Marttinen

We present a simple approach to incorporate prior knowledge in BNNs based on external summary information about the predicted classification probabilities for a given dataset.

1 code implementation • 21 Jun 2022 • Severi Rissanen, Markus Heinonen, Arno Solin

While diffusion models have shown great success in image generation, their noise-inverting generative process does not explicitly consider the structure of images, such as their inherent multi-scale nature.
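
For context, the standard noise-inverting forward process the abstract refers to can be sketched as follows (a generic DDPM-style corruption, not the paper's proposed alternative):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noising(x0, alphas_bar, t, rng):
    """Standard diffusion forward process:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

betas = np.linspace(1e-4, 0.02, 100)    # toy noise schedule
alphas_bar = np.cumprod(1.0 - betas)

x0 = rng.normal(size=(8, 8))            # toy "image"
xT = forward_noising(x0, alphas_bar, t=99, rng=rng)  # nearly pure noise
```

Note that the isotropic noise is added pixel-wise, with no reference to the image's spatial or multi-scale structure.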

1 code implementation • 6 Jun 2022 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski

In this paper, we interpret these latent noise variables as implicit representations of simple and domain-agnostic data perturbations during training, producing BNNs that perform well under covariate shift due to input corruptions.

no code implementations • NeurIPS 2021 • Zheyang Shen, Markus Heinonen, Samuel Kaski

Parallel to LD, Stein variational gradient descent (SVGD) similarly minimizes the KL divergence, albeit endowed with a novel Stein-Wasserstein distance, by deterministically transporting a set of particle samples, thus de-randomizing the stochastic diffusion process.
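
A minimal SVGD sketch on a toy target (assumptions: an RBF kernel with fixed bandwidth `h` and a standard-normal target; none of this is specific to the paper):

```python
import numpy as np

def rbf_kernel_and_grad(X, h):
    # k(x, y) = exp(-||x - y||^2 / h) and its gradient w.r.t. its first argument
    diff = X[:, None, :] - X[None, :, :]
    K = np.exp(-(diff**2).sum(-1) / h)
    gradK = -2.0 / h * diff * K[..., None]
    return K, gradK

def svgd_step(X, grad_logp, h=1.0, step=0.1):
    """One SVGD update: deterministically transport particles along the
    kernelized Stein direction
    phi(x) = E_j[ k(x_j, x) grad log p(x_j) + grad_{x_j} k(x_j, x) ]."""
    K, gradK = rbf_kernel_and_grad(X, h)
    phi = (K[..., None] * grad_logp(X)[:, None, :]).mean(0) + gradK.mean(0)
    return X + step * phi

rng = np.random.default_rng(0)
X = rng.normal(loc=3.0, size=(50, 1))        # particles start off-target
for _ in range(200):
    X = svgd_step(X, grad_logp=lambda X: -X)  # target: standard normal
```

The first term pulls particles toward high-density regions; the kernel-gradient term repels them from each other, keeping the sample set spread out.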

no code implementations • 29 Sep 2021 • Valerii Iakovlev, Markus Heinonen, Harri Lähdesmäki

Data-driven neural network models have recently shown great success in modelling and learning complex PDE systems.

1 code implementation • 29 Jun 2021 • David Blanco-Mulero, Markus Heinonen, Ville Kyrki

Graph Gaussian Processes (GGPs) provide a data-efficient solution on graph structured domains.

1 code implementation • 21 Jun 2021 • Pashupati Hegde, Çağatay Yıldız, Harri Lähdesmäki, Samuel Kaski, Markus Heinonen

Recent machine learning advances have proposed black-box estimation of unknown continuous-time system dynamics directly from data.

no code implementations • 25 May 2021 • Anton Mallasto, Karol Arndt, Markus Heinonen, Samuel Kaski, Ville Kyrki

In this paper, we present affine transport -- a variant of optimal transport that models the mapping between the state transition distributions of the source and target domains with an affine transformation.

1 code implementation • 9 Feb 2021 • Çağatay Yıldız, Markus Heinonen, Harri Lähdesmäki

Model-based reinforcement learning (MBRL) approaches rely on discrete-time state transition models whereas physical systems and the vast majority of control tasks operate in continuous-time.


no code implementations • 2 Nov 2020 • Charles Gadd, Markus Heinonen, Harri Lähdesmäki, Samuel Kaski

In model-based reinforcement learning efficiency is improved by learning to simulate the world dynamics.

1 code implementation • 26 Oct 2020 • Trung Trinh, Samuel Kaski, Markus Heinonen

We introduce implicit Bayesian neural networks, a simple and scalable approach for uncertainty representation in deep learning.

no code implementations • 19 Oct 2020 • Anton Mallasto, Markus Heinonen, Samuel Kaski

In machine learning and computer vision, optimal transport has had significant success in learning generative models and defining metric distances between structured and stochastic data objects that can be cast as probability measures.

1 code implementation • 18 Jun 2020 • Alexander Aushev, Henri Pesonen, Markus Heinonen, Jukka Corander, Samuel Kaski

In recent years, surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.

1 code implementation • ICLR 2021 • Valerii Iakovlev, Markus Heinonen, Harri Lähdesmäki

We demonstrate the model's ability to work with unstructured grids, arbitrary time steps, and noisy observations.

no code implementations • 6 Mar 2020 • Simone Rossi, Markus Heinonen, Edwin V. Bonilla, Zheyang Shen, Maurizio Filippone

Variational inference techniques based on inducing variables provide an elegant framework for scalable posterior estimation in Gaussian process (GP) models.

1 code implementation • NeurIPS 2019 • Çağatay Yıldız, Markus Heinonen, Harri Lähdesmäki

We present Ordinary Differential Equation Variational Auto-Encoder (ODE2VAE), a latent second order ODE model for high-dimensional sequential data.

Ranked #1 on Video Prediction on CMU Mocap-1
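
The "second order" part is easy to make concrete: a latent ODE z'' = f(z, z') is rewritten as a first-order system over positions and velocities. A toy sketch (explicit Euler, hypothetical acceleration field; not ODE2VAE's learned dynamics):

```python
import numpy as np

def second_order_flow(z0, v0, accel, dt, steps):
    """A second-order ODE z'' = f(z, z') rewritten as the first-order
    system (z, v)' = (v, f(z, v)) and integrated with explicit Euler."""
    z, v = z0, v0
    path = [z]
    for _ in range(steps):
        z, v = z + dt * v, v + dt * accel(z, v)
        path.append(z)
    return np.array(path)

# toy acceleration field: damped spring, z'' = -z - 0.1 z'
path = second_order_flow(z0=np.array([1.0]), v0=np.array([0.0]),
                         accel=lambda z, v: -z - 0.1 * v,
                         dt=0.01, steps=500)
```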

no code implementations • 23 May 2019 • Zheyang Shen, Markus Heinonen, Samuel Kaski

We introduce the convolutional spectral kernel (CSK), a novel family of non-stationary, nonparametric covariance kernels for Gaussian process (GP) models, derived from the convolution between two imaginary radial basis functions.

1 code implementation • 27 Nov 2018 • Sami Remes, Markus Heinonen, Samuel Kaski

Spectral mixture kernels have been proposed as general-purpose, flexible kernels for learning and discovering more complicated patterns in the data.
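
For reference, the standard (stationary) spectral mixture kernel that such work generalizes evaluates as a weighted sum of Gaussian-envelope cosines; the component parameters below are arbitrary:

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, scales):
    """Stationary spectral mixture kernel
    k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q),
    one Gaussian-times-cosine term per spectral component."""
    tau = np.asarray(tau)[..., None]
    terms = (np.exp(-2 * np.pi**2 * tau**2 * scales**2)
             * np.cos(2 * np.pi * tau * means))
    return (weights * terms).sum(axis=-1)

tau = np.linspace(0, 2, 50)  # input distances
k = spectral_mixture_kernel(tau,
                            weights=np.array([0.7, 0.3]),
                            means=np.array([1.0, 3.0]),
                            scales=np.array([0.5, 0.2]))
```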

no code implementations • 10 Oct 2018 • Zheyang Shen, Markus Heinonen, Samuel Kaski

The expressive power of Gaussian processes depends heavily on the choice of kernel.

1 code implementation • 9 Oct 2018 • Pashupati Hegde, Markus Heinonen, Harri Lähdesmäki, Samuel Kaski

We propose a novel deep learning paradigm of differential flows that learn a stochastic differential equation transformation of inputs prior to a standard classification or regression function.

1 code implementation • 6 Oct 2018 • Kenneth Blomqvist, Samuel Kaski, Markus Heinonen

We propose deep convolutional Gaussian processes, a deep Gaussian process architecture with convolutional structure.

1 code implementation • 16 Jul 2018 • Çağatay Yıldız, Markus Heinonen, Jukka Intosalmi, Henrik Mannerström, Harri Lähdesmäki

We introduce a novel paradigm for learning non-parametric drift and diffusion functions for stochastic differential equations (SDEs).
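
Once drift and diffusion functions are in hand (learned or not), an SDE is simulated with the Euler-Maruyama scheme; the Ornstein-Uhlenbeck drift and constant diffusion below are toy stand-ins for learned functions:

```python
import numpy as np

def euler_maruyama(x0, drift, diff, dt, steps, rng):
    """Simulate dx = f(x) dt + g(x) dW with the Euler-Maruyama scheme;
    f and g stand in for the drift and diffusion functions."""
    x = np.full(steps + 1, np.nan)
    x[0] = x0
    for t in range(steps):
        dW = rng.normal(scale=np.sqrt(dt))  # Brownian increment
        x[t + 1] = x[t] + drift(x[t]) * dt + diff(x[t]) * dW
    return x

rng = np.random.default_rng(0)
# toy Ornstein-Uhlenbeck process: drift pulls toward 0, constant diffusion
path = euler_maruyama(x0=2.0, drift=lambda x: -x, diff=lambda x: 0.3,
                      dt=0.01, steps=1000, rng=rng)
```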

1 code implementation • 18 Apr 2018 • Markus Heinonen, Maria Osmala, Henrik Mannerström, Janne Wallenius, Samuel Kaski, Juho Rousu, Harri Lähdesmäki

Flux analysis methods commonly place unrealistic assumptions on fluxes due to the convenience of formulating the problem as a linear programming model, and most methods ignore the notable uncertainty in flux estimates.

1 code implementation • 13 Mar 2018 • Pashupati Hegde, Markus Heinonen, Samuel Kaski

We propose a novel model family of zero-inflated Gaussian processes (ZiGP) for such zero-inflated datasets, produced by sparse kernels through learning a latent probit Gaussian process that can zero out kernel rows and columns whenever the signal is absent.

2 code implementations • ICML 2018 • Markus Heinonen, Çağatay Yıldız, Henrik Mannerström, Jukka Intosalmi, Harri Lähdesmäki

In conventional ODE modelling, the coefficients of an equation driving the system state forward in time are estimated.

1 code implementation • 8 Feb 2018 • Emmi Jokinen, Markus Heinonen, Harri Lähdesmäki

We introduce a Bayesian data fusion model that re-calibrates the experimental and in silico data sources and then learns a predictive GP model from the combined data.

1 code implementation • NeurIPS 2017 • Sami Remes, Markus Heinonen, Samuel Kaski

We propose non-stationary spectral kernels for Gaussian process regression.

1 code implementation • 27 Feb 2017 • Sami Remes, Markus Heinonen, Samuel Kaski

We introduce a novel kernel that models input-dependent couplings across multiple latent processes.

no code implementations • 9 May 2016 • Romain Brault, Florence d'Alché-Buc, Markus Heinonen

Devoted to multi-task learning and structured output learning, operator-valued kernels provide a flexible tool to build vector-valued functions in the context of Reproducing Kernel Hilbert Spaces.

1 code implementation • 18 Aug 2015 • Markus Heinonen, Henrik Mannerström, Juho Rousu, Samuel Kaski, Harri Lähdesmäki

We present a novel approach for fully non-stationary Gaussian process regression (GPR), where all three key parameters -- noise variance, signal variance and lengthscale -- can be simultaneously input-dependent.
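
One standard way to make a kernel parameter input-dependent (shown here for intuition; the paper treats all three parameters jointly) is the Gibbs construction, where the lengthscale ell(x) varies with the input:

```python
import numpy as np

def gibbs_kernel(x1, x2, ell, sigma):
    """Gibbs non-stationary RBF kernel with input-dependent lengthscale
    ell(x) and signal variance sigma(x); an input-dependent noise variance
    would additionally enter on the diagonal."""
    l1, l2 = ell(x1)[:, None], ell(x2)[None, :]
    s1, s2 = sigma(x1)[:, None], sigma(x2)[None, :]
    sq = (x1[:, None] - x2[None, :])**2
    pre = s1 * s2 * np.sqrt(2 * l1 * l2 / (l1**2 + l2**2))
    return pre * np.exp(-sq / (l1**2 + l2**2))

x = np.linspace(0, 1, 5)
K = gibbs_kernel(x, x,
                 ell=lambda x: 0.2 + 0.3 * x,       # lengthscale grows with x
                 sigma=lambda x: np.ones_like(x))   # unit signal variance
```

The prefactor with the square root is what keeps the kernel positive semi-definite despite the varying lengthscale.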

no code implementations • 19 Nov 2014 • Markus Heinonen, Florence d'Alché-Buc

Modeling dynamical systems with ordinary differential equations implies a mechanistic view of the process underlying the dynamics.
