Search Results for author: Marc Peter Deisenroth

Found 66 papers, 28 papers with code

Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks

1 code implementation • 7 Jun 2024 • Joel Oskarsson, Tomas Landelius, Marc Peter Deisenroth, Fredrik Lindsten

While most current machine learning models focus on deterministic forecasts, accurately capturing the uncertainty in the chaotic weather system calls for probabilistic modeling.

Graph Construction • Weather Forecasting

Scalable Data Assimilation with Message Passing

1 code implementation • 19 Apr 2024 • Oscar Key, So Takao, Daniel Giles, Marc Peter Deisenroth

Data assimilation is a core component of numerical weather prediction systems.

Bayesian Inference

Iterated INLA for State and Parameter Estimation in Nonlinear Dynamical Systems

1 code implementation • 26 Feb 2024 • Rafael Anderka, Marc Peter Deisenroth, So Takao

Data assimilation (DA) methods use priors arising from differential equations to robustly interpolate and extrapolate data.

Gaussian Processes on Cellular Complexes

no code implementations • 2 Nov 2023 • Mathieu Alain, So Takao, Brooks Paige, Marc Peter Deisenroth

In this paper, we go beyond this dyadic setting and consider polyadic relations that include interactions between vertices, edges and one of their generalisations, known as cells.

Gaussian Processes

A Unifying Variational Framework for Gaussian Process Motion Planning

1 code implementation • 2 Sep 2023 • Lucas Cosier, Rares Iordan, Sicelukwanda Zwane, Giovanni Franzese, James T. Wilson, Marc Peter Deisenroth, Alexander Terenin, Yasemin Bekiroglu

To control how a robot moves, motion planning algorithms must compute paths in high-dimensional state spaces while accounting for physical constraints related to motors and joints, generating smooth and stable motions, avoiding obstacles, and preventing collisions.

Gaussian Processes • Motion Planning

Faster Training of Neural ODEs Using Gauß-Legendre Quadrature

1 code implementation • 21 Aug 2023 • Alexander Norcliffe, Marc Peter Deisenroth

In this paper, we propose an alternative way to speed up the training of neural ODEs.

Time Series
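The title above names the core numerical tool. As a hedged illustration (of the quadrature rule itself, not the paper's training scheme), an n-point Gauß-Legendre rule integrates polynomials up to degree 2n − 1 exactly with very few function evaluations:

```python
import numpy as np

def gauss_legendre_integrate(f, a, b, n=5):
    """Approximate the integral of f over [a, b] with an n-point
    Gauss-Legendre rule (exact for polynomials of degree <= 2n - 1)."""
    nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)            # rescale to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

# A 5-point rule integrates x^4 on [0, 1] exactly (true value: 1/5).
approx = gauss_legendre_integrate(lambda x: x**4, 0.0, 1.0, n=5)
print(approx)
```

The appeal for ODE-based models is the same as in classical numerics: high accuracy from a handful of carefully placed evaluation points.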

Grasp Transfer based on Self-Aligning Implicit Representations of Local Surfaces

no code implementations • 15 Aug 2023 • Ahmet Tekden, Marc Peter Deisenroth, Yasemin Bekiroglu

This work addresses the problem of transferring a grasp experience or a demonstration to a novel object that shares shape similarities with objects the robot has previously encountered.

Object

On Combining Expert Demonstrations in Imitation Learning via Optimal Transport

no code implementations • 20 Jul 2023 • Ilana Sebag, Samuel Cohen, Marc Peter Deisenroth

One of the key approaches to IL is to define a distance between agent and expert and to find an agent policy that minimizes that distance.

Imitation Learning • OpenAI Gym

Implicit regularisation in stochastic gradient descent: from single-objective to two-player games

no code implementations • 11 Jul 2023 • Mihaela Rosca, Marc Peter Deisenroth

In this work, we provide a novel approach to use BEA, and show how our approach can be used to construct continuous-time flows with vector fields that can be written as gradients.

Investigating the Edge of Stability Phenomenon in Reinforcement Learning

no code implementations • 9 Jul 2023 • Rares Iordan, Marc Peter Deisenroth, Mihaela Rosca

Recent progress in understanding the optimisation dynamics of neural networks trained with full-batch gradient descent and momentum has come from uncovering the edge of stability phenomenon in supervised learning.

Q-Learning • Reinforcement Learning +1

Actually Sparse Variational Gaussian Processes

1 code implementation • 11 Apr 2023 • Harry Jake Cunningham, Daniel Augusto de Souza, So Takao, Mark van der Wilk, Marc Peter Deisenroth

For large datasets, sparse GPs reduce these demands by conditioning on a small set of inducing variables designed to summarise the data.

Gaussian Processes
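To make the inducing-variable idea concrete, here is a hedged subset-of-regressors-style sketch (a generic sparse-GP flavour for illustration, not this paper's variational construction): M inducing inputs replace the N×N solve with an M×M one.

```python
import numpy as np

def rbf(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

# Toy data: noisy sine observations.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(60)

Z = np.linspace(0.0, 1.0, 12)   # 12 inducing inputs summarise 60 data points
noise = 0.05**2

Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
Kuf = rbf(Z, X)
# All linear algebra happens in the M-dimensional inducing space.
A = Kuu + Kuf @ Kuf.T / noise
Xs = np.linspace(0.0, 1.0, 200)
Ksu = rbf(Xs, Z)
mean = Ksu @ np.linalg.solve(A, Kuf @ y) / noise
```

The predictive mean tracks the sine closely even though the solve is only 12×12; variational schemes like the paper's additionally optimise the inducing variables and correct the approximation's overconfidence.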

Finetuning from Offline Reinforcement Learning: Challenges, Trade-offs and Practical Solutions

no code implementations • 30 Mar 2023 • Yicheng Luo, Jackie Kay, Edward Grefenstette, Marc Peter Deisenroth

While offline RL algorithms can in principle be used for finetuning, in practice, their online performance improves slowly.

Diversity • Offline RL +2

Optimal Transport for Offline Imitation Learning

1 code implementation • 24 Mar 2023 • Yicheng Luo, Zhengyao Jiang, Samuel Cohen, Edward Grefenstette, Marc Peter Deisenroth

In this paper, we introduce Optimal Transport Reward labeling (OTR), an algorithm that assigns rewards to offline trajectories, with a few high-quality demonstrations.

D4RL • Imitation Learning +2

Vector-valued Gaussian Processes on Riemannian Manifolds via Gauge Independent Projected Kernels

no code implementations • NeurIPS 2021 • Michael Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Whye Teh, Marc Peter Deisenroth

Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating construction of optimal decision-making systems.

BIG-bench Machine Learning • Decision Making +2

Gaussian Process Sampling and Optimization with Approximate Upper and Lower Bounds

no code implementations • 22 Oct 2021 • Vu Nguyen, Marc Peter Deisenroth, Michael A. Osborne

More specifically, we propose the first use of such bounds to improve Gaussian process (GP) posterior sampling and Bayesian optimization (BO).

Bayesian Optimization

Learning to Transfer: A Foliated Theory

no code implementations • 22 Jul 2021 • Janith Petangoda, Marc Peter Deisenroth, Nicholas A. M. Monk

Learning to transfer considers learning solutions to tasks in such a way that relevant knowledge can be transferred from known task solutions to new, related tasks.

The Graph Cut Kernel for Ranked Data

1 code implementation • 26 May 2021 • Michelangelo Conserva, Marc Peter Deisenroth, K. S. Sesh Kumar

Many algorithms for ranked data become computationally intractable as the number of objects grows due to the complex geometric structure induced by rankings.

Recommendation Systems

Learning Contact Dynamics using Physically Structured Neural Networks

1 code implementation • 22 Feb 2021 • Andreas Hochlehnert, Alexander Terenin, Steindór Sæmundsson, Marc Peter Deisenroth

Learning physically structured representations of dynamical systems that include contact between different objects is an important problem for learning-based approaches in robotics.

Sliced Multi-Marginal Optimal Transport

no code implementations • 14 Feb 2021 • Samuel Cohen, Alexander Terenin, Yannik Pitcan, Brandon Amos, Marc Peter Deisenroth, K. S. Sesh Kumar

To construct this distance, we introduce a characterization of the one-dimensional multi-marginal Kantorovich problem and use it to highlight a number of properties of the sliced multi-marginal Wasserstein distance.

Density Estimation • Multi-Task Learning
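The sliced construction above rests on the fact that one-dimensional optimal transport is solved by sorting. As a hedged illustration (a two-marginal sketch, not the paper's multi-marginal distance), a sliced Wasserstein estimate projects point clouds onto random directions and matches sorted projections:

```python
import numpy as np

def sliced_wasserstein2(X, Y, n_projections=200, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance between
    two equal-sized point clouds X, Y in R^d: project onto random unit
    directions, where transport reduces to sorting."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)  # 1-D squared W2 via sorted matching
    return np.sqrt(total / n_projections)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
Y = rng.standard_normal((500, 3)) + 2.0   # shifted cloud
print(sliced_wasserstein2(X, X))          # identical clouds: 0
print(sliced_wasserstein2(X, Y))          # shifted clouds: clearly positive
```

The sorting step is what keeps each slice cheap; the multi-marginal version in the paper exploits the same one-dimensional structure across several measures at once.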

Healing Products of Gaussian Processes

1 code implementation • 14 Feb 2021 • Samuel Cohen, Rendani Mbuvha, Tshilidzi Marwala, Marc Peter Deisenroth

Gaussian processes (GPs) are nonparametric Bayesian models that have been applied to regression and classification problems.

Gaussian Processes • General Classification +2

Using Gaussian Processes to Design Dynamic Experiments for Black-Box Model Discrimination under Uncertainty

no code implementations • 7 Feb 2021 • Simon Olofsson, Eduardo S. Schultz, Adel Mhamdi, Alexander Mitsos, Marc Peter Deisenroth, Ruth Misener

Typically, several rival mechanistic models can explain the available data, so design of dynamic experiments for model discrimination helps optimally collect additional data by finding experimental settings that maximise model prediction divergence.

Gaussian Processes

Cauchy-Schwarz Regularized Autoencoder

no code implementations • 6 Jan 2021 • Linh Tran, Maja Pantic, Marc Peter Deisenroth

To perform efficient inference for GMM priors, we introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.

Clustering • Density Estimation
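The closed-form objective in this entry rests on the standard definition of the Cauchy-Schwarz divergence, which, unlike the KL divergence, is analytic for Gaussian mixtures:

```latex
D_{\mathrm{CS}}(p, q)
  = -\log \frac{\int p(x)\, q(x)\, \mathrm{d}x}
               {\sqrt{\int p(x)^2\, \mathrm{d}x \, \int q(x)^2\, \mathrm{d}x}}
```

Each of the three integrals expands into pairwise products of mixture components, and every cross-term is analytic via the Gaussian product identity $\int \mathcal{N}(x; \mu_1, \Sigma_1)\, \mathcal{N}(x; \mu_2, \Sigma_2)\, \mathrm{d}x = \mathcal{N}(\mu_1; \mu_2, \Sigma_1 + \Sigma_2)$, which is what makes a GMM prior tractable in this objective.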

Pathwise Conditioning of Gaussian Processes

2 code implementations • 8 Nov 2020 • James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth

As Gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer.

Gaussian Processes
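The finite-dimensional core of pathwise conditioning is Matheron's update: a prior sample plus a data-driven correction yields a posterior sample. A minimal noiseless sketch (toy kernel and data, not the paper's full machinery):

```python
import numpy as np

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel between 1-D input arrays."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(0)
X = np.array([0.1, 0.5, 0.9])       # observed inputs
y = np.array([0.0, 1.0, -0.5])      # noiseless observations
Xs = np.linspace(0.0, 1.0, 50)      # test inputs

# 1. Draw one joint prior sample at [Xs, X].
Xall = np.concatenate([Xs, X])
K = rbf(Xall, Xall) + 1e-10 * np.eye(len(Xall))
f = np.linalg.cholesky(K) @ rng.standard_normal(len(Xall))
fs, fX = f[:len(Xs)], f[len(Xs):]

# 2. Matheron's update: correct the prior sample by the residual at the data.
Kxx = rbf(X, X) + 1e-10 * np.eye(len(X))
posterior_sample = fs + rbf(Xs, X) @ np.linalg.solve(Kxx, y - fX)
```

By construction the corrected sample interpolates the observations exactly in the noiseless case, while away from the data it retains the prior sample's variability.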

Matérn Gaussian Processes on Graphs

no code implementations • 29 Oct 2020 • Viacheslav Borovitskiy, Iskander Azangulov, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth, Nicolas Durrande

Gaussian processes are a versatile framework for learning unknown functions in a manner that permits one to utilize prior information about their properties.

Gaussian Processes
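On a graph, this line of work replaces the Laplace operator in the Matérn construction with the graph Laplacian. A hedged sketch under the assumed spectral form K ∝ (2ν/κ² I + L)⁻ᵛ (the scaling and parameterisation here are illustrative, not the paper's exact convention):

```python
import numpy as np

def graph_matern_kernel(L, nu=2.0, kappa=1.0):
    """Matérn-type kernel on a graph with Laplacian L, using the assumed
    spectral form K ~ (2*nu/kappa^2 * I + L)^(-nu)."""
    lam, Phi = np.linalg.eigh(L)                 # graph Fourier basis
    spec = (2 * nu / kappa**2 + lam) ** (-nu)    # spectral density of the kernel
    K = Phi @ np.diag(spec) @ Phi.T
    return K / np.mean(np.diag(K))               # normalise average variance to 1

# Laplacian of a 5-node path graph.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
K = graph_matern_kernel(L)
```

The resulting matrix is symmetric positive semi-definite, and covariance decays with graph distance, mirroring how Euclidean Matérn kernels decay with Euclidean distance.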

A Foliated View of Transfer Learning

no code implementations • 2 Aug 2020 • Janith Petangoda, Nick A. M. Monk, Marc Peter Deisenroth

Transfer learning considers a learning process where a new task is solved by transferring relevant knowledge from known solutions to related tasks.

Transfer Learning

Probabilistic Active Meta-Learning

1 code implementation • NeurIPS 2020 • Jean Kaddour, Steindór Sæmundsson, Marc Peter Deisenroth

However, this setting does not take into account the sequential nature that naturally arises when training a model from scratch in real-life: how do we collect a set of training tasks in a data-efficient manner?

Meta-Learning

Estimating Barycenters of Measures in High Dimensions

no code implementations • 14 Jul 2020 • Samuel Cohen, Michael Arbel, Marc Peter Deisenroth

Barycentric averaging is a principled way of summarizing populations of measures.

Vocal Bursts Intensity Prediction

Stochastic Differential Equations with Variational Wishart Diffusions

1 code implementation • ICML 2020 • Martin Jørgensen, Marc Peter Deisenroth, Hugh Salimbeni

We present a Bayesian non-parametric way of inferring stochastic differential equations for both regression tasks and continuous-time dynamical modelling.

Regression

Aligning Time Series on Incomparable Spaces

1 code implementation • 22 Jun 2020 • Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, Marc Peter Deisenroth

Dynamic time warping (DTW) is a useful method for aligning, comparing and combining time series, but it requires them to live in comparable spaces.

Dynamic Time Warping • Imitation Learning +2
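The comparable-spaces limitation mentioned above concerns standard DTW, whose dynamic program assumes a cost between individual elements of the two series. A minimal sketch of that baseline (absolute-difference cost, no windowing):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # warping absorbs the repeat: 0.0
```

Because the cost `abs(a_i - b_j)` needs both series to live in the same space, series on incomparable spaces require the kind of generalisation this paper proposes.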

Matérn Gaussian processes on Riemannian manifolds

1 code implementation • NeurIPS 2020 • Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth

Gaussian processes are an effective model class for learning unknown functions, particularly in settings where accurately representing predictive uncertainty is of key importance.

Gaussian Processes

Efficiently Sampling Functions from Gaussian Process Posteriors

5 code implementations • ICML 2020 • James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth

Gaussian processes are the gold standard for many real-world modeling problems, especially in cases where a model's success hinges upon its ability to faithfully represent predictive uncertainty.

Gaussian Processes

Variational Integrator Networks for Physically Structured Embeddings

1 code implementation • 21 Oct 2019 • Steindór Sæmundsson, Alexander Terenin, Katja Hofmann, Marc Peter Deisenroth

Learning workable representations of dynamical systems is becoming an increasingly important problem in a number of application areas.

Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms

no code implementations • 13 May 2019 • K. S. Sesh Kumar, Marc Peter Deisenroth

This is the first work that analyzes the dual optimization problems of risk minimization problems in the context of differential privacy.

GPdoemd: a Python package for design of experiments for model discrimination

1 code implementation • 5 Oct 2018 • Simon Olofsson, Lukas Hebing, Sebastian Niedenführ, Marc Peter Deisenroth, Ruth Misener

Given rival mathematical models and an initial experimental data set, optimal design of experiments suggests maximally informative experimental observations that maximise a design criterion weighted by prediction uncertainty.

Maximizing acquisition functions for Bayesian optimization

1 code implementation • NeurIPS 2018 • James T. Wilson, Frank Hutter, Marc Peter Deisenroth

Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process.

Bayesian Optimization

Meta Reinforcement Learning with Latent Variable Gaussian Processes

no code implementations • 20 Mar 2018 • Steindór Sæmundsson, Katja Hofmann, Marc Peter Deisenroth

Learning from small data sets is critical in many practical applications where data collection is time-consuming or expensive, e.g., robotics, animal experiments or drug design.

Gaussian Processes • Meta-Learning +5

Design of Experiments for Model Discrimination Hybridising Analytical and Data-Driven Approaches

no code implementations • ICML 2018 • Simon Olofsson, Marc Peter Deisenroth, Ruth Misener

Healthcare companies must submit pharmaceutical drugs or medical devices to regulatory bodies before marketing new technology.

Marketing

Hybed: Hyperbolic Neural Graph Embedding

no code implementations • ICLR 2018 • Benjamin Paul Chamberlain, James R. Clough, Marc Peter Deisenroth

Neural embeddings have been used with great success in Natural Language Processing (NLP) where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks.

Graph Embedding • Word Similarity
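Hyperbolic embeddings of this kind typically live in the Poincaré ball, whose geodesic distance is the standard formula below (illustrative of the geometry, not necessarily this paper's exact model):

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between points u, v inside the unit Poincaré ball."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    duv = np.dot(u - v, u - v)
    return np.arccosh(1 + 2 * duv / ((1 - uu) * (1 - vv)))

origin = np.zeros(2)
mid = np.array([0.5, 0.0])
near_boundary = np.array([0.95, 0.0])
# Equal Euclidean steps cost ever more hyperbolic distance near the boundary,
# which is what lets tree-like graphs embed with low distortion.
print(poincare_distance(origin, mid), poincare_distance(mid, near_boundary))
```

Volume in hyperbolic space grows exponentially with radius, so the boundary region offers the "room" that hierarchies and tree-like graphs need.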

The reparameterization trick for acquisition functions

1 code implementation • 1 Dec 2017 • James T. Wilson, Riccardo Moriconi, Frank Hutter, Marc Peter Deisenroth

Bayesian optimization is a sample-efficient approach to solving global optimization problems.

Bayesian Optimization
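The trick in this entry rewrites an acquisition expectation over the GP posterior as an expectation over standard normals, making Monte Carlo estimates differentiable in the posterior parameters. A minimal single-point expected-improvement sketch (illustrative, not the paper's full multi-point method):

```python
import math
import numpy as np

def mc_expected_improvement(mu, sigma, best, n_samples=200_000, seed=0):
    """Monte Carlo EI at one point via the reparameterization
    f = mu + sigma * eps, eps ~ N(0, 1); differentiable w.r.t. mu, sigma."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_samples)
    f = mu + sigma * eps
    return np.mean(np.maximum(f - best, 0.0))

def closed_form_ei(mu, sigma, best):
    """Analytic EI for a Gaussian posterior (maximization convention)."""
    z = (mu - best) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (mu - best) * cdf + sigma * pdf

print(mc_expected_improvement(0.5, 1.0, 0.0), closed_form_ei(0.5, 1.0, 0.0))
```

The estimate converges to the closed form, and because `mu` and `sigma` appear inside a smooth transformation of fixed noise, gradients can flow through the samples.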

A Brief Survey of Deep Reinforcement Learning

no code implementations • 19 Aug 2017 • Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath

Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world.

Reinforcement Learning (RL)

Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control

1 code implementation • 20 Jun 2017 • Sanket Kamthe, Marc Peter Deisenroth

Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks.

Gaussian Processes • Model Predictive Control +3

Identification of Gaussian Process State Space Models

no code implementations • NeurIPS 2017 • Stefanos Eleftheriadis, Thomas F. W. Nicholson, Marc Peter Deisenroth, James Hensman

To address this challenge, we impose a structured Gaussian variational posterior distribution over the latent states, which is parameterised by a recognition model in the form of a bi-directional recurrent neural network.

Neural Embeddings of Graphs in Hyperbolic Space

no code implementations • 29 May 2017 • Benjamin Paul Chamberlain, James Clough, Marc Peter Deisenroth

Neural embeddings have been used with great success in Natural Language Processing (NLP).

Word Similarity

Customer Lifetime Value Prediction Using Embeddings

no code implementations • 7 Mar 2017 • Benjamin Paul Chamberlain, Angelo Cardoso, C. H. Bryan Liu, Roberto Pagliari, Marc Peter Deisenroth

We detail the system deployed at ASOS and show that learning feature representations is a promising extension to the state of the art in CLTV modelling.

Marketing • Value Prediction

Accelerating the BSM interpretation of LHC data with machine learning

no code implementations • 8 Nov 2016 • Gianfranco Bertone, Marc Peter Deisenroth, Jong Soo Kim, Sebastian Liem, Roberto Ruiz de Austri, Max Welling

The interpretation of Large Hadron Collider (LHC) data in the framework of Beyond the Standard Model (BSM) theories is hampered by the need to run computationally expensive event generators and detector simulators.

BIG-bench Machine Learning

Probabilistic Inference of Twitter Users' Age based on What They Follow

no code implementations • 18 Jan 2016 • Benjamin Paul Chamberlain, Clive Humby, Marc Peter Deisenroth

Enhancing Twitter data with user ages would advance our ability to study social network structures, information flows and the spread of contagions.

Real-Time Community Detection in Large Social Networks on a Laptop

1 code implementation • 15 Jan 2016 • Benjamin Paul Chamberlain, Josh Levy-Kramer, Clive Humby, Marc Peter Deisenroth

For a broad range of research, governmental and commercial applications it is important to understand the allegiances, communities and structure of key players in society.

Community Detection • Distributed Computing

Bayesian Optimization with Dimension Scheduling: Application to Biological Systems

no code implementations • 17 Nov 2015 • Doniyor Ulmasov, Caroline Baroukh, Benoit Chachuat, Marc Peter Deisenroth, Ruth Misener

But experiments may be less expensive than BO methods assume: In some simulation models, we may be able to conduct multiple thousands of experiments in a few hours, and the computational burden of BO is no longer negligible compared to experimentation time.

Bayesian Optimization • Scheduling

Data-Efficient Learning of Feedback Policies from Image Pixels using Deep Dynamical Models

no code implementations • 8 Oct 2015 • John-Alexander M. Assael, Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth

We consider a particularly important instance of this challenge, the pixels-to-torques problem, where an RL agent learns a closed-loop control policy ("torques") from pixel information only.

Model-based Reinforcement Learning • Model Predictive Control +2

Gaussian Processes for Data-Efficient Learning in Robotics and Control

1 code implementation • 10 Feb 2015 • Marc Peter Deisenroth, Dieter Fox, Carl Edward Rasmussen

Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning makes it possible to reduce the amount of engineering knowledge that is otherwise required.

Gaussian Processes • Reinforcement Learning (RL)

Distributed Gaussian Processes

no code implementations • 10 Feb 2015 • Marc Peter Deisenroth, Jun Wei Ng

To scale Gaussian processes (GPs) to large data sets we introduce the robust Bayesian Committee Machine (rBCM), a practical and scalable product-of-experts model for large-scale distributed GP regression.

Gaussian Processes • Regression
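At a single test input, the rBCM merges the experts' Gaussian predictions by precision weighting with a prior correction. A hedged sketch of that combination rule, assuming the differential-entropy weight β_k = ½(log prior_var − log var_k) (check the paper before relying on this exact parameterisation):

```python
import numpy as np

def rbcm_combine(means, variances, prior_var=1.0):
    """Combine independent Gaussian expert predictions at one test input
    with a robust-BCM-style rule (sketch; zero prior mean assumed)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    # Entropy-based weight: confident experts (small variance) count more.
    beta = 0.5 * (np.log(prior_var) - np.log(variances))
    # Precision-weighted fusion, corrected so the prior is not over-counted.
    precision = np.sum(beta / variances) + (1.0 - np.sum(beta)) / prior_var
    var = 1.0 / precision
    mean = var * np.sum(beta * means / variances)
    return mean, var

# Three confident, agreeing experts sharpen the prediction...
m, v = rbcm_combine([1.0, 1.0, 1.0], [0.1, 0.1, 0.1])
# ...while an expert whose variance equals the prior gets beta = 0 and is
# ignored, leaving the prior untouched.
m2, v2 = rbcm_combine([5.0], [1.0], prior_var=1.0)
```

The β weights are what make the committee "robust": experts that have seen no relevant data predict at the prior and automatically drop out of the product.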

From Pixels to Torques: Policy Learning with Deep Dynamical Models

no code implementations • 8 Feb 2015 • Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth

In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only.

Model-based Reinforcement Learning • Model Predictive Control +2

Hierarchical Mixture-of-Experts Model for Large-Scale Gaussian Process Regression

no code implementations • 9 Dec 2014 • Jun Wei Ng, Marc Peter Deisenroth

We propose a practical and scalable Gaussian process model for large-scale nonlinear probabilistic regression.

Regression

Learning deep dynamical models from image pixels

no code implementations • 28 Oct 2014 • Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth

In particular, we jointly learn a low-dimensional embedding of the observation by means of deep auto-encoders and a predictive transition model in this low-dimensional space.

Manifold Gaussian Processes for Regression

1 code implementation • 24 Feb 2014 • Roberto Calandra, Jan Peters, Carl Edward Rasmussen, Marc Peter Deisenroth

This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task.

Gaussian Processes • Regression

Multi-Task Policy Search

no code implementations • 2 Jul 2013 • Marc Peter Deisenroth, Peter Englert, Jan Peters, Dieter Fox

Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics.

Imitation Learning • Reinforcement Learning +1

Expectation Propagation in Gaussian Process Dynamical Systems: Extended Version

no code implementations • NeurIPS 2012 • Marc Peter Deisenroth, Shakir Mohamed

Rich and complex time-series data, such as those generated from engineering systems, financial markets, videos or neural recordings, are now a common feature of modern data analysis.

Time Series • Time Series Analysis

Robust Filtering and Smoothing with Gaussian Processes

no code implementations • 20 Mar 2012 • Marc Peter Deisenroth, Ryan Turner, Marco F. Huber, Uwe D. Hanebeck, Carl Edward Rasmussen

We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models.

Gaussian Processes

A Probabilistic Perspective on Gaussian Filtering and Smoothing

1 code implementation • 10 Jun 2010 • Marc Peter Deisenroth, Henrik Ohlsson

We present a general probabilistic perspective on Gaussian filtering and smoothing.
