no code implementations • 26 Feb 2024 • Rafael Anderka, Marc Peter Deisenroth, So Takao
Data assimilation (DA) methods use priors arising from differential equations to robustly interpolate and extrapolate data.
no code implementations • 10 Nov 2023 • Vignesh Gopakumar, Stanislas Pamela, Lorenzo Zanisi, Zongyi Li, Ander Gray, Daniel Brennand, Nitesh Bhatia, Gregory Stathopoulos, Matt Kusner, Marc Peter Deisenroth, Anima Anandkumar, JOREK Team, MAST Team
Predicting plasma evolution within a Tokamak reactor is crucial to realizing the goal of sustainable fusion.
no code implementations • 2 Nov 2023 • Mathieu Alain, So Takao, Brooks Paige, Marc Peter Deisenroth
In this paper, we go beyond this dyadic setting and consider polyadic relations that include interactions between vertices, edges and one of their generalisations, known as cells.
1 code implementation • 2 Sep 2023 • Lucas Cosier, Rares Iordan, Sicelukwanda Zwane, Giovanni Franzese, James T. Wilson, Marc Peter Deisenroth, Alexander Terenin, Yasemin Bekiroglu
To control how a robot moves, motion planning algorithms must compute paths in high-dimensional state spaces while accounting for physical constraints related to motors and joints, generating smooth and stable motions, avoiding obstacles, and preventing collisions.
1 code implementation • 21 Aug 2023 • Alexander Norcliffe, Marc Peter Deisenroth
In this paper, we propose an alternative way to speed up the training of neural ODEs.
no code implementations • 15 Aug 2023 • Ahmet Tekden, Marc Peter Deisenroth, Yasemin Bekiroglu
This work addresses the problem of transferring a grasp experience or a demonstration to a novel object that shares shape similarities with objects the robot has previously encountered.
no code implementations • 20 Jul 2023 • Ilana Sebag, Samuel Cohen, Marc Peter Deisenroth
One of the key approaches to IL is to define a distance between agent and expert and to find an agent policy that minimizes that distance.
no code implementations • 11 Jul 2023 • Mihaela Rosca, Marc Peter Deisenroth
In this work, we provide a novel approach to use BEA, and show how our approach can be used to construct continuous-time flows with vector fields that can be written as gradients.
no code implementations • 9 Jul 2023 • Rares Iordan, Marc Peter Deisenroth, Mihaela Rosca
Recent progress in understanding the optimisation dynamics of neural networks trained with full-batch gradient descent with momentum has come from uncovering the edge-of-stability phenomenon in supervised learning.
1 code implementation • 11 Apr 2023 • Harry Jake Cunningham, Daniel Augusto de Souza, So Takao, Mark van der Wilk, Marc Peter Deisenroth
For large datasets, sparse GPs reduce these demands by conditioning on a small set of inducing variables designed to summarise the data.
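As a rough illustration of how a small set of inducing variables can summarise a large dataset, the sketch below builds the Nyström-style low-rank kernel approximation that underlies many sparse GP methods. The kernel choice, lengthscale, and the `nystroem_approx` helper are illustrative assumptions, not the paper's method.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def nystroem_approx(x, z):
    """Low-rank approximation K_xx ~ K_xz K_zz^{-1} K_zx via inducing inputs z."""
    k_xz = rbf_kernel(x, z)
    k_zz = rbf_kernel(z, z) + 1e-8 * np.eye(len(z))  # jitter for stability
    return k_xz @ np.linalg.solve(k_zz, k_xz.T)

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
z = x[::20]  # 10 inducing inputs standing in for 200 data points
k_approx = nystroem_approx(x, z)
k_exact = rbf_kernel(x, x)
```

Because the residual `k_exact - k_approx` is a Schur complement of a positive semi-definite matrix, the approximation never overestimates the diagonal, which is why inducing-point methods give conservative variance estimates.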
no code implementations • 30 Mar 2023 • Yicheng Luo, Jackie Kay, Edward Grefenstette, Marc Peter Deisenroth
While offline RL algorithms can in principle be used for finetuning, in practice, their online performance improves slowly.
no code implementations • 29 Mar 2023 • Organizers Of QueerInAI, Anaelia Ovalle, Arjun Subramonian, Ashwin Singh, Claas Voelcker, Danica J. Sutherland, Davide Locatelli, Eva Breznik, Filip Klubička, Hang Yuan, Hetvi J, Huan Zhang, Jaidev Shriram, Kruno Lehman, Luca Soldaini, Maarten Sap, Marc Peter Deisenroth, Maria Leonor Pacheco, Maria Ryskina, Martin Mundt, Milind Agarwal, Nyx McLean, Pan Xu, A Pranav, Raj Korpan, Ruchira Ray, Sarah Mathew, Sarthak Arora, ST John, Tanvi Anand, Vishakha Agrawal, William Agnew, Yanan Long, Zijie J. Wang, Zeerak Talat, Avijit Ghosh, Nathaniel Dennler, Michael Noseworthy, Sharvani Jha, Emi Baylor, Aditya Joshi, Natalia Y. Bilenko, Andrew McNamara, Raphael Gontijo-Lopes, Alex Markham, Evyn Dǒng, Jackie Kay, Manu Saraswat, Nikhil Vytla, Luke Stark
We present Queer in AI as a case study for community-led participatory design in AI.
1 code implementation • 24 Mar 2023 • Yicheng Luo, Zhengyao Jiang, Samuel Cohen, Edward Grefenstette, Marc Peter Deisenroth
In this paper, we introduce Optimal Transport Reward labeling (OTR), an algorithm that assigns rewards to offline trajectories using a few high-quality demonstrations.
no code implementations • 1 Feb 2023 • Sean Nassimiha, Peter Dudfield, Jack Kelly, Marc Peter Deisenroth, So Takao
Short-term forecasting of solar photovoltaic (PV) energy production is important for power plant management.
1 code implementation • 15 Sep 2022 • Denis Hadjivelichkov, Sicelukwanda Zwane, Marc Peter Deisenroth, Lourdes Agapito, Dimitrios Kanoulas
In this work, we tackle one-shot visual search of object parts.
no code implementations • NeurIPS 2021 • Michael Hutchinson, Alexander Terenin, Viacheslav Borovitskiy, So Takao, Yee Whye Teh, Marc Peter Deisenroth
Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating construction of optimal decision-making systems.
no code implementations • 22 Oct 2021 • Vu Nguyen, Marc Peter Deisenroth, Michael A. Osborne
More specifically, we propose the first use of such bounds to improve Gaussian process (GP) posterior sampling and Bayesian optimization (BO).
no code implementations • 29 Sep 2021 • Samuel Cohen, Brandon Amos, Marc Peter Deisenroth, Mikael Henaff, Eugene Vinitsky, Denis Yarats
In this setting, we explore recipes for imitation learning based on adversarial learning and optimal transport.
no code implementations • 22 Jul 2021 • Janith Petangoda, Marc Peter Deisenroth, Nicholas A. M. Monk
Learning to transfer considers learning solutions to tasks in such a way that relevant knowledge can be transferred from known task solutions to new, related tasks.
1 code implementation • 26 May 2021 • Michelangelo Conserva, Marc Peter Deisenroth, K. S. Sesh Kumar
Many algorithms for ranked data become computationally intractable as the number of objects grows due to the complex geometric structure induced by rankings.
1 code implementation • 22 Feb 2021 • Andreas Hochlehnert, Alexander Terenin, Steindór Sæmundsson, Marc Peter Deisenroth
Learning physically structured representations of dynamical systems that include contact between different objects is an important problem for learning-based approaches in robotics.
1 code implementation • 14 Feb 2021 • Samuel Cohen, Rendani Mbuvha, Tshilidzi Marwala, Marc Peter Deisenroth
Gaussian processes (GPs) are nonparametric Bayesian models that have been applied to regression and classification problems.
no code implementations • 14 Feb 2021 • Samuel Cohen, Alexander Terenin, Yannik Pitcan, Brandon Amos, Marc Peter Deisenroth, K. S. Sesh Kumar
To construct this distance, we introduce a characterization of the one-dimensional multi-marginal Kantorovich problem and use it to highlight a number of properties of the sliced multi-marginal Wasserstein distance.
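For intuition, the one-dimensional structure is what makes sliced distances cheap: on the line, optimal transport between equal-sized empirical measures reduces to sorting. The sketch below estimates only the standard two-marginal sliced Wasserstein distance, not the paper's multi-marginal construction, and the function name is a hypothetical choice.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=100, p=2, seed=0):
    """Monte Carlo estimate of the (two-marginal) sliced Wasserstein-p
    distance between empirical measures with equal sample counts."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=x.shape[1])
        theta /= np.linalg.norm(theta)           # random direction on the sphere
        xp, yp = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean(np.abs(xp - yp) ** p)   # closed-form 1-D OT via sorting
    return (total / n_projections) ** (1.0 / p)

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 3))
y = rng.normal(size=(64, 3)) + 2.0  # shifted copy of the same distribution
d_xy = sliced_wasserstein(x, y)
```

Each projection costs only a sort, so the estimator scales near-linearly in the number of samples, which is the computational appeal of slicing.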
no code implementations • 7 Feb 2021 • Simon Olofsson, Eduardo S. Schultz, Adel Mhamdi, Alexander Mitsos, Marc Peter Deisenroth, Ruth Misener
Typically, several rival mechanistic models can explain the available data, so design of dynamic experiments for model discrimination helps optimally collect additional data by finding experimental settings that maximise model prediction divergence.
no code implementations • 6 Jan 2021 • Linh Tran, Maja Pantic, Marc Peter Deisenroth
To perform efficient inference for GMM priors, we introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
1 code implementation • 14 Nov 2020 • Daniel Lengyel, Janith Petangoda, Isak Falk, Kate Highnam, Michalis Lazarou, Arinbjörn Kolbeinsson, Marc Peter Deisenroth, Nicholas R. Jennings
We propose an efficient algorithm to visualise symmetries in neural networks.
2 code implementations • 8 Nov 2020 • James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth
As Gaussian processes are used to answer increasingly complex questions, analytic solutions become scarcer and scarcer.
no code implementations • 29 Oct 2020 • Viacheslav Borovitskiy, Iskander Azangulov, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth, Nicolas Durrande
Gaussian processes are a versatile framework for learning unknown functions in a manner that permits one to utilize prior information about their properties.
no code implementations • 2 Aug 2020 • Janith Petangoda, Nick A. M. Monk, Marc Peter Deisenroth
Transfer learning considers a learning process where a new task is solved by transferring relevant knowledge from known solutions to related tasks.
1 code implementation • NeurIPS 2020 • Jean Kaddour, Steindór Sæmundsson, Marc Peter Deisenroth
However, this setting does not take into account the sequential nature that naturally arises when training a model from scratch in real-life: how do we collect a set of training tasks in a data-efficient manner?
no code implementations • 14 Jul 2020 • Samuel Cohen, Michael Arbel, Marc Peter Deisenroth
Barycentric averaging is a principled way of summarizing populations of measures.
1 code implementation • ICML 2020 • Martin Jørgensen, Marc Peter Deisenroth, Hugh Salimbeni
We present a Bayesian non-parametric way of inferring stochastic differential equations for both regression tasks and continuous-time dynamical modelling.
1 code implementation • 22 Jun 2020 • Samuel Cohen, Giulia Luise, Alexander Terenin, Brandon Amos, Marc Peter Deisenroth
Dynamic time warping (DTW) is a useful method for aligning, comparing and combining time series, but it requires them to live in comparable spaces.
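For reference, when the series do live in comparable spaces, DTW is a short dynamic programme. This is a textbook sketch of standard DTW with illustrative names, not the paper's extension to incomparable spaces.

```python
import numpy as np

def dtw_distance(a, b):
    """Textbook dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # A step may advance either sequence or both (the "warp").
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Warping absorbs repeated values: these two sequences align perfectly.
d0 = dtw_distance([0, 0, 1, 1], [0, 1])
```

Note that the pointwise cost `abs(a[i-1] - b[j-1])` is exactly where comparability is assumed; it is this term that stops applying when the series live in different spaces.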
1 code implementation • NeurIPS 2020 • Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth
Gaussian processes are an effective model class for learning unknown functions, particularly in settings where accurately representing predictive uncertainty is of key importance.
5 code implementations • ICML 2020 • James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Peter Deisenroth
Gaussian processes are the gold standard for many real-world modeling problems, especially in cases where a model's success hinges upon its ability to faithfully represent predictive uncertainty.
1 code implementation • 21 Oct 2019 • Steindór Sæmundsson, Alexander Terenin, Katja Hofmann, Marc Peter Deisenroth
Learning workable representations of dynamical systems is becoming an increasingly important problem in a number of application areas.
1 code implementation • 14 May 2019 • Hugh Salimbeni, Vincent Dutordoir, James Hensman, Marc Peter Deisenroth
Deep Gaussian processes (DGPs) can model complex marginal densities as well as complex mappings.
no code implementations • 13 May 2019 • K. S. Sesh Kumar, Marc Peter Deisenroth
This is the first work that analyzes the dual optimization problems of risk minimization problems in the context of differential privacy.
1 code implementation • 5 Oct 2018 • Simon Olofsson, Lukas Hebing, Sebastian Niedenführ, Marc Peter Deisenroth, Ruth Misener
Given rival mathematical models and an initial experimental data set, optimal design of experiments suggests maximally informative experimental observations that maximise a design criterion weighted by prediction uncertainty.
1 code implementation • NeurIPS 2018 • James T. Wilson, Frank Hutter, Marc Peter Deisenroth
Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process.
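As a concrete example of an acquisition function, here is a minimal expected-improvement sketch for minimisation under a Gaussian predictive distribution; the `xi` exploration parameter and the function name are illustrative assumptions, not this paper's specific heuristic.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """Expected improvement (minimisation form) of a Gaussian predictive
    distribution N(mu, sigma^2) over the incumbent best observed value."""
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty, no expected improvement
    z = (best - mu - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu - xi) * cdf + sigma * pdf
```

The two terms trade off exploitation (a low predicted mean relative to the incumbent) against exploration (high predictive uncertainty), which is the value-heuristic role the abstract refers to.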
no code implementations • 20 Mar 2018 • Steindór Sæmundsson, Katja Hofmann, Marc Peter Deisenroth
Learning from small data sets is critical in many practical applications where data collection is time-consuming or expensive, e.g., robotics, animal experiments or drug design.
no code implementations • ICML 2018 • Simon Olofsson, Marc Peter Deisenroth, Ruth Misener
Healthcare companies must submit pharmaceutical drugs or medical devices to regulatory bodies before marketing new technology.
no code implementations • ICLR 2018 • Benjamin Paul Chamberlain, James R. Clough, Marc Peter Deisenroth
Neural embeddings have been used with great success in Natural Language Processing (NLP) where they provide compact representations that encapsulate word similarity and attain state-of-the-art performance in a range of linguistic tasks.
1 code implementation • 1 Dec 2017 • James T. Wilson, Riccardo Moriconi, Frank Hutter, Marc Peter Deisenroth
Bayesian optimization is a sample-efficient approach to solving global optimization problems.
no code implementations • 19 Aug 2017 • Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil Anthony Bharath
Deep reinforcement learning is poised to revolutionise the field of AI and represents a step towards building autonomous systems with a higher level understanding of the visual world.
1 code implementation • 20 Jun 2017 • Sanket Kamthe, Marc Peter Deisenroth
Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks.
no code implementations • NeurIPS 2017 • Stefanos Eleftheriadis, Thomas F. W. Nicholson, Marc Peter Deisenroth, James Hensman
To address this challenge, we impose a structured Gaussian variational posterior distribution over the latent states, which is parameterised by a recognition model in the form of a bi-directional recurrent neural network.
no code implementations • 29 May 2017 • Benjamin Paul Chamberlain, James Clough, Marc Peter Deisenroth
Neural embeddings have been used with great success in Natural Language Processing (NLP).
no code implementations • 7 Mar 2017 • Benjamin Paul Chamberlain, Angelo Cardoso, C. H. Bryan Liu, Roberto Pagliari, Marc Peter Deisenroth
We detail the system deployed at ASOS and show that learning feature representations is a promising extension to the state of the art in CLTV modelling.
no code implementations • 8 Nov 2016 • Gianfranco Bertone, Marc Peter Deisenroth, Jong Soo Kim, Sebastian Liem, Roberto Ruiz de Austri, Max Welling
The interpretation of Large Hadron Collider (LHC) data in the framework of Beyond the Standard Model (BSM) theories is hampered by the need to run computationally expensive event generators and detector simulators.
no code implementations • 18 Jan 2016 • Benjamin Paul Chamberlain, Clive Humby, Marc Peter Deisenroth
Enhancing Twitter data with user ages would advance our ability to study social network structures, information flows and the spread of contagions.
1 code implementation • 15 Jan 2016 • Benjamin Paul Chamberlain, Josh Levy-Kramer, Clive Humby, Marc Peter Deisenroth
For a broad range of research, governmental and commercial applications it is important to understand the allegiances, communities and structure of key players in society.
no code implementations • 17 Nov 2015 • Doniyor Ulmasov, Caroline Baroukh, Benoit Chachuat, Marc Peter Deisenroth, Ruth Misener
But experiments may be less expensive than BO methods assume: in some simulation models, we can run many thousands of experiments within a few hours, so the computational burden of BO is no longer negligible compared to the experimentation time.
no code implementations • 8 Oct 2015 • John-Alexander M. Assael, Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth
We consider a particularly important instance of this challenge, the pixels-to-torques problem, where an RL agent learns a closed-loop control policy ("torques") from pixel information only.
1 code implementation • 10 Feb 2015 • Marc Peter Deisenroth, Dieter Fox, Carl Edward Rasmussen
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required.
no code implementations • 10 Feb 2015 • Marc Peter Deisenroth, Jun Wei Ng
To scale Gaussian processes (GPs) to large data sets we introduce the robust Bayesian Committee Machine (rBCM), a practical and scalable product-of-experts model for large-scale distributed GP regression.
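As a sketch of the product-of-experts idea, the rBCM combines independent GP experts' predictions at a test input with differential-entropy weights, falling back to the prior when no expert is informative. The code assumes a zero prior mean and scalar predictions; the names are illustrative.

```python
import numpy as np

def rbcm_combine(means, variances, prior_var):
    """Aggregate independent GP expert predictions at one test input
    using the robust Bayesian Committee Machine rule (zero prior mean)."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    # Differential-entropy weights: experts that reduce uncertainty
    # relative to the prior count more; uninformative experts get weight 0.
    beta = 0.5 * (np.log(prior_var) - np.log(variances))
    precision = np.sum(beta / variances) + (1.0 - np.sum(beta)) / prior_var
    var = 1.0 / precision
    mean = var * np.sum(beta * means / variances)
    return mean, var
```

The correction term `(1 - sum(beta)) / prior_var` is what makes the combination "robust": an expert whose predictive variance equals the prior variance contributes nothing, so far-from-data experts cannot drag the committee away from the prior.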
no code implementations • 8 Feb 2015 • Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth
In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only.
no code implementations • 9 Dec 2014 • Jun Wei Ng, Marc Peter Deisenroth
We propose a practical and scalable Gaussian process model for large-scale nonlinear probabilistic regression.
no code implementations • 28 Oct 2014 • Niklas Wahlström, Thomas B. Schön, Marc Peter Deisenroth
In particular, we jointly learn a low-dimensional embedding of the observation by means of deep auto-encoders and a predictive transition model in this low-dimensional space.
1 code implementation • 24 Feb 2014 • Roberto Calandra, Jan Peters, Carl Edward Rasmussen, Marc Peter Deisenroth
This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task.
no code implementations • 2 Jul 2013 • Marc Peter Deisenroth, Peter Englert, Jan Peters, Dieter Fox
Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics.
no code implementations • NeurIPS 2012 • Marc Peter Deisenroth, Shakir Mohamed
Rich and complex time-series data, such as those generated from engineering systems, financial markets, videos or neural recordings, are now a common feature of modern data analysis.
no code implementations • 20 Mar 2012 • Marc Peter Deisenroth, Ryan Turner, Marco F. Huber, Uwe D. Hanebeck, Carl Edward Rasmussen
We propose a principled algorithm for robust Bayesian filtering and smoothing in nonlinear stochastic dynamic systems when both the transition function and the measurement function are described by non-parametric Gaussian process (GP) models.
1 code implementation • 10 Jun 2010 • Marc Peter Deisenroth, Henrik Ohlsson
We present a general probabilistic perspective on Gaussian filtering and smoothing.