no code implementations • 9 Feb 2023 • Adeline Fermanian, Terry Lyons, James Morrill, Cristopher Salvi
This article provides a concise overview of some of the recent advances in the application of rough path theory to machine learning.
2 code implementations • 21 Jun 2021 • James Morrill, Patrick Kidger, Lingyi Yang, Terry Lyons
This is fine when the whole time series is observed in advance, but means that Neural CDEs are not suitable for use in online prediction tasks, where predictions need to be made in real time: a major use case for recurrent networks.
no code implementations • 28 Sep 2020 • James Morrill, Patrick Kidger, Cristopher Salvi, James Foster, Terry Lyons
Neural Controlled Differential Equations (Neural CDEs) are the continuous-time analogue of an RNN, just as Neural ODEs are analogous to ResNets.
3 code implementations • 17 Sep 2020 • James Morrill, Cristopher Salvi, Patrick Kidger, James Foster, Terry Lyons
Neural controlled differential equations (CDEs) are the continuous-time analogue of recurrent neural networks, as Neural ODEs are to residual networks, and offer a memory-efficient continuous-time way to model functions of potentially irregular time series.
Ranked #4 on Time Series Classification on EigenWorms
1 code implementation • 1 Jun 2020 • James Morrill, Adeline Fermanian, Patrick Kidger, Terry Lyons
There is a great deal of flexibility as to how this method can be applied.
2 code implementations • 28 May 2020 • Patrick Kidger, James Morrill, Terry Lyons
The shapelet transform is a form of feature extraction for time series, in which a time series is described by its similarity to each of a collection of 'shapelets'.
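The core of that similarity computation can be sketched in a few lines: a shapelet is compared against every same-length window of the series, and the best (smallest) distance becomes one feature. This is a minimal illustrative sketch of the general idea, not the paper's implementation (which learns the shapelets by gradient descent); the function names and the Euclidean window distance here are assumptions for illustration.

```python
import numpy as np

def shapelet_similarity(series, shapelet):
    """Minimum Euclidean distance between a shapelet and every
    same-length sliding window of the series (smaller = more similar)."""
    windows = np.lib.stride_tricks.sliding_window_view(series, len(shapelet))
    return np.min(np.linalg.norm(windows - shapelet, axis=1))

def shapelet_transform(series, shapelets):
    # Feature vector: one similarity score per shapelet.
    return np.array([shapelet_similarity(series, s) for s in shapelets])

# A series containing a shapelet exactly scores a distance of zero for it.
series = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
feats = shapelet_transform(series, [np.array([2.0, 3.0, 2.0]),
                                    np.array([5.0, 5.0, 5.0])])
```

A classifier is then trained on these feature vectors rather than on the raw series.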
5 code implementations • NeurIPS 2020 • Patrick Kidger, James Morrill, James Foster, Terry Lyons
The resulting neural controlled differential equation model is directly applicable to the general setting of partially observed, irregularly sampled multivariate time series, and (unlike previous work on this problem) it may utilise memory-efficient adjoint-based backpropagation even across observations.
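Since a neural CDE drives a hidden state by increments of the observation path (dz = f(z) dX), the mechanism can be sketched with a one-Euler-step-per-interval discretisation. This is a toy illustration under stated assumptions, not the paper's implementation: the vector field below uses fixed random weights in place of a trained network, and the paper uses proper ODE solvers with adjoint-based backpropagation rather than explicit Euler.

```python
import numpy as np

def neural_cde_euler(xs, f, z0):
    """One Euler step per inter-observation interval of dz = f(z) dX.
    Taking the control path X as the linear interpolation of the
    observations, each step only needs the increment dX between
    consecutive (possibly irregular) observations."""
    z = z0.copy()
    for k in range(len(xs) - 1):
        dx = xs[k + 1] - xs[k]   # increment of the control path, shape (channels,)
        z = z + f(z) @ dx        # f(z) has shape (hidden, channels)
    return z

# Toy vector field standing in for the trained network (hypothetical
# fixed random weights; in the actual model these are learned).
hidden, channels = 4, 2
rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(hidden * channels, hidden))

def f(z):
    return np.tanh(A @ z).reshape(hidden, channels)

ts = np.array([0.0, 0.3, 1.1, 2.0])        # irregular observation times
xs = np.stack([ts, np.sin(ts)], axis=1)    # channels: (time, observed value)
z = neural_cde_euler(xs, f, z0=np.ones(hidden))
```

Including time itself as a channel of the control path is what lets the hidden state evolve between observations; irregular sampling is handled for free, since only the increments between observations enter the update.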