no code implementations • 22 Feb 2024 • Massil Hihat, Guillaume Garrigos, Adeline Fermanian, Simon Bussy
In this paper, we consider a deterministic online linear regression model where we allow the responses to be multivariate.
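For intuition only (this is not the paper's algorithm), online linear regression with multivariate responses can be sketched as a recursive-least-squares learner that predicts each response matrix-linearly before seeing it; the function name `online_ridge`, the regularization `lam`, and the data layout are hypothetical choices for this sketch.

```python
import numpy as np

# Hypothetical sketch: online ridge regression with multivariate responses.
# At round t we predict y_t = W_t^T x_t before observing the true response,
# then update the parameter matrix via a recursive least-squares step.
def online_ridge(X, Y, lam=1.0):
    d, k = X.shape[1], Y.shape[1]
    A_inv = np.eye(d) / lam      # inverse of the regularized Gram matrix
    B = np.zeros((d, k))         # running sum of x_t y_t^T
    losses = []
    for x, y in zip(X, Y):
        W = A_inv @ B            # current ridge estimate
        y_hat = W.T @ x          # predict before seeing y
        losses.append(np.sum((y - y_hat) ** 2))
        # Sherman-Morrison rank-one update of A_inv with x x^T
        Ax = A_inv @ x
        A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        B += np.outer(x, y)
    return np.array(losses)
```

On noiseless data generated by a fixed matrix, the per-round squared loss of this sketch shrinks quickly, which is the behavior deterministic online regression bounds quantify.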
no code implementations • 5 Feb 2024 • Sobihan Surendran, Antoine Godichon-Baggioni, Adeline Fermanian, Sylvain Le Corff
This paper provides a comprehensive non-asymptotic analysis of SGD with biased gradients and adaptive steps for convex and non-convex smooth functions.
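As a toy illustration of the setting studied (not the paper's analysis or algorithm), the following sketch runs SGD on a smooth convex quadratic with a deliberately biased, noisy gradient oracle and Adagrad-style adaptive steps; the function name, the bias level, and all constants are assumptions of this sketch.

```python
import numpy as np

# Illustrative sketch: SGD with a biased gradient oracle and
# Adagrad-style coordinate-wise adaptive step sizes.
def biased_adagrad(grad, x0, n_steps=500, eta=0.5, bias=0.01, eps=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    acc = np.zeros_like(x)  # accumulated squared gradients
    for _ in range(n_steps):
        # oracle returns the true gradient plus a constant bias and noise
        g = grad(x) + bias + 0.1 * rng.normal(size=x.shape)
        acc += g ** 2
        x -= eta * g / (np.sqrt(acc) + eps)  # adaptive step
    return x

# quadratic f(x) = 0.5 ||x||^2, whose minimizer is the origin
x_star = biased_adagrad(lambda x: x, x0=np.ones(3))
```

With a small constant bias, the iterates settle in a neighborhood of the minimizer rather than converging exactly, which is the kind of effect a non-asymptotic analysis with biased gradients must account for.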
no code implementations • 30 Jan 2024 • Linus Bleistein, Van-Tuan Nguyen, Adeline Fermanian, Agathe Guilloux
We consider the task of learning individual-specific intensities of counting processes from a set of static variables and irregularly sampled time series.
no code implementations • 9 Feb 2023 • Adeline Fermanian, Terry Lyons, James Morrill, Cristopher Salvi
This article provides a concise overview of some of the recent advances in the application of rough path theory to machine learning.
1 code implementation • 27 Jan 2023 • Linus Bleistein, Adeline Fermanian, Anne-Sophie Jannot, Agathe Guilloux
We address the problem of learning the dynamics of an unknown non-parametric system linking a target and a feature time series.
1 code implementation • 14 Jun 2022 • Pierre Marion, Adeline Fermanian, Gérard Biau, Jean-Philippe Vert
For standard initializations, the only non-trivial dynamics occurs at $\alpha_L = 1/\sqrt{L}$ (other choices lead either to explosion or to an identity mapping).
1 code implementation • NeurIPS 2021 • Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau
Building on the interpretation of a recurrent neural network (RNN) as a continuous-time neural differential equation, we show, under appropriate conditions, that the solution of an RNN can be viewed as a linear function of a specific feature set of the input sequence, known as the signature.
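The signature feature set mentioned above is the sequence of iterated integrals of the path. A minimal sketch of its first two levels for a piecewise-linear path (sampled as an array of shape `(length, dim)`) can be written directly; the function name is hypothetical, and real applications would use a dedicated library and deeper truncations.

```python
import numpy as np

# Minimal sketch of the depth-2 path signature of a piecewise-linear path.
# Level 1 is the total increment; level 2 collects the iterated integrals
# S^{(i,j)} = integral of (X_s - X_0)^i dX_s^j, computed exactly per segment.
def signature_depth2(path):
    x0 = path[0]
    dX = np.diff(path, axis=0)              # increment of each linear piece
    level1 = path[-1] - x0
    level2 = np.zeros((path.shape[1],) * 2)
    for start, d in zip(path[:-1], dX):
        # exact iterated integral over one linear segment
        level2 += np.outer(start - x0, d) + 0.5 * np.outer(d, d)
    return level1, level2
```

A useful sanity check is the shuffle identity $S^{(i,j)} + S^{(j,i)} = S^{(i)} S^{(j)}$, which this computation satisfies exactly.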
1 code implementation • 15 Jun 2020 • Adeline Fermanian
We place ourselves in a functional regression setting and propose a novel methodology for regressing a real output on vector-valued functional covariates.
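A hedged sketch of this kind of pipeline (not the paper's exact estimator): each vector-valued functional covariate, observed as a sampled path, is summarized by a truncated signature (depth 2 here), and the real output is regressed linearly on these features; the function names and the least-squares fit are assumptions of this sketch.

```python
import numpy as np

# Hypothetical sketch: linear regression on truncated (depth-2) signature
# features of sampled paths of shape (length, dim).
def sig2_features(path):
    x0, dX = path[0], np.diff(path, axis=0)
    lvl1 = path[-1] - x0
    lvl2 = np.zeros((path.shape[1],) * 2)
    for s, d in zip(path[:-1], dX):
        lvl2 += np.outer(s - x0, d) + 0.5 * np.outer(d, d)
    return np.concatenate([[1.0], lvl1, lvl2.ravel()])  # constant + levels 1-2

def fit_signature_regression(paths, y):
    Phi = np.stack([sig2_features(p) for p in paths])   # feature matrix
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # least-squares fit
    return beta, Phi
```

Because the signature linearizes functionals of the path, an output that is truly a linear functional of low-order signature terms is recovered exactly by this least-squares step.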
Methodology (stat.ME) • MSC classes: 62R10 (Primary), 60L10 (Secondary)
1 code implementation • 1 Jun 2020 • James Morrill, Adeline Fermanian, Patrick Kidger, Terry Lyons
There is a great deal of flexibility as to how this method can be applied.
1 code implementation • 29 Nov 2019 • Adeline Fermanian
It is shown that a specific embedding, called lead-lag, is systematically the strongest performer across all datasets and algorithms considered.
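One common version of the lead-lag embedding turns a stream $(x_0, \dots, x_n)$ in $\mathbb{R}^d$ into a path in $\mathbb{R}^{2d}$ whose first block (the "lead") advances one step ahead of the second (the "lag"); the function name and the exact interleaving convention below are one standard choice, not necessarily the paper's.

```python
import numpy as np

# Sketch of the lead-lag embedding of a stream x of shape (n, d):
# the output path in R^{2d} alternates "lead moves first, then lag catches up".
def lead_lag(x):
    out = []
    for i in range(x.shape[0] - 1):
        out.append(np.concatenate([x[i], x[i]]))       # lead = lag = x_i
        out.append(np.concatenate([x[i + 1], x[i]]))   # lead moves first
    out.append(np.concatenate([x[-1], x[-1]]))         # final synchronization
    return np.array(out)
```

The interleaving makes the lead and lag coordinates trace out a staircase, so the area terms in the signature of the embedded path pick up the quadratic variation of the original stream.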