1 code implementation • 9 Mar 2021 • Anna Bonnet, Miguel Martinez Herrera, Maxime Sangnier
In this paper, we present a maximum likelihood method for estimating the parameters of a univariate Hawkes process with self-excitation or inhibition.
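A minimal sketch of the kind of estimator described: maximum likelihood for a univariate Hawkes process, here assuming an exponential kernel and pure self-excitation (the inhibition case in the paper requires clipping the intensity at zero, which changes the compensator). The function names and the toy event times are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_log_lik(params, events, T):
    """Negative log-likelihood of a univariate exponential Hawkes process.

    Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Assumes alpha >= 0 (self-excitation only) and sorted event times.
    """
    mu, alpha, beta = params
    if mu <= 0 or beta <= 0 or alpha < 0:
        return np.inf
    log_lik, A, prev_t = 0.0, 0.0, None
    for i, t in enumerate(events):
        if i > 0:
            # Recursive update of the excitation term, O(n) overall.
            A = np.exp(-beta * (t - prev_t)) * (A + alpha)
        log_lik += np.log(mu + A)
        prev_t = t
    # Compensator: integral of the intensity over [0, T].
    compensator = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - events)))
    return -(log_lik - compensator)

# Fit by numerical optimization from an initial guess (toy data).
events = np.array([0.5, 1.2, 1.3, 2.7, 3.1, 4.8])
res = minimize(hawkes_neg_log_lik, x0=[0.5, 0.5, 1.0],
               args=(events, 5.0), method="Nelder-Mead")
mu_hat, alpha_hat, beta_hat = res.x
```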
no code implementations • 9 Jun 2020 • Ugo Tanielian, Maxime Sangnier, Gérard Biau
Recent advances in adversarial attacks and Wasserstein GANs have advocated for the use of neural networks with restricted Lipschitz constants.
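One common way to restrict a layer's Lipschitz constant is to control its spectral norm (the largest singular value of the weight matrix), since that norm is exactly the Lipschitz constant of the linear map under the Euclidean norm. A minimal numpy sketch, not taken from the paper:

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Largest singular value of W via power iteration; equals the
    Lipschitz constant of x -> W @ x in the Euclidean norm."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def restrict_lipschitz(W, c=1.0):
    """Rescale W so its spectral norm is at most c, capping the
    Lipschitz constant of the corresponding linear layer."""
    s = spectral_norm(W)
    return W if s <= c else W * (c / s)
```

Stacking layers normalized this way bounds the Lipschitz constant of the whole network by the product of the per-layer bounds (for 1-Lipschitz activations).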
no code implementations • 4 Jun 2020 • Gérard Biau, Maxime Sangnier, Ugo Tanielian
Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation.
no code implementations • 29 Aug 2018 • Erwan Fouillen, Claire Boyer, Maxime Sangnier
Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model.
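The iterative scheme described can be sketched in a few lines: at each round a weak learner (here a depth-1 regression stump) is fit to the negative gradient of the squared loss, which is simply the current residual, and added to the ensemble with a small learning rate. This is a generic textbook sketch, not the paper's specific method.

```python
import numpy as np

def fit_stump(X, r):
    """Best regression stump (feature, split, left/right values) for
    residuals r under squared loss."""
    best = (np.inf, 0, 0.0, r.mean(), r.mean())
    for j in range(X.shape[1]):
        for s in np.unique(X[:, j]):
            left, right = r[X[:, j] <= s], r[X[:, j] > s]
            if len(left) == 0 or len(right) == 0:
                continue
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, s, left.mean(), right.mean())
    return best[1:]

def gradient_boost(X, y, n_rounds=50, lr=0.1):
    """Iteratively fit stumps to the residuals (the negative gradient of
    the squared loss) and accumulate them into the prediction."""
    pred = np.full(len(y), y.mean())
    ensemble = [y.mean()]
    for _ in range(n_rounds):
        j, s, lv, rv = fit_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= s, lv, rv)
        ensemble.append((j, s, lr * lv, lr * rv))
    return ensemble, pred
```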
no code implementations • 22 May 2018 • Romain Brault, Alex Lambert, Zoltán Szabó, Maxime Sangnier, Florence d'Alché-Buc
A step further consists of learning a continuum of tasks for various loss functions.
no code implementations • NeurIPS 2016 • Maxime Sangnier, Olivier Fercoq, Florence d'Alché-Buc
To give a more complete picture than the average relationship provided by standard regression, a novel framework is introduced for simultaneously estimating and predicting several conditional quantiles.
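The idea of estimating several conditional quantiles at once can be illustrated with the pinball (quantile) loss, whose minimizer is the corresponding conditional quantile. The toy sketch below jointly fits one linear predictor per quantile level by subgradient descent; it is only a simplified stand-in for the paper's actual framework.

```python
import numpy as np

def pinball_loss(residual, tau):
    """Pinball loss: tau * r for r >= 0, (tau - 1) * r otherwise.
    Its population minimizer is the tau-th conditional quantile."""
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def fit_joint_quantiles(X, y, taus, lr=0.05, n_iters=2000):
    """Jointly fit one linear model per quantile level by (sub)gradient
    descent on the summed pinball losses (toy illustration only)."""
    Xb = np.c_[np.ones(len(X)), X]            # add an intercept column
    W = np.zeros((len(taus), Xb.shape[1]))    # one weight row per quantile
    for _ in range(n_iters):
        for k, tau in enumerate(taus):
            r = y - Xb @ W[k]
            # Subgradient of the pinball loss w.r.t. the weights.
            grad = -Xb.T @ np.where(r >= 0, tau, tau - 1) / len(y)
            W[k] -= lr * grad
    return W
```

Fitting all levels together (rather than one at a time) is what opens the door to sharing structure across quantiles and discouraging quantile-curve crossing.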