Search Results for author: Maxime Sangnier

Found 6 papers, 1 paper with code

Maximum Likelihood Estimation for Hawkes Processes with self-excitation or inhibition

1 code implementation • 9 Mar 2021 • Anna Bonnet, Miguel Martinez Herrera, Maxime Sangnier

In this paper, we present a maximum likelihood method for estimating the parameters of a univariate Hawkes process with self-excitation or inhibition.
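
As a rough illustration of the likelihood being maximized, here is a minimal NumPy/SciPy sketch for the purely self-exciting case with an exponential kernel, lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)), evaluated with Ogata's recursion. The function name and the synthetic event times are ours; the inhibition case (alpha < 0, with the intensity clipped at zero) requires the more careful compensator computation developed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, events, T):
    """Negative log-likelihood of a univariate Hawkes process with an
    exponential kernel, valid for the self-exciting case (alpha >= 0)."""
    mu, alpha, beta = params
    # Ogata's recursion: A_i = sum_{j < i} exp(-beta * (t_i - t_j)).
    A = 0.0
    log_term = np.log(mu)  # intensity at the first event is just mu
    for i in range(1, len(events)):
        A = np.exp(-beta * (events[i] - events[i - 1])) * (1.0 + A)
        log_term += np.log(mu + alpha * A)
    # Compensator: integral of the intensity over [0, T].
    compensator = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - events)))
    return compensator - log_term

# Hypothetical usage on synthetic, sorted event times over [0, 6].
events = np.array([0.5, 1.2, 1.4, 3.1, 3.2, 5.0])
res = minimize(neg_log_likelihood, x0=[1.0, 0.5, 1.0], args=(events, 6.0),
               bounds=[(1e-6, None), (1e-6, None), (1e-6, None)])
```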

Approximating Lipschitz continuous functions with GroupSort neural networks

no code implementations • 9 Jun 2020 • Ugo Tanielian, Maxime Sangnier, Gérard Biau

Recent advances in adversarial attacks and Wasserstein GANs have advocated the use of neural networks with restricted Lipschitz constants.
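
For context, GroupSort is an activation that splits the pre-activation vector into contiguous groups and sorts each group; being a permutation of its inputs, it preserves gradient norms, and together with norm-constrained weight matrices it yields networks with a prescribed Lipschitz constant. A minimal NumPy sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def groupsort(x, group_size=2):
    """Sort each contiguous group of `group_size` coordinates in ascending
    order. With group_size=2 this is the MaxMin activation, which is
    1-Lipschitz and norm-preserving."""
    n = x.shape[-1]
    assert n % group_size == 0, "feature dimension must be divisible by group size"
    groups = x.reshape(*x.shape[:-1], n // group_size, group_size)
    return np.sort(groups, axis=-1).reshape(x.shape)

print(groupsort(np.array([3.0, -1.0, 0.5, 2.0])))  # [-1.  3.  0.5 2.]: each pair sorted
```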

Some Theoretical Insights into Wasserstein GANs

no code implementations • 4 Jun 2020 • Gérard Biau, Maxime Sangnier, Ugo Tanielian

Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation.

Text Generation

Proximal boosting: aggregating weak learners to minimize non-differentiable losses

no code implementations • 29 Aug 2018 • Erwan Fouillen, Claire Boyer, Maxime Sangnier

Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model.
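
For reference, here is a bare-bones version of that loop for the differentiable squared loss, where each weak learner is a depth-1 regression tree fit to the current residuals (the negative gradient). This is plain gradient boosting, not the proximal variant the paper proposes for non-differentiable losses; the function name and data are ours.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=100, lr=0.1):
    """Gradient boosting for the squared loss with decision stumps."""
    pred = np.full(len(y), y.mean())  # constant initial model
    learners = []
    for _ in range(n_rounds):
        residuals = y - pred  # negative gradient of 0.5 * (y - pred)^2
        stump = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
        pred += lr * stump.predict(X)
        learners.append(stump)
    return learners, pred

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=100)
learners, fitted = gradient_boost(X, y)
```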

Infinite-Task Learning with RKHSs

no code implementations • 22 May 2018 • Romain Brault, Alex Lambert, Zoltán Szabó, Maxime Sangnier, Florence d'Alché-Buc

Going a step further, this work considers learning a continuum of tasks for various loss functions.

Multi-Task Learning
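
As a toy illustration of that idea (and not the vector-valued-RKHS construction of the paper), one can feed the task hyperparameter to a single model as an extra input and sample a fresh task at every training step; here the task is a quantile level tau, optimized with the pinball-loss subgradient. The data and the SGD setup are entirely made up:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)

# One linear model over the joint input (x, tau): the continuum of tasks is
# handled by treating the task parameter as a feature and sampling it online.
w = np.zeros(5)  # weights for [x_1, x_2, x_3, tau, bias]
for step in range(20000):
    i = rng.integers(len(y))
    tau = rng.uniform()                        # draw a task (quantile level)
    z = np.concatenate([X[i], [tau, 1.0]])
    u = y[i] - w @ z                           # residual
    grad = -(tau if u > 0 else tau - 1.0) * z  # pinball-loss subgradient in w
    w -= 0.01 * grad
```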

Joint quantile regression in vector-valued RKHSs

no code implementations • NeurIPS 2016 • Maxime Sangnier, Olivier Fercoq, Florence d'Alché-Buc

To give a more complete picture than the average relationship captured by standard regression, a novel framework is introduced for estimating and predicting several conditional quantiles simultaneously.

Multi-Task Learning • regression
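
For contrast, the common baseline fits one model per quantile level by independently minimizing the pinball loss, e.g. with scikit-learn's gradient boosting; the paper instead couples the quantile levels in a vector-valued RKHS, which among other things helps limit quantile-curve crossing. The synthetic heteroscedastic data below is our own:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
y = X[:, 0] + (0.5 + 0.5 * np.abs(X[:, 0])) * rng.normal(size=300)

# Independent per-quantile fits, each minimizing the pinball loss at level tau.
quantiles = [0.1, 0.5, 0.9]
models = {tau: GradientBoostingRegressor(loss="quantile", alpha=tau).fit(X, y)
          for tau in quantiles}
preds = {tau: m.predict(X) for tau, m in models.items()}
```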
