Search Results for author: Ulysse Marteau-Ferey

Found 6 papers, 3 papers with code

SRATTA : Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning

1 code implementation • 13 Jun 2023 • Tanguy Marchand, Régis Loeb, Ulysse Marteau-Ferey, Jean Ogier du Terrail, Arthur Pignet

We consider a cross-silo federated learning (FL) setting where a machine learning model with a fully connected first layer is trained across different clients and a central server using FedAvg, and where the aggregation step can be performed with secure aggregation (SA).

Federated Learning
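
As a rough sketch of the FedAvg aggregation step described in the abstract, the snippet below averages client parameter vectors weighted by local dataset size; all names and data are hypothetical, and under secure aggregation the server would only ever see the aggregate, never the per-client vectors.

    import numpy as np

    def fedavg_aggregate(client_weights, client_sizes):
        # Weighted average of client parameter vectors: the FedAvg
        # aggregation step. Under secure aggregation (SA) the server
        # observes only this aggregate, not individual client updates.
        sizes = np.asarray(client_sizes, dtype=float)
        stacked = np.stack(client_weights)            # (n_clients, n_params)
        return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

    # Hypothetical example: three clients, four model parameters each.
    rng = np.random.default_rng(0)
    clients = [rng.standard_normal(4) for _ in range(3)]
    global_update = fedavg_aggregate(clients, client_sizes=[100, 250, 50])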

Sampling from Arbitrary Functions via PSD Models

no code implementations • 20 Oct 2021 • Ulysse Marteau-Ferey, Francis Bach, Alessandro Rudi

In many areas of applied statistics and machine learning, generating an arbitrary number of independent and identically distributed (i.i.d.) samples from a given distribution is a key task.
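
The task the abstract refers to, drawing i.i.d. samples from a given non-negative (possibly unnormalized) function, can be illustrated with a simple inverse-CDF baseline in one dimension. This is a generic sketch for intuition only, not the PSD-model algorithm the paper develops, and the target function below is hypothetical.

    import numpy as np

    def sample_1d(f, lo, hi, n_samples, n_grid=2048, seed=0):
        # Inverse-CDF sampling on a grid: normalize f numerically,
        # build the cumulative distribution, and invert it at uniform draws.
        rng = np.random.default_rng(seed)
        xs = np.linspace(lo, hi, n_grid)
        ps = np.maximum(f(xs), 0.0)                   # clip numerical noise
        cdf = np.cumsum(ps)
        cdf /= cdf[-1]
        u = rng.random(n_samples)                     # uniforms in [0, 1)
        return xs[np.searchsorted(cdf, u)]

    # Hypothetical unnormalized bimodal target on [-4, 4].
    f = lambda x: np.exp(-(x - 1.5) ** 2) + 0.5 * np.exp(-(x + 1.5) ** 2)
    samples = sample_1d(f, -4.0, 4.0, n_samples=10_000)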

Finding Global Minima via Kernel Approximations

no code implementations • 22 Dec 2020 • Alessandro Rudi, Ulysse Marteau-Ferey, Francis Bach

We consider the global minimization of smooth functions based solely on function evaluations.
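
For intuition about minimizing from function evaluations alone, here is a generic kernel-surrogate sketch: fit a Gaussian-kernel interpolant to a few evaluations and minimize it on a dense grid. This is not the paper's sum-of-squares method, just a simple stand-in, and the test function and hyperparameters are hypothetical.

    import numpy as np

    def surrogate_minimum(f, lo, hi, n_eval=30, n_grid=4096, sigma=0.5, lam=1e-8):
        # Evaluate f at random points, fit a regularized Gaussian-kernel
        # interpolant, and return the minimizer of the surrogate on a grid.
        rng = np.random.default_rng(0)
        x = rng.uniform(lo, hi, n_eval)
        y = f(x)
        k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
        alpha = np.linalg.solve(k(x, x) + lam * np.eye(n_eval), y)
        grid = np.linspace(lo, hi, n_grid)
        s = k(grid, x) @ alpha                        # surrogate values
        i = s.argmin()
        return grid[i], s[i]

    # Hypothetical multimodal test function on [-3, 3].
    f = lambda x: np.sin(3.0 * x) + 0.3 * x ** 2
    x_star, f_star = surrogate_minimum(f, -3.0, 3.0)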

Non-parametric Models for Non-negative Functions

1 code implementation • NeurIPS 2020 • Ulysse Marteau-Ferey, Francis Bach, Alessandro Rudi

The paper is complemented by an experimental evaluation of the model showing its effectiveness in terms of formulation, algorithmic derivation and practical results on the problems of density estimation, regression with heteroscedastic errors, and multiple quantile regression.

Density Estimation • Quantile Regression
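
The paper's core construction is the PSD model f(x) = phi(x)^T A phi(x) with A positive semidefinite, which is non-negative by design. A minimal sketch with kernel features follows; the anchor points, kernel bandwidth, and random A are illustrative choices, not the paper's fitted model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Gaussian-kernel features at fixed anchor points: phi(x) = k(x, anchors).
    anchors = np.linspace(-2.0, 2.0, 8)
    phi = lambda x: np.exp(-(np.atleast_1d(x)[:, None] - anchors[None, :]) ** 2)

    # Parametrize A = B B^T so A is PSD by construction; then
    # f(x) = phi(x)^T A phi(x) = ||B^T phi(x)||^2 >= 0 for every x.
    B = rng.standard_normal((8, 8))
    A = B @ B.T

    def f(x):
        feats = phi(x)                                # (n_points, n_anchors)
        return np.einsum("ni,ij,nj->n", feats, A, feats)

    xs = np.linspace(-5.0, 5.0, 1001)
    assert (f(xs) >= 0.0).all()                       # non-negativity everywhere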

Globally Convergent Newton Methods for Ill-conditioned Generalized Self-concordant Losses

2 code implementations • NeurIPS 2019 • Ulysse Marteau-Ferey, Francis Bach, Alessandro Rudi

In this paper, we study large-scale convex optimization algorithms based on the Newton method applied to regularized generalized self-concordant losses, which include logistic regression and softmax regression.

Generalization Bounds • Regression
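
Logistic regression, one of the generalized self-concordant losses the abstract names, gives a concrete setting for Newton iterations. The sketch below runs plain Newton steps on an l2-regularized logistic objective; it is a textbook baseline for orientation, not the paper's globally convergent approximate scheme, and the data are hypothetical.

    import numpy as np

    def newton_logistic(X, y, lam, n_iter=20):
        # Plain Newton steps on the l2-regularized logistic loss
        #   (1/n) sum_i log(1 + exp(-y_i x_i^T w)) + (lam/2) ||w||^2,
        # with labels y in {-1, +1}.
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_iter):
            m = y * (X @ w)                           # margins
            p = 1.0 / (1.0 + np.exp(m))               # sigmoid(-margin)
            grad = -(X.T @ (y * p)) / n + lam * w
            h = p * (1.0 - p)                         # per-sample Hessian weights
            H = (X.T * h) @ X / n + lam * np.eye(d)
            w -= np.linalg.solve(H, grad)
        return w

    # Hypothetical toy data.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = np.sign(X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200))
    w_hat = newton_logistic(X, y, lam=1e-3)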

Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance

no code implementations • 8 Feb 2019 • Ulysse Marteau-Ferey, Dmitrii Ostrovskii, Francis Bach, Alessandro Rudi

We consider learning methods based on the regularization of a convex empirical risk by a squared Hilbertian norm, a setting that includes linear predictors and non-linear predictors through positive-definite kernels.

Regression
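
The setting in the abstract, a convex empirical risk plus a squared Hilbertian (RKHS) norm penalty, reduces via the representer theorem to a finite linear system when the loss is squared. The paper's point is precisely to go beyond least-squares, but the squared-loss case below makes the objective concrete; the kernel choice, bandwidth, and data are hypothetical.

    import numpy as np

    def kernel_ridge_fit(X, y, lam, sigma=1.0):
        # Squared-loss ERM with squared RKHS-norm regularization:
        #   min_f (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2.
        # By the representer theorem f = sum_i alpha_i k(x_i, .), and
        # alpha solves the linear system (K + n * lam * I) alpha = y.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel matrix
        n = len(y)
        alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
        return alpha, K

    # Hypothetical 1-D regression data.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, (100, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
    alpha, K = kernel_ridge_fit(X, y, lam=1e-2)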
