no code implementations • 3 Jun 2023 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016).
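As a hedged illustration, a two-point gradient estimator with randomization over the $\ell_2$ unit sphere, in the spirit of Bach and Perchet (2016), can be sketched as follows; the scaling and the two-point form here are generic choices for this family of estimators, not necessarily the paper's exact construction:

```python
import numpy as np

def l2_sphere_grad_estimate(f, x, h, rng):
    """Generic two-point gradient estimate with randomization on the
    l2 unit sphere (a sketch; details may differ from the paper)."""
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)  # uniform direction on the l2 unit sphere
    # central finite difference along zeta, rescaled by the dimension
    return (d / (2.0 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * zeta
```

For a quadratic objective this estimator is unbiased for the true gradient, so averaging independent draws recovers it.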
no code implementations • 29 Nov 2022 • Arya Akhavan, Davit Gogolashvili, Alexandre B. Tsybakov
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$ from the observations contaminated by random noise.
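One natural two-stage sketch of this estimation problem (an illustrative scheme under generic assumptions, not necessarily the paper's method): run a zero-order gradient iteration on the noisy evaluations to estimate the minimizer, then average repeated noisy evaluations at the resulting point to estimate the minimum value. The function names, step sizes, and constants below are assumptions for illustration only.

```python
import numpy as np

def estimate_min_and_argmin(noisy_f, x0, n_iter, h, rng, n_avg=1000):
    """Illustrative two-stage sketch (not the paper's exact method):
    zero-order gradient descent on noisy evaluations, then averaging
    repeated evaluations to estimate the minimum value."""
    x = np.asarray(x0, dtype=float)
    d = x.size
    for t in range(n_iter):
        zeta = rng.standard_normal(d)
        zeta /= np.linalg.norm(zeta)  # random direction on the l2 sphere
        # two-point estimate from independently noisy evaluations
        g = (d / (2.0 * h)) * (noisy_f(x + h * zeta) - noisy_f(x - h * zeta)) * zeta
        x -= g / (t + 10.0)  # decaying step size (illustrative choice)
    # average fresh noisy evaluations at the estimated minimizer
    f_min = float(np.mean([noisy_f(x) for _ in range(n_avg)]))
    return x, f_min
```

On a strongly convex test function with additive Gaussian noise, both the minimizer and the minimum value are recovered to within the noise-driven statistical error.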
1 code implementation • 7 Jun 2022 • Riccardo Grazzi, Arya Akhavan, John Isak Texas Falk, Leonardo Cella, Massimiliano Pontil
This is a very strong notion of fairness, since the relative rank is not directly observed by the agent and depends on the underlying reward model and on the distribution of rewards.
no code implementations • 27 May 2022 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere.
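A minimal sketch of a two-point estimator with randomization on the $\ell_1$ sphere: the direction is drawn uniformly from the $\ell_1$ unit sphere (normalized exponential magnitudes with independent random signs), and the finite difference is propagated through the coordinate-wise sign vector. The scaling here is a generic choice and may differ from the paper's:

```python
import numpy as np

def l1_sphere_grad_estimate(f, x, h, rng):
    """Generic two-point gradient estimate with randomization on the
    l1 unit sphere (a sketch; the paper's scaling may differ)."""
    d = x.size
    # uniform point on the l1 unit sphere: Dirichlet(1,...,1) magnitudes
    # (normalized exponentials) with independent random signs
    mag = rng.exponential(size=d)
    zeta = rng.choice([-1.0, 1.0], size=d) * mag / mag.sum()
    return (d / (2.0 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)
```

As with the $\ell_2$ version, the estimator is unbiased on quadratics, which can be checked by averaging independent draws.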
no code implementations • NeurIPS 2021 • Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov
We study the problem of distributed zero-order optimization for a class of strongly convex functions.
Optimization and Control • Statistics Theory
no code implementations • NeurIPS 2020 • Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov
The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel.
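A sketch of such a kernel-smoothed two-point estimator: the direction $\zeta$ is uniform on the $\ell_2$ sphere, $r$ is uniform on $[-1, 1]$, and the finite difference is weighted by a kernel $K(r)$. Here $K(r) = 3r$ is one simple choice, valid for twice-smooth functions since it satisfies $\mathbb{E}[r\,K(r)] = 1$; the paper's kernel and normalization may differ:

```python
import numpy as np

def kernel_grad_estimate(f, x, h, rng):
    """Two-point gradient estimate with a smoothing kernel (a sketch;
    the paper's exact kernel and constants may differ)."""
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)  # uniform direction on the l2 unit sphere
    r = rng.uniform(-1.0, 1.0)    # random smoothing radius
    K = 3.0 * r  # kernel weight; E[r * K(r)] = 1 for r ~ Uniform[-1, 1]
    return (d / (2.0 * h)) * (f(x + h * r * zeta) - f(x - h * r * zeta)) * zeta * K
```

Higher-order kernels (e.g. built from Legendre polynomials) reduce the bias for functions with higher smoothness; the simple linear kernel above already gives an unbiased estimate on quadratics.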