Search Results for author: Arya Akhavan

Found 6 papers, 1 paper with code

Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm

no code implementations • 3 Jun 2023 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov

The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016).
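The estimator described above can be sketched as follows. This is a minimal illustration of a two-point gradient estimate with randomization on the $\ell_2$ sphere, in the style of Bach and Perchet (2016); the exact scaling constants used in the paper may differ, so treat these as illustrative assumptions.

```python
import numpy as np

def l2_sphere_gradient_estimator(f, x, h, rng=None):
    """Two-point gradient estimate via randomization on the l2 unit sphere.

    Returns (d / (2h)) * (f(x + h*zeta) - f(x - h*zeta)) * zeta, where zeta
    is drawn uniformly on the unit l2 sphere. Sketch only; constants are
    illustrative, not the paper's exact algorithm.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)          # uniform direction on the l2 sphere
    return (d / (2 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * zeta
```

For a quadratic such as $f(x) = \|x\|^2$, averaging many such estimates recovers the true gradient $2x$, since $\mathbb{E}[\zeta\zeta^\top] = I/d$.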

Estimating the minimizer and the minimum value of a regression function under passive design

no code implementations • 29 Nov 2022 • Arya Akhavan, Davit Gogolashvili, Alexandre B. Tsybakov

We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$ from the observations contaminated by random noise.

Regression

Group Meritocratic Fairness in Linear Contextual Bandits

1 code implementation • 7 Jun 2022 • Riccardo Grazzi, Arya Akhavan, John Isak Texas Falk, Leonardo Cella, Massimiliano Pontil

This is a very strong notion of fairness, since the relative rank is not directly observed by the agent and depends on the underlying reward model and on the distribution of rewards.
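The notion of relative rank mentioned above can be illustrated with a small empirical version: the fraction of a group's reward distribution at or below a candidate's reward. This is a hedged illustration of the concept, not the paper's exact estimator or fairness criterion.

```python
import numpy as np

def relative_rank(reward, group_rewards):
    """Empirical relative rank of a reward within its group's distribution.

    Returns the fraction of the group's observed rewards that are at or
    below `reward`. Illustrative sketch of the relative-rank idea; the
    paper's formal definition uses the underlying reward model, which the
    agent does not directly observe.
    """
    group_rewards = np.asarray(group_rewards, dtype=float)
    return float(np.mean(group_rewards <= reward))
```

Comparing candidates by relative rank rather than raw reward lets candidates from groups with very different reward scales be ranked on a common footing.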

Fairness Multi-Armed Bandits

A gradient estimator via L1-randomization for online zero-order optimization with two point feedback

no code implementations • 27 May 2022 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov

We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere.
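A sketch of the $\ell_1$-randomized two-point estimator described above: a point $\zeta$ uniform on the unit $\ell_1$ sphere can be sampled by normalizing i.i.d. exponential variables and attaching random signs, and the finite difference is rescaled by $\mathrm{sign}(\zeta)$. The scaling constant here is an illustrative assumption, not necessarily the paper's exact choice.

```python
import numpy as np

def l1_sphere_gradient_estimator(f, x, h, rng=None):
    """Two-point gradient estimate via randomization on the l1 unit sphere.

    zeta_i = eps_i * E_i / sum_j E_j with E_j ~ Exp(1) i.i.d. and eps_i
    independent random signs gives zeta uniform on the l1 sphere. Sketch
    only; the constant d/(2h) is an illustrative assumption.
    """
    if rng is None:
        rng = np.random.default_rng()
    d = x.size
    e = rng.exponential(size=d)
    eps = rng.choice([-1.0, 1.0], size=d)
    zeta = eps * e / e.sum()              # uniform on the unit l1 sphere
    diff = f(x + h * zeta) - f(x - h * zeta)
    return (d / (2 * h)) * diff * np.sign(zeta)
```

For a linear function $f(x) = a^\top x$ this estimator is unbiased: $\mathbb{E}[|\zeta_i|] = 1/d$ and the cross terms vanish by sign independence, so the mean of the estimates recovers $a$.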

Distributed Zero-Order Optimization under Adversarial Noise

no code implementations • NeurIPS 2021 • Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov

We study the problem of distributed zero-order optimization for a class of strongly convex functions.
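A common template for this setting combines gossip-style averaging over a mixing matrix with local zero-order gradient steps. The sketch below follows that generic template under assumed choices (an $\ell_2$-sphere two-point estimate, a doubly stochastic mixing matrix `W`); it is not the paper's exact algorithm.

```python
import numpy as np

def distributed_zero_order_step(xs, fs, W, h, lr, rng):
    """One round of a generic distributed zero-order scheme (sketch).

    Each of the n agents first mixes iterates with its neighbors via the
    doubly stochastic gossip matrix W (n x n), then takes a step along a
    two-point l2-sphere gradient estimate of its own local objective fs[i].
    Illustrative template only, not the paper's exact algorithm.
    """
    n, d = xs.shape
    mixed = W @ xs                        # consensus averaging across agents
    new = np.empty_like(xs)
    for i in range(n):
        zeta = rng.standard_normal(d)
        zeta /= np.linalg.norm(zeta)      # uniform direction on the l2 sphere
        diff = fs[i](mixed[i] + h * zeta) - fs[i](mixed[i] - h * zeta)
        g = (d / (2 * h)) * diff * zeta
        new[i] = mixed[i] - lr * g
    return new
```

With strongly convex local objectives, repeated rounds drive the agents toward a neighborhood of the minimizer of the average objective, with the neighborhood size governed by the step size and the estimator's noise.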

Optimization and Control • Statistics Theory
