no code implementations • 3 Jun 2023 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016).
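A minimal sketch of a two-point gradient estimator with $\ell_2$-sphere randomization in the spirit of Bach and Perchet (2016); the paper's refinements for exploiting higher-order smoothness are omitted, and the test function, step sizes, and horizon below are illustrative.

```python
import numpy as np

def grad_estimate_l2(f, x, h, rng):
    """Two-point gradient estimator with l2-sphere randomization:
    g = d/(2h) * (f(x + h*zeta) - f(x - h*zeta)) * zeta,
    where zeta is uniform on the unit l2 sphere in R^d."""
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)   # uniform direction on the sphere
    return (d / (2 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * zeta

# illustrative zero-order descent on f(x) = ||x||^2 / 2
rng = np.random.default_rng(0)
f = lambda u: 0.5 * u @ u
x = np.ones(10)
for t in range(1, 1001):
    x -= (1.0 / t) * grad_estimate_l2(f, x, h=1e-3, rng=rng)
```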
no code implementations • 29 Nov 2022 • Arya Akhavan, Davit Gogolashvili, Alexandre B. Tsybakov
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$ from the observations contaminated by random noise.
no code implementations • 27 Jun 2022 • Julien Chhor, Suzanne Sigalla, Alexandre B. Tsybakov
In the nonparametric regression setting, we construct an estimator which is a continuous function interpolating the data points with high probability, while attaining minimax optimal rates under mean squared risk on the scale of Hölder classes adaptively to the unknown smoothness.
no code implementations • 27 May 2022 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere.
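A hedged sketch of an estimator of this type: a uniform point on the unit $\ell_1$ sphere can be generated by normalizing i.i.d. exponentials and attaching independent random signs, and the estimator below multiplies the finite difference by the sign vector of the random direction, following our reading of the construction; constants and the exact form should be checked against the paper.

```python
import numpy as np

def sphere_l1(d, rng):
    """Uniform point on the unit l1 sphere in R^d: i.i.d. exponentials
    normalized by their sum, with independent random signs."""
    e = rng.exponential(size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    return signs * e / e.sum()

def grad_estimate_l1(f, x, h, rng):
    """Two-point gradient estimator with l1-sphere randomization; the
    sign(zeta) factor is our reading of the construction."""
    d = x.size
    zeta = sphere_l1(d, rng)
    return (d / (2 * h)) * (f(x + h * zeta) - f(x - h * zeta)) * np.sign(zeta)
```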
no code implementations • NeurIPS 2021 • Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov
We study the problem of distributed zero-order optimization for a class of strongly convex functions.
Optimization and Control • Statistics Theory
no code implementations • NeurIPS 2020 • Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov
The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel.
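In code, such a procedure can be sketched as follows. The kernel K(r) = 3r/2 used as a default here satisfies the moment conditions $\int K = 0$ and $\int r\,K(r)\,dr = 1$ on $[-1, 1]$, which suit twice-smooth functions; exploiting higher-order smoothness requires higher-order kernels (e.g., built from Legendre polynomials), so treat this as an illustrative sketch rather than the paper's exact algorithm.

```python
import numpy as np

def grad_estimate_smoothed(f, x, h, rng, K=lambda r: 1.5 * r):
    """Randomized gradient estimator from two function evaluations and
    a smoothing kernel K: with zeta uniform on the unit l2 sphere and
    r uniform on [-1, 1],
    g = d/(2h) * (f(x + h*r*zeta) - f(x - h*r*zeta)) * zeta * K(r)."""
    d = x.size
    zeta = rng.standard_normal(d)
    zeta /= np.linalg.norm(zeta)   # uniform direction on the sphere
    r = rng.uniform(-1.0, 1.0)
    return (d / (2 * h)) * (f(x + h * r * zeta) - f(x - h * r * zeta)) * zeta * K(r)
```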
2 code implementations • 25 May 2020 • Marianne Bléhaut, Xavier D'Haultfoeuille, Jérémy L'Hour, Alexandre B. Tsybakov
The synthetic control method is an econometric tool to evaluate causal effects when only one unit is treated.
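For context, a minimal sketch of the classical synthetic control fit: simplex-constrained least squares matching the treated unit's pre-treatment characteristics to a weighted combination of donor units. This illustrates the baseline method, not the estimator proposed in the paper, and the variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(X_donors, x_treated):
    """Weights w >= 0 with sum(w) = 1 minimizing
    ||X_donors @ w - x_treated||^2 (classical synthetic control)."""
    n_donors = X_donors.shape[1]
    loss = lambda w: np.sum((X_donors @ w - x_treated) ** 2)
    w0 = np.full(n_donors, 1.0 / n_donors)
    res = minimize(loss, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * n_donors,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

# counterfactual post-treatment outcome for the treated unit:
# y_synthetic = Y_donors_post @ synthetic_control_weights(X_pre, x_treated_pre)
```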
no code implementations • 25 Jun 2018 • Mikhail Belkin, Alexander Rakhlin, Alexandre B. Tsybakov
We show that learning methods interpolating the training data can achieve optimal rates for the problems of nonparametric regression and prediction with square loss.
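One concrete interpolating method of this kind is Nadaraya-Watson regression with a kernel that is singular at the origin, so that the estimate passes through every training point while remaining a local average elsewhere. The sketch below is illustrative; the exponent a and bandwidth h are assumptions, not the paper's calibrated choices.

```python
import numpy as np

def singular_nw(x0, X, y, a=0.5, h=1.0):
    """Nadaraya-Watson estimate at x0 with singular kernel
    K(u) = ||u||^(-a) * 1{||u|| <= 1}: the pole at 0 forces exact
    interpolation of the training data."""
    dist = np.linalg.norm(X - x0, axis=1)
    if np.any(dist == 0):                  # x0 is a training point
        return y[dist == 0].mean()         # -> exact interpolation
    w = np.where(dist <= h, dist ** (-a), 0.0)
    if w.sum() == 0:                       # no point within bandwidth
        return y[np.argmin(dist)]
    return np.dot(w, y) / w.sum()
```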
no code implementations • 22 Dec 2014 • Alexandre Belloni, Mathieu Rosenbaum, Alexandre B. Tsybakov
Under the first assumption, the rates of convergence of the proposed estimators depend explicitly on $\bar\delta$, while the second assumption is used when an estimator of the second moment of the observational error is available.
no code implementations • 6 Aug 2013 • Alexander Rakhlin, Karthik Sridharan, Alexandre B. Tsybakov
Furthermore, for $p\in(0, 2)$, the excess risk rate matches the behavior of the minimax risk of function estimation in regression problems under the well-specified model.
no code implementations • 29 Nov 2010 • Vladimir Koltchinskii, Alexandre B. Tsybakov, Karim Lounici
We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix $A_0$, a non-minimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor.
no code implementations • 15 Dec 2008 • Mathieu Rosenbaum, Alexandre B. Tsybakov
We consider the model $y = X\theta^* + \xi$, $Z = X + \Xi$, where the random vector $y\in\mathbb{R}^n$ and the random $n\times p$ matrix $Z$ are observed, the $n\times p$ matrix $X$ is unknown, $\Xi$ is an $n\times p$ random noise matrix, $\xi\in\mathbb{R}^n$ is a noise vector independent of $\Xi$, and $\theta^*$ is a vector of unknown parameters to be estimated.
Statistics Theory
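A small simulation makes the observation scheme above concrete; the dimensions, sparsity, and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                      # sample size, dimension, sparsity

X = rng.standard_normal((n, p))            # unobserved design matrix
theta_star = np.zeros(p)
theta_star[:s] = 1.0                       # sparse parameter vector
xi = 0.1 * rng.standard_normal(n)          # regression noise
Xi = 0.1 * rng.standard_normal((n, p))     # measurement noise on X

y = X @ theta_star + xi                    # observed response
Z = X + Xi                                 # observed noisy design
# Only (y, Z) are available to the statistician: a standard Lasso run
# on (y, Z) is biased by the measurement error, which motivates
# compensated estimators for this errors-in-variables setting.
```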