Search Results for author: Alexandre B. Tsybakov

Found 12 papers, 1 paper with code

Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm

no code implementations • 3 Jun 2023 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov

The first algorithm uses a gradient estimator based on randomization over the $\ell_2$ sphere due to Bach and Perchet (2016).
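For intuition, a minimal sketch of a two-point gradient estimator with randomization over the $\ell_2$ sphere in this spirit; the plain central-difference form and the step size h are illustrative assumptions, and the paper's estimators refine this idea (e.g. with kernel weighting) to exploit higher-order smoothness:

    import numpy as np

    def l2_sphere_gradient_estimate(f, x, h, rng=np.random.default_rng()):
        # Draw a direction uniformly distributed on the unit l2 sphere.
        d = x.shape[0]
        z = rng.standard_normal(d)
        z /= np.linalg.norm(z)
        # Two-point central difference along z, rescaled by the dimension d.
        return (d / (2.0 * h)) * (f(x + h * z) - f(x - h * z)) * z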

Estimating the minimizer and the minimum value of a regression function under passive design

no code implementations • 29 Nov 2022 • Arya Akhavan, Davit Gogolashvili, Alexandre B. Tsybakov

We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$ from the observations contaminated by random noise.

regression
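As a point of reference only, a naive plug-in baseline for this problem: fit a smooth surrogate to the noisy observations, then minimize it. This is not the method proposed in the paper, and the kernel choice and optimizer below are assumptions:

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.kernel_ridge import KernelRidge

    def plugin_minimizer(X, y, x0):
        # Fit a smooth surrogate for f from the noisy pairs (x_i, y_i).
        surrogate = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, y)
        obj = lambda x: float(surrogate.predict(x.reshape(1, -1))[0])
        # Minimize the surrogate to obtain estimates of x* and f*.
        res = minimize(obj, x0, method="Nelder-Mead")
        return res.x, res.fun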

Benign overfitting and adaptive nonparametric regression

no code implementations • 27 Jun 2022 • Julien Chhor, Suzanne Sigalla, Alexandre B. Tsybakov

In the nonparametric regression setting, we construct an estimator which is a continuous function interpolating the data points with high probability, while attaining minimax optimal rates under mean squared risk on the scale of Hölder classes, adaptively to the unknown smoothness.

regression
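One classical way to obtain such interpolating estimators is a Nadaraya-Watson scheme with a singular kernel, sketched below in one dimension; the exponent a, bandwidth h, and clipping constant are illustrative assumptions, not necessarily the paper's exact construction:

    import numpy as np

    def singular_nw(x, X, y, h=0.1, a=0.49, eps=1e-12):
        # Singular kernel weights u^{-a} on [0, 1] blow up at u = 0, so the
        # estimate matches y_i at each design point X_i (interpolation, up to
        # the numerical clipping by eps).
        u = np.abs(x - X) / h
        w = np.where(u <= 1.0, 1.0 / np.maximum(u, eps) ** a, 0.0)
        if w.sum() == 0.0:
            return 0.0   # fallback far from all design points
        return float(np.dot(w, y) / w.sum())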

A gradient estimator via L1-randomization for online zero-order optimization with two point feedback

no code implementations • 27 May 2022 • Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov

We present a novel gradient estimator based on two function evaluations and randomization on the $\ell_1$-sphere.
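A minimal sketch of an estimator of this type, assuming the form stated in the abstract (two function evaluations plus $\ell_1$-sphere randomization); the sampling recipe and constants are illustrative:

    import numpy as np

    def l1_sphere_gradient_estimate(f, x, h, rng=np.random.default_rng()):
        # Sample z uniformly on the unit l1 sphere: exponential magnitudes
        # normalized to sum to one, with independent random signs.
        d = x.shape[0]
        e = rng.exponential(size=d)
        z = rng.choice([-1.0, 1.0], size=d) * e / e.sum()
        # Two-point difference; the sign(z) factor plays the role of the
        # direction vector under l1 randomization (a hedged sketch, not the
        # paper's exact estimator).
        return (d / (2.0 * h)) * (f(x + h * z) - f(x - h * z)) * np.sign(z)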

Distributed Zero-Order Optimization under Adversarial Noise

no code implementations • NeurIPS 2021 • Arya Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov

We study the problem of distributed zero-order optimization for a class of strongly convex functions.

Optimization and Control, Statistics Theory
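A hedged sketch of one round of such a scheme: each agent forms a randomized two-point gradient estimate of its local function, takes a descent step, and the agents then average (here over a complete communication graph). The step size, smoothing parameter, and averaging rule are assumptions, not the paper's protocol:

    import numpy as np

    def distributed_zo_step(local_fs, xs, h, eta, rng=np.random.default_rng()):
        new_xs = []
        for f, x in zip(local_fs, xs):
            # Per-agent two-point gradient estimate on the l2 sphere.
            z = rng.standard_normal(x.shape[0])
            z /= np.linalg.norm(z)
            g = (x.shape[0] / (2.0 * h)) * (f(x + h * z) - f(x - h * z)) * z
            new_xs.append(x - eta * g)
        mean = np.mean(new_xs, axis=0)          # consensus by full averaging
        return [mean.copy() for _ in new_xs]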

An alternative to synthetic control for models with many covariates under sparsity

2 code implementations • 25 May 2020 • Marianne Bléhaut, Xavier D'Haultfoeuille, Jérémy L'Hour, Alexandre B. Tsybakov

The synthetic control method is an econometric tool to evaluate causal effects when only one unit is treated.
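For background, a minimal sketch of the classic synthetic control weights the abstract refers to: nonnegative weights over the control units whose covariates best reproduce the treated unit. Solving this with NNLS plus a heavily penalized sum-to-one row is an implementation shortcut, and none of this is the alternative estimator the paper proposes:

    import numpy as np
    from scipy.optimize import nnls

    def synthetic_control_weights(x_treated, X_controls, penalty=1e6):
        # Rows of X_controls are the control units' covariate vectors. The
        # extra row softly enforces that the weights sum to one; NNLS
        # enforces nonnegativity.
        A = np.vstack([X_controls.T, penalty * np.ones(X_controls.shape[0])])
        b = np.concatenate([x_treated, [penalty]])
        w, _ = nnls(A, b)
        return w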

Does data interpolation contradict statistical optimality?

no code implementations • 25 Jun 2018 • Mikhail Belkin, Alexander Rakhlin, Alexandre B. Tsybakov

We show that learning methods interpolating the training data can achieve optimal rates for the problems of nonparametric regression and prediction with square loss.

regression

An $\{l_1,l_2,l_{\infty}\}$-Regularization Approach to High-Dimensional Errors-in-variables Models

no code implementations • 22 Dec 2014 • Alexandre Belloni, Mathieu Rosenbaum, Alexandre B. Tsybakov

Under the first assumption, the rates of convergence of the proposed estimators depend explicitly on $\bar \delta$; the second assumption applies when an estimator of the second moment of the observational error is available.

regression

Empirical entropy, minimax regret and minimax risk

no code implementations • 6 Aug 2013 • Alexander Rakhlin, Karthik Sridharan, Alexandre B. Tsybakov

Furthermore, for $p\in(0, 2)$, the excess risk rate matches the behavior of the minimax risk of function estimation in regression problems under the well-specified model.

Math, regression

Nuclear norm penalization and optimal rates for noisy low rank matrix completion

no code implementations • 29 Nov 2010 • Vladimir Koltchinskii, Alexandre B. Tsybakov, Karim Lounici

We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix $A_0$, a non-minimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor.

Low-Rank Matrix Completion, regression
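A generic way to compute a nuclear-norm-penalized completion is soft-impute (iterative singular-value soft-thresholding), sketched below; the penalty lam and iteration count are illustrative, and this is a standard algorithm rather than the paper's exact estimator:

    import numpy as np

    def soft_impute(Y, mask, lam=1.0, n_iter=100):
        # Y holds the observed entries; mask is True where Y is observed.
        A = np.zeros_like(Y)
        for _ in range(n_iter):
            filled = np.where(mask, Y, A)       # impute missing entries from A
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            s = np.maximum(s - lam, 0.0)        # soft-threshold singular values
            A = (U * s) @ Vt
        return A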

Sparse recovery under matrix uncertainty

no code implementations • 15 Dec 2008 • Mathieu Rosenbaum, Alexandre B. Tsybakov

We consider the model $y = X\theta^* + \xi$, $Z = X + \Xi$, where the random vector $y\in\mathbb{R}^n$ and the random $n\times p$ matrix $Z$ are observed, the $n\times p$ matrix $X$ is unknown, $\Xi$ is an $n\times p$ random noise matrix, $\xi\in\mathbb{R}^n$ is a noise vector independent of $\Xi$, and $\theta^*$ is a vector of unknown parameters to be estimated.

Statistics Theory
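For concreteness, a minimal simulation of this observation model (dimensions, sparsity level, and noise scales are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, s = 100, 200, 5
    X = rng.standard_normal((n, p))             # unobserved design
    theta_star = np.zeros(p)
    theta_star[:s] = 1.0                        # s-sparse parameter vector
    xi = 0.1 * rng.standard_normal(n)           # noise in the response
    Xi = 0.1 * rng.standard_normal((n, p))      # noise corrupting the design
    y = X @ theta_star + xi
    Z = X + Xi                                  # only (y, Z) are observed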
