Iteratively Reweighted $\ell_1$-Penalized Robust Regression

9 Jul 2019 · Xiaoou Pan, Qiang Sun, Wen-Xin Zhou

This paper investigates the tradeoffs among optimization error, statistical rates of convergence, and the effect of heavy-tailed errors in high-dimensional robust regression with nonconvex regularization. When the additive errors in the linear model have only bounded second moments, we show that the iteratively reweighted $\ell_1$-penalized adaptive Huber regression estimator satisfies exponential deviation bounds and oracle properties, including the oracle rate of convergence and variable selection consistency, under a weak beta-min condition. Computationally, at most $O(\log s + \log\log d)$ iterations are needed to reach such an oracle estimator, where $s$ and $d$ denote the sparsity and the ambient dimension, respectively. An extension to a general class of robust loss functions is also considered. Numerical studies lend strong support to our methodology and theory.
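
Since this page carries no code, the following is a minimal NumPy sketch of the kind of procedure the abstract describes: a multi-stage scheme where each stage solves a weighted $\ell_1$-penalized Huber regression and the weights for the next stage come from the derivative of a folded concave penalty. The choice of SCAD, the plain proximal-gradient (ISTA) inner solver, the step size, and all function names and iteration counts below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def huber_grad(r, tau):
    """Derivative of the Huber loss at residuals r, with robustification parameter tau."""
    return np.where(np.abs(r) <= tau, r, tau * np.sign(r))

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal map of a (weighted) l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def scad_derivative(beta, lam, a=3.7):
    """Derivative of the SCAD penalty (an assumed choice); supplies the l1 weights per stage."""
    ab = np.abs(beta)
    return np.where(ab <= lam, lam, np.maximum(a * lam - ab, 0.0) / (a - 1.0))

def irw_huber_reg(X, y, lam, tau, n_stages=5, n_inner=500):
    """Iteratively reweighted l1-penalized Huber regression (illustrative sketch)."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L step size for the smooth Huber part
    weights = np.full(d, lam)              # stage 1: plain l1 (lasso-type) weights
    beta = np.zeros(d)
    for _ in range(n_stages):              # outer reweighting loop
        for _ in range(n_inner):           # inner ISTA loop for the weighted subproblem
            grad = -X.T @ huber_grad(y - X @ beta, tau) / n
            beta = soft_threshold(beta - step * grad, step * weights)
        weights = scad_derivative(beta, lam)   # reweight via the concave penalty's derivative
    return beta
```

In this sketch, the robustification parameter `tau` and the regularization level `lam` would be tuned in practice (e.g., by cross-validation); the adaptive Huber literature suggests scaling them on the order of $\sqrt{n/\log d}$ and $\sqrt{(\log d)/n}$, respectively.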
