Error bounds for sparse classifiers in high-dimensions

7 Oct 2018 · Antoine Dedieu

We prove an $\ell_2$ recovery bound for a family of sparse estimators defined as minimizers of some empirical loss functions, including the hinge loss and the logistic loss. More precisely, we derive an upper bound on the coefficient estimation error scaling as $(k^*/n)\log(p/k^*)$, where $n \times p$ is the size of the design matrix and $k^*$ is the sparsity of the theoretical loss minimizer. This is done under standard assumptions, for which we derive stronger versions of a cone condition and of restricted strong convexity. Our bound holds with high probability and in expectation, and applies both to an $\ell_1$-regularized estimator and to the recently introduced Slope estimator, which we generalize to classification problems. Slope has the advantage of adapting to unknown sparsity; we therefore propose a tractable proximal algorithm to compute it and assess its empirical performance. Our results match the best existing bounds for both classification and regression problems.
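The abstract does not spell out the proximal algorithm, but the core ingredient of any proximal method for Slope is the proximal operator of the sorted-$\ell_1$ penalty. Below is a minimal Python sketch of that operator, following the standard stack-based pool-adjacent-violators scheme of Bogdan et al. (2015); the function name `prox_slope` and its interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_slope(v, lam):
    """Sketch of the proximal operator of the Slope (sorted-l1) penalty:
        argmin_x 0.5 * ||x - v||^2 + sum_i lam[i] * |x|_(i),
    where lam is nonnegative and nonincreasing, and |x|_(1) >= ... >= |x|_(p)
    are the sorted absolute entries of x. Runs in O(p log p).
    """
    sign = np.sign(v)
    absv = np.abs(v)
    order = np.argsort(absv)[::-1]   # indices sorting |v| in decreasing order
    w = absv[order] - lam            # shifted values; may violate monotonicity

    # Pool adjacent violators: enforce a nonincreasing sequence by
    # averaging consecutive blocks. Each stack entry is [block_sum, block_len].
    blocks = []
    for wi in w:
        blocks.append([wi, 1])
        # merge while consecutive block averages are out of order
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] <= blocks[-1][0] / blocks[-1][1]
        ):
            s, l = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += l

    # expand blocks back to a full vector, clipping negatives to zero
    x_sorted = np.concatenate(
        [np.full(l, max(s / l, 0.0)) for s, l in blocks]
    )

    # undo the sorting and restore the original signs
    x = np.empty_like(v, dtype=float)
    x[order] = x_sorted
    return sign * x
```

Plugged into a proximal gradient loop, each iteration would take a gradient step on the smooth classification loss (e.g., logistic) and then apply this operator with a step-size-scaled weight sequence.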


Categories


Statistics Theory