Understanding Best Subset Selection: A Tale of Two C(omplex)ities

16 Jan 2023  ·  Saptarshi Roy, Ambuj Tewari, Ziwei Zhu

For decades, best subset selection (BSS) has eluded statisticians mainly due to its computational bottleneck. Recently, however, modern computational breakthroughs have rekindled theoretical interest in BSS and have led to new findings. In particular, Guo et al. (2020) showed that the model selection performance of BSS is governed by a margin quantity that is robust to design dependence, unlike modern methods such as LASSO, SCAD, and MCP. Motivated by their theoretical results, in this paper we also study the variable selection properties of best subset selection in the high-dimensional sparse linear regression setup. We show that, apart from the identifiability margin, the following two complexity measures play a fundamental role in characterizing the margin condition for model consistency: (a) the complexity of residualized features, and (b) the complexity of spurious projections. In particular, we establish a simple margin condition that depends only on the identifiability margin and the dominating one of the two complexity measures. Furthermore, we show that a margin condition depending on a similar margin quantity and complexity measures is also necessary for model consistency of BSS. For a broader understanding, we also consider simple illustrative examples demonstrating how the complexity measures vary under different correlation structures, which refines our theoretical understanding of the model selection performance of BSS.
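For readers unfamiliar with the estimator being analyzed, the sketch below shows best subset selection in its textbook form: for a given sparsity level k, exhaustively search all size-k supports and return the one whose restricted least-squares fit has the smallest residual sum of squares. This is only an illustrative brute-force sketch (the function name best_subset_selection and the simulated design are hypothetical, not the authors' code); in practice the combinatorial search is handled with modern mixed-integer optimization solvers.

```python
import numpy as np
from itertools import combinations

def best_subset_selection(X, y, k):
    """Exhaustive BSS: return the size-k support whose restricted
    least-squares fit minimizes the residual sum of squares."""
    n, p = X.shape
    best_support, best_rss = None, np.inf
    for support in combinations(range(p), k):
        Xs = X[:, list(support)]
        # Least-squares fit restricted to the candidate support
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        resid = y - Xs @ beta
        rss = resid @ resid
        if rss < best_rss:
            best_support, best_rss = support, rss
    return best_support, best_rss

# Toy example (assumed setup): sparse linear model with true support {0, 1, 2}
rng = np.random.default_rng(0)
n, p, k = 100, 10, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:k] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
print(best_subset_selection(X, y, k)[0])  # should recover (0, 1, 2) in this easy regime
```

Note that the search visits all p-choose-k supports, which is exactly the computational bottleneck mentioned above; the paper's margin and complexity conditions concern when this estimator, however it is computed, recovers the true support.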
