Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization

28 May 2018 · Dmitry Kobak, Jonathan Lomond, Benoit Sanchez

A conventional wisdom in statistical learning is that large models require strong regularization to prevent overfitting. Here we show that this rule can be violated by linear regression in the underdetermined $n\ll p$ situation under realistic conditions...
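The claim can be probed numerically. The sketch below is a minimal illustration, not the paper's exact experiment: it assumes a spiked-covariance design (one high-variance direction carrying the signal) with $n \ll p$, and computes ridge solutions through the $n \times n$ dual form, which remains well defined for small negative penalties as long as $\lambda$ exceeds minus the smallest eigenvalue of $X X^\top$. The function name `ridge_dual` and all parameter values are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 500  # underdetermined regime: n << p

# Spiked-covariance design: one high-variance direction w carries the signal
# (an assumed setup loosely inspired by the "realistic conditions" in the abstract).
w = rng.standard_normal(p)
w /= np.linalg.norm(w)
spike_tr = 5 * rng.standard_normal((n, 1))
X = rng.standard_normal((n, p)) + spike_tr * w
beta_true = 2 * w  # true coefficients aligned with the spike
y = X @ beta_true + rng.standard_normal(n)

# Independent test set from the same distribution.
m = 2000
spike_te = 5 * rng.standard_normal((m, 1))
X_te = rng.standard_normal((m, p)) + spike_te * w
y_te = X_te @ beta_true + rng.standard_normal(m)

def ridge_dual(X, y, lam):
    """Ridge estimate via the n x n dual form:
    beta = X^T (X X^T + lam I)^{-1} y.
    Valid for negative lam while lam > -min eigenvalue of X X^T;
    at lam = 0 it returns the minimum-norm interpolator."""
    G = X @ X.T
    return X.T @ np.linalg.solve(G + lam * np.eye(len(y)), y)

# Sweep penalties including zero and negative values.
lams = [-20.0, -5.0, 0.0, 5.0, 50.0, 500.0]
errs = {lam: float(np.mean((y_te - X_te @ ridge_dual(X, y, lam)) ** 2))
        for lam in lams}
for lam in lams:
    print(f"lambda = {lam:7.1f}   test MSE = {errs[lam]:.3f}")
```

On runs like this one, the test error is often flat or even decreasing as $\lambda$ passes through zero into small negative values, consistent with the abstract's claim that the optimal explicit penalty can be zero or negative; the exact minimizer depends on the random draw and the spike strength.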
