# Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization

28 May 2018

Conventional wisdom in statistical learning holds that large models require strong regularization to prevent overfitting. Here we show that this rule can be violated by linear regression in the underdetermined $n\ll p$ situation under realistic conditions...
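The claim can be probed numerically. Below is a minimal sketch (not the paper's own experiments) that fits ridge regression in the $n\ll p$ regime on simulated data with a low-dimensional signal, using the $n\times n$ dual form of the ridge estimator, which remains well defined even for a negative penalty $\lambda$ as long as $XX^\top + \lambda I$ stays positive definite. All data and parameter choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 500  # underdetermined regime: n << p

# Illustrative setup: a few strong signal coordinates among many
# noise features (an assumption for this sketch, not the paper's data).
beta = np.zeros(p)
beta[:5] = 2.0
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)
X_test = rng.standard_normal((1000, p))
y_test = X_test @ beta + rng.standard_normal(1000)

def ridge(X, y, lam):
    """Ridge solution via the n x n dual form, suited to n < p.

    beta_hat = X^T (X X^T + lam I)^{-1} y. This also accepts
    negative lam, provided lam exceeds minus the smallest
    eigenvalue of X X^T, so the system stays solvable."""
    n = X.shape[0]
    return X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

# Evaluate held-out error over a grid that includes zero and
# negative penalties, the regime the abstract highlights.
lams = [-5.0, 0.0, 5.0, 50.0]
errors = {lam: float(np.mean((y_test - X_test @ ridge(X, y, lam)) ** 2))
          for lam in lams}
print(errors)
```

Whether the minimizing $\lambda$ on a given draw is zero, negative, or positive depends on the signal structure and noise level; the sketch only shows that the estimator and its test error are computable across the whole grid.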
