$L_{1}$ Regularization is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss and a penalty on the $L_{1}$ norm of the weights:
$$L_{new}\left(w\right) = L_{original}\left(w\right) + \lambda{||w||}_{1}$$
where $\lambda$ is a hyperparameter determining the strength of the penalty. In contrast to weight decay ($L_{2}$ regularization), $L_{1}$ regularization promotes sparsity; i.e. it drives some parameters to an optimal value of exactly zero. A sketch of one training step with this penalty is shown below.
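As an illustration, here is a minimal sketch of a single training step with an $L_{1}$ penalty, assuming PyTorch; the model, the data, and the penalty strength `lambda_l1` are hypothetical placeholders, not values from any particular paper:

```python
import torch
import torch.nn as nn

# Hypothetical setup: a small linear model on random regression data.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
lambda_l1 = 1e-3  # the lambda in the formula above (assumed value)

x, y = torch.randn(32, 10), torch.randn(32, 1)

optimizer.zero_grad()
primary_loss = criterion(model(x), y)

# L_new(w) = L_original(w) + lambda * ||w||_1
# The penalty is applied to the weights only; biases are typically excluded.
l1_norm = sum(p.abs().sum()
              for name, p in model.named_parameters()
              if "bias" not in name)
loss = primary_loss + lambda_l1 * l1_norm

loss.backward()
optimizer.step()
```

Note that plain (sub)gradient descent on this objective tends to push weights toward zero without landing exactly on it; when exact sparsity is required, proximal updates (soft-thresholding) are commonly used instead.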
Task | Papers | Share
---|---|---
Language Modelling | 6 | 4.48%
Language Modeling | 5 | 3.73%
Text to Speech | 4 | 2.99%
Speech Synthesis | 4 | 2.99%
Speech Recognition | 3 | 2.24%
regression | 3 | 2.24%
Translation | 3 | 2.24%
Feature Engineering | 3 | 2.24%
Time Series Analysis | 3 | 2.24%