$L_{1}$ Regularization is a regularization technique applied to the weights of a neural network. We minimize a loss function comprising both the primary loss and a penalty on the $L_{1}$ norm of the weights:
$$L_{new}\left(w\right) = L_{original}\left(w\right) + \lambda{||w||}_{1}$$
where $\lambda$ is a hyperparameter controlling the strength of the penalty. In contrast to weight decay ($L_{2}$ regularization), $L_{1}$ regularization promotes sparsity; i.e. some parameters have an optimal value of exactly zero.
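The sparsity-inducing effect can be demonstrated with a minimal sketch (not tied to any particular library): minimizing a least-squares loss plus an $L_{1}$ penalty via proximal gradient descent (ISTA). The synthetic data, the `soft_threshold` helper, and the chosen $\lambda$ are all illustrative assumptions; the key point is that the soft-thresholding step sets small weights exactly to zero.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the L1 norm: shrinks each weight toward zero
    # and sets weights with |w_i| <= t exactly to zero (the source of sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Illustrative synthetic problem: y = X @ w_true + noise,
# where only 2 of the 10 true coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[[1, 6]] = [2.0, -3.0]
y = X @ w_true + 0.01 * rng.normal(size=100)

lam = 0.5                                   # penalty strength (lambda), chosen for illustration
lr = len(y) / np.linalg.norm(X, 2) ** 2     # step size 1/L from the Lipschitz constant

w = np.zeros(10)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)       # gradient of the primary (least-squares) loss
    w = soft_threshold(w - lr * grad, lr * lam)  # proximal step for the L1 penalty

# Most coefficients end up exactly zero; the two true signals survive (shrunk).
print(np.count_nonzero(w), np.round(w, 2))
```

With a plain subgradient step the weights would merely hover near zero; the proximal (soft-thresholding) update is what produces exact zeros, which is why it is the standard way to optimize $L_{1}$-penalized objectives.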
Task | Papers | Share |
---|---|---|
Language Modelling | 5 | 6.02% |
Speech Synthesis | 4 | 4.82% |
Time Series Analysis | 3 | 3.61% |
BIG-bench Machine Learning | 3 | 3.61% |
Test | 3 | 3.61% |
Object Detection | 3 | 3.61% |
Speech Recognition | 2 | 2.41% |
Image Segmentation | 2 | 2.41% |
Medical Image Segmentation | 2 | 2.41% |