RMSNorm regularizes the summed inputs to a neuron in one layer according to their root mean square (RMS), giving the model a re-scaling invariance property and an implicit learning-rate adaptation ability. Because it drops the mean-centering step of LayerNorm, RMSNorm is computationally simpler and thus more efficient.
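As a minimal sketch of the computation described above, the following NumPy function normalizes each input vector by its RMS and applies a learnable gain; the `eps` term and function/parameter names are assumptions for numerical stability and illustration, not part of any specific library API:

```python
import numpy as np

def rms_norm(x, gain, eps=1e-8):
    # RMS over the last (feature) dimension: sqrt(mean(x_i^2))
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    # Re-scale to unit RMS, then apply the per-feature gain g_i
    return x / rms * gain

x = np.random.randn(2, 4).astype(np.float32)
g = np.ones(4, dtype=np.float32)  # gain initialized to 1
y = rms_norm(x, g)
```

Note that, unlike LayerNorm, no mean is subtracted and no bias is added, which is the source of both the efficiency gain and the re-scaling (but not re-centering) invariance.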
Source: Root Mean Square Layer Normalization
Task | Papers | Share |
---|---|---|
Language Modelling | 1 | 14.29% |
Arithmetic Reasoning | 1 | 14.29% |
Code Generation | 1 | 14.29% |
Math Word Problem Solving | 1 | 14.29% |
Multiple Choice Question Answering (MCQA) | 1 | 14.29% |
Multi-task Language Understanding | 1 | 14.29% |
Question Answering | 1 | 14.29% |