Linear Warmup With Linear Decay is a learning rate schedule in which the learning rate increases linearly for $n$ updates and then decays linearly for the remainder of training.
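A minimal sketch of the schedule, assuming a peak learning rate `base_lr`, a warmup length `warmup_steps` (the $n$ above), and a total training length `total_steps`; these names are illustrative, not taken from any particular library.

```python
def linear_warmup_linear_decay(step, warmup_steps, total_steps, base_lr):
    """Illustrative schedule: warm up linearly to base_lr over warmup_steps
    updates, then decay linearly to 0 at total_steps."""
    if step < warmup_steps:
        # Warmup phase: scale linearly from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: scale linearly from base_lr down to 0.
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```

In practice such a function is typically wrapped in a framework's lambda-style scheduler (e.g. passed as the multiplier function to a scheduler that rescales the optimizer's base learning rate each step).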
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 111 | 12.66% |
| Retrieval | 86 | 9.81% |
| Question Answering | 48 | 5.47% |
| Text Classification | 37 | 4.22% |
| Sentence | 35 | 3.99% |
| Large Language Model | 34 | 3.88% |
| Sentiment Analysis | 31 | 3.53% |
| NER | 19 | 2.17% |
| Text Generation | 18 | 2.05% |