UL2 is a unified framework for pre-training models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse denoising paradigms. UL2 also introduces the notion of mode switching, wherein downstream fine-tuning is associated with a specific pre-training scheme via a paradigm token prepended to the input.
Source: UL2: Unifying Language Learning Paradigms
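To make the objective concrete, below is a minimal Python sketch of how a single MoD training example could be constructed. The mode tokens `[R]`, `[S]`, `[X]` follow the paper's paradigm-token convention, but the span lengths, corruption rates, mixing weights, and the span-sampling scheme here are simplified illustrative assumptions, not the paper's exact mixture.

```python
import random

# T5-style sentinel tokens that stand in for masked spans.
SENTINELS = [f"<extra_id_{i}>" for i in range(100)]

def span_corrupt(tokens, mean_span, rate, rng):
    """Mask contiguous spans covering roughly `rate` of the tokens.

    Returns (inputs, targets): inputs keep unmasked tokens with sentinels
    in place of each masked span; targets list each sentinel followed by
    the tokens it replaced.
    """
    inputs, targets = [], []
    i, sid = 0, 0
    while i < len(tokens):
        # Start a span with probability rate / mean_span per position, so
        # the expected corrupted fraction is approximately `rate`.
        if sid < len(SENTINELS) and rng.random() < rate / mean_span:
            span = max(1, round(rng.gauss(mean_span, 1)))
            inputs.append(SENTINELS[sid])
            targets.append(SENTINELS[sid])
            targets.extend(tokens[i:i + span])
            i += span
            sid += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

def prefix_lm(tokens, rng):
    """S-denoising: predict a suffix from an uncorrupted prefix."""
    split = rng.randint(1, max(1, len(tokens) - 1))
    return tokens[:split], tokens[split:]

def mod_example(tokens, rng):
    """Sample one denoiser, corrupt the sequence, prepend its mode token."""
    mode, corrupt = rng.choice([
        ("[R]", lambda t: span_corrupt(t, mean_span=3, rate=0.15, rng=rng)),   # regular
        ("[X]", lambda t: span_corrupt(t, mean_span=32, rate=0.50, rng=rng)),  # extreme
        ("[S]", lambda t: prefix_lm(t, rng)),                                  # sequential
    ])
    inputs, targets = corrupt(tokens)
    return [mode] + inputs, targets

rng = random.Random(0)
toks = "the quick brown fox jumps over the lazy dog again and again".split()
x, y = mod_example(toks, rng)
print(x, "->", y)
```

At fine-tuning or inference time, the same paradigm token that matches the desired behavior (e.g. `[S]` for open-ended generation) would be prepended to activate the corresponding pre-training mode.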
Task | Papers | Share
---|---|---
Question Answering | 5 | 14.29%
Retrieval | 3 | 8.57%
Language Modeling | 2 | 5.71%
Language Modelling | 2 | 5.71%
Relation Extraction | 2 | 5.71%
Natural Language Inference | 2 | 5.71%
Decoder | 1 | 2.86%
Natural Language Understanding | 1 | 2.86%
Named Entity Recognition (NER) | 1 | 2.86%