Pattern-Exploiting Training is a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set.
In the case of PET for sentiment classification: first, a number of patterns encoding some form of task description are created to convert training examples into cloze questions, and a pretrained language model is finetuned for each pattern; second, the ensemble of trained models annotates unlabeled data with soft labels; third, a classifier is trained on the resulting soft-labeled dataset.
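The three steps above can be sketched in miniature as follows. This is an illustrative toy, not the paper's implementation: the pattern strings, the verbalizer tokens, and the keyword-based `mock_mlm_token_probs` scorer are all hypothetical stand-ins for finetuned masked language models such as BERT.

```python
# Toy sketch of the PET pipeline for sentiment classification.
# Assumption: mock_mlm_token_probs stands in for a pattern-specific
# finetuned masked LM; a real setup would query e.g. BERT at [MASK].

# Step 1 — patterns: each converts an input text x into a cloze question.
PATTERNS = [
    lambda x: f"{x} It was [MASK].",
    lambda x: f"{x} All in all, it was [MASK].",
]

# Verbalizer: maps each class label to a token the MLM predicts at [MASK].
VERBALIZER = {"positive": "great", "negative": "terrible"}

def mock_mlm_token_probs(cloze: str) -> dict:
    """Stand-in for a finetuned MLM: returns a probability for each
    verbalizer token at the [MASK] position (here purely keyword-based)."""
    pos = 2.0 if ("love" in cloze or "great" in cloze) else 0.5
    neg = 2.0 if ("bad" in cloze or "boring" in cloze) else 0.5
    total = pos + neg
    return {"great": pos / total, "terrible": neg / total}

def soft_label(x: str) -> dict:
    """Step 2 — ensemble: average the per-pattern class distributions
    to assign a soft label to an unlabeled example x."""
    scores = {label: 0.0 for label in VERBALIZER}
    for pattern in PATTERNS:
        probs = mock_mlm_token_probs(pattern(x))
        for label, token in VERBALIZER.items():
            scores[label] += probs[token]
    return {label: s / len(PATTERNS) for label, s in scores.items()}

# Step 3 — the resulting soft labels would supervise a standard classifier.
unlabeled = ["I love this movie", "So boring and bad"]
soft_labels = [soft_label(x) for x in unlabeled]
```

In a real run, each pattern's MLM is finetuned on the small labeled set before the ensemble annotates the unlabeled pool, and the final classifier is trained with a distillation-style cross-entropy against the averaged soft labels.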
Source: Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Task | Papers | Share |
---|---|---|
Text Classification | 3 | 15.79% |
Few-Shot Learning | 2 | 10.53% |
Few-Shot Text Classification | 2 | 10.53% |
Language Modelling | 2 | 10.53% |
Natural Language Inference | 2 | 10.53% |
Artifact Detection | 1 | 5.26% |
Blood Detection | 1 | 5.26% |
Color Normalization | 1 | 5.26% |
Damaged Tissue Detection | 1 | 5.26% |