DistilBERT is a small, fast, cheap, and light Transformer model based on the BERT architecture. Knowledge distillation is performed during the pre-training phase to reduce the size of a BERT model by 40%. To leverage the inductive biases learned by the larger model during pre-training, the authors introduce a triple loss combining language modeling, distillation, and cosine-distance losses.
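The following is a minimal PyTorch sketch of such a triple loss, not the paper's exact implementation: all tensor names, the temperature value, and the equal weighting of the three terms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_triple_loss(student_logits, teacher_logits, mlm_labels,
                             student_hidden, teacher_hidden, temperature=2.0):
    """Illustrative combination of the three DistilBERT pre-training losses.

    student_logits / teacher_logits: (tokens, vocab_size)
    mlm_labels: (tokens,) with -100 at unmasked positions
    student_hidden / teacher_hidden: (tokens, hidden_dim)
    """
    t = temperature

    # 1) Distillation loss: KL divergence between temperature-softened
    #    teacher and student distributions over the vocabulary.
    loss_ce = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t ** 2)

    # 2) Masked language modeling loss on the student's own predictions
    #    (positions labeled -100 are ignored).
    loss_mlm = F.cross_entropy(student_logits, mlm_labels, ignore_index=-100)

    # 3) Cosine embedding loss aligning the directions of the student's
    #    and teacher's hidden states.
    target = torch.ones(student_hidden.size(0), device=student_hidden.device)
    loss_cos = F.cosine_embedding_loss(student_hidden, teacher_hidden, target)

    # Equal weighting is an assumption; the paper tunes these coefficients.
    return loss_ce + loss_mlm + loss_cos
```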
Source: DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
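As a usage note, the pretrained model can be loaded through the Hugging Face `transformers` library; the checkpoint name below is the standard uncased base variant, and the printed hidden size reflects DistilBERT's 768-dimensional representations.

```python
from transformers import AutoModel, AutoTokenizer

# "distilbert-base-uncased" is the standard checkpoint on the Hugging Face
# Hub; cased and multilingual variants follow the same pattern.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("DistilBERT is smaller, faster, cheaper and lighter.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 768)
```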
| Task | Papers | Share |
|---|---|---|
| Sentiment Analysis | 16 | 7.96% |
| Language Modelling | 15 | 7.46% |
| Classification | 14 | 6.97% |
| Question Answering | 12 | 5.97% |
| Text Classification | 12 | 5.97% |
| Model Compression | 6 | 2.99% |
| Quantization | 6 | 2.99% |
| General Classification | 6 | 2.99% |
| Hate Speech Detection | 4 | 1.99% |