ALBERT is a Transformer architecture based on BERT but with far fewer parameters. It achieves this through two parameter reduction techniques. The first is a factorized embedding parameterization. By decomposing the large vocabulary embedding matrix into two smaller matrices, the size of the hidden layers is decoupled from the size of the vocabulary embeddings. This makes it easier to grow the hidden size without significantly increasing the parameter count of the vocabulary embeddings. The second technique is cross-layer parameter sharing, which prevents the parameter count from growing with the depth of the network.
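A minimal sketch of these two techniques, assuming PyTorch (module and variable names here are illustrative, not the reference implementation): the factorized embedding maps tokens into a small space of size E and then projects to the hidden size H, so embedding parameters grow as V·E + E·H instead of V·H; cross-layer sharing reuses one Transformer layer's weights at every depth.

```python
import torch
import torch.nn as nn

V, E, H = 30000, 128, 768  # vocab size, embedding size, hidden size (ALBERT-base values)

class FactorizedEmbedding(nn.Module):
    """Factorized embedding: V x E lookup followed by an E x H projection."""
    def __init__(self, vocab_size: int, embedding_size: int, hidden_size: int):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, embedding_size)  # V x E
        self.projection = nn.Linear(embedding_size, hidden_size)         # E x H

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.projection(self.word_embeddings(token_ids))

factorized = sum(p.numel() for p in FactorizedEmbedding(V, E, H).parameters())
unfactorized = V * H  # BERT-style V x H embedding matrix
print(f"factorized: {factorized:,} vs unfactorized: {unfactorized:,}")
# roughly 3.9M parameters vs roughly 23M for the unfactorized matrix

# Cross-layer parameter sharing (also a sketch): the same layer object is
# applied at every depth, so adding layers adds no new parameters.
shared_layer = nn.TransformerEncoderLayer(d_model=H, nhead=12, batch_first=True)

def encode(hidden_states: torch.Tensor, num_layers: int = 12) -> torch.Tensor:
    for _ in range(num_layers):
        hidden_states = shared_layer(hidden_states)  # same weights every iteration
    return hidden_states
```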
Additionally, ALBERT utilises a self-supervised loss for sentence-order prediction (SOP). SOP primarily focuses on inter-sentence coherence and is designed to address the ineffectiveness of the next sentence prediction (NSP) loss proposed in the original BERT.
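A brief illustration of how SOP training pairs can be built (a sketch at the level described in the paper, with hypothetical function names): two consecutive segments from the same document form a positive example, and swapping their order forms a negative example, so the model must learn coherence rather than topic cues.

```python
import random

def make_sop_example(segment_a: str, segment_b: str) -> tuple[str, str, int]:
    """Return (first, second, label): label 1 = original order, 0 = swapped."""
    if random.random() < 0.5:
        return segment_a, segment_b, 1   # consecutive segments in order (positive)
    return segment_b, segment_a, 0       # same segments, order swapped (negative)

first, second, label = make_sop_example(
    "ALBERT shares parameters across layers.",
    "This keeps the model small as depth grows.",
)
print(label, "|", first, "||", second)
```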
Source: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 29 | 9.93% |
| Sentence | 23 | 7.88% |
| Text Classification | 15 | 5.14% |
| Question Answering | 13 | 4.45% |
| Sentiment Analysis | 13 | 4.45% |
| Named Entity Recognition (NER) | 10 | 3.42% |
| NER | 8 | 2.74% |
| Reading Comprehension | 7 | 2.40% |
| Natural Language Understanding | 6 | 2.05% |