GLM is a bilingual (English and Chinese) pre-trained transformer-based language model that follows the traditional decoder-only autoregressive language modeling architecture. It leverages autoregressive blank infilling as its training objective.
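To make the training objective concrete, below is a minimal sketch of how an autoregressive blank-infilling example can be constructed: a span of the input is replaced with a mask token (Part A), and the model learns to generate the removed span autoregressively (Part B). The special token names (`[MASK]`, `[sop]`, `[eop]`) follow the GLM paper; the word-level tokenization, single-span corruption, and fixed span length here are simplified assumptions for illustration, not the actual GLM-130B data pipeline.

```python
# Sketch of autoregressive blank-infilling example construction (assumptions:
# word-level tokens, one masked span of fixed length, simplified sampling).
import random

MASK, SOP, EOP = "[MASK]", "[sop]", "[eop]"

def make_blank_infilling_example(tokens, span_len=3, seed=0):
    """Corrupt one span of `tokens` and return (context, target) parts.

    Part A (context): the original sequence with the span replaced by [MASK].
    Part B (target):  the masked span, wrapped in [sop] ... [eop], which the
                      model predicts token by token (autoregressively).
    """
    rng = random.Random(seed)
    start = rng.randrange(0, max(1, len(tokens) - span_len))
    span = tokens[start:start + span_len]
    part_a = tokens[:start] + [MASK] + tokens[start + span_len:]
    part_b = [SOP] + span + [EOP]
    return part_a, part_b

if __name__ == "__main__":
    text = "GLM is trained with autoregressive blank infilling".split()
    context, target = make_blank_infilling_example(text)
    print("Part A (context):", context)
    print("Part B (to predict):", target)
```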
Source: GLM-130B: An Open Bilingual Pre-trained Model
| Task | Papers | Share |
|---|---|---|
| Language Modelling | 6 | 12.24% |
| Quantization | 3 | 6.12% |
| Retrieval | 2 | 4.08% |
| Diversity | 2 | 4.08% |
| Question Answering | 2 | 4.08% |
| Denoising | 2 | 4.08% |
| Large Language Model | 2 | 4.08% |
| parameter-efficient fine-tuning | 2 | 4.08% |
| Semantic Segmentation | 2 | 4.08% |