Language Models

GLM

Introduced by Zeng et al. in GLM-130B: An Open Bilingual Pre-trained Model

GLM is a bilingual (English and Chinese) pre-trained transformer-based language model that follows the traditional decoder-only autoregressive language-modeling architecture. It leverages autoregressive blank infilling as its training objective.

Source: GLM-130B: An Open Bilingual Pre-trained Model
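The sketch below illustrates the idea behind autoregressive blank infilling: spans of the input are replaced with a mask token to form a bidirectional context (Part A), and the blanked spans become an autoregressive generation target (Part B). This is a minimal, simplified illustration; the special-token names, the function blank_infilling_example, and the span-sampling scheme are assumptions for demonstration, not the exact GLM-130B preprocessing pipeline.

import random

MASK, SOP, EOP = "[MASK]", "[sop]", "[eop]"  # simplified special tokens (assumed names)

def blank_infilling_example(tokens, num_spans=2, max_span_len=3, seed=0):
    """Return (part_a, part_b): the corrupted context and the autoregressive
    infilling target, in the spirit of GLM's training objective."""
    rng = random.Random(seed)
    # Sample candidate span start positions (simplified: sorted random picks).
    starts = sorted(rng.sample(range(len(tokens)), num_spans))
    part_a, part_b, cursor = [], [], 0
    for start in starts:
        if start < cursor:                    # skip overlapping spans
            continue
        length = rng.randint(1, max_span_len)
        end = min(start + length, len(tokens))
        part_a.extend(tokens[cursor:start])   # keep unmasked context
        part_a.append(MASK)                   # one [MASK] per blanked span
        # Each blanked span is generated left-to-right, delimited by [sop]/[eop].
        part_b.extend([SOP, *tokens[start:end], EOP])
        cursor = end
    part_a.extend(tokens[cursor:])
    return part_a, part_b

if __name__ == "__main__":
    toks = "GLM is a bilingual pre-trained language model".split()
    a, b = blank_infilling_example(toks)
    print("Part A (bidirectional context):", a)
    print("Part B (autoregressive target):", b)

During training, the model attends bidirectionally over Part A and predicts Part B token by token, which lets a single objective cover both understanding-style and generation-style tasks.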


Tasks


Task                              Papers   Share
Language Modelling                6        12.24%
Quantization                      3        6.12%
Retrieval                         2        4.08%
Diversity                         2        4.08%
Question Answering                2        4.08%
Denoising                         2        4.08%
Large Language Model              2        4.08%
parameter-efficient fine-tuning   2        4.08%
Semantic Segmentation             2        4.08%
