Language Models

LLaMA

Introduced by Touvron et al. in LLaMA: Open and Efficient Foundation Language Models

LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture, incorporating several improvements that were subsequently proposed. The main differences from the original architecture are listed below.

  • The RMSNorm normalizing function is used to improve training stability, normalizing the input of each transformer sub-layer instead of the output.
  • The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.
  • Absolute positional embeddings are removed; instead, rotary positional embeddings (RoPE) are applied at each layer of the network.
Source: LLaMA: Open and Efficient Foundation Language Models
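The three architectural changes above can be sketched in a few lines of NumPy. This is a minimal illustration of each operation in isolation (the function names, shapes, and the non-interleaved RoPE layout are simplifications for clarity, not the exact implementation from the LLaMA codebase):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: rescale by the root-mean-square of the activations.
    # Unlike LayerNorm, no mean subtraction and no bias term.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def swiglu(x, W, V):
    # SwiGLU: a Swish (SiLU) gate applied elementwise to a second
    # linear projection of the same input.
    def swish(z):
        return z / (1.0 + np.exp(-z))  # Swish with beta = 1
    return swish(x @ W) * (x @ V)

def rope(x, base=10000.0):
    # Rotary positional embedding: rotate pairs of feature dimensions
    # by an angle proportional to the token's position. Here the pairs
    # are taken as (first half, second half) for simplicity.
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)            # theta_i = base^(-2i/dim)
    angles = np.arange(seq_len)[:, None] * freqs[None, :]
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

In a LLaMA-style block, `rms_norm` is applied to the sub-layer *input* (pre-normalization), `swiglu` forms the feed-forward layer, and `rope` is applied to the query and key vectors inside attention. Note that RoPE is a pure rotation, so it preserves the norm of each vector while encoding its position.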


Tasks


Task Papers Share
Language Modelling 106 13.55%
Large Language Model 60 7.67%
Quantization 35 4.48%
Question Answering 34 4.35%
In-Context Learning 23 2.94%
Text Generation 23 2.94%
Code Generation 21 2.69%
Instruction Following 19 2.43%
Retrieval 17 2.17%

