LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture, incorporating various improvements that were subsequently proposed. The main differences from the original architecture are:

- Pre-normalization with RMSNorm: the input of each transformer sub-layer is normalized, rather than the output.
- SwiGLU activation: the feed-forward network uses the SwiGLU activation function instead of ReLU.
- Rotary embeddings (RoPE): absolute positional embeddings are replaced with rotary positional embeddings applied at each layer.
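One of these changes, RMSNorm pre-normalization, is simple enough to sketch directly. A minimal NumPy version (the function name and toy inputs are illustrative, not from the LLaMA codebase):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    """Root-mean-square layer normalization.

    Unlike standard LayerNorm, RMSNorm skips mean-centering and the
    bias term: the input is rescaled by its root mean square along the
    last axis, then multiplied by a learned per-feature gain.
    """
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

# toy usage: normalize a single 4-dimensional activation vector
x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.ones(4)          # learned gain, initialized to 1
y = rms_norm(x, w)      # output has root mean square ~1
```

In LLaMA this normalization is applied to the *input* of each attention and feed-forward sub-layer (pre-normalization), which improves training stability compared with normalizing the output.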
The tasks most commonly addressed by papers that use LLaMA:

Task | Papers | Share |
---|---|---|
Language Modelling | 106 | 13.55% |
Large Language Model | 60 | 7.67% |
Quantization | 35 | 4.48% |
Question Answering | 34 | 4.35% |
In-Context Learning | 23 | 2.94% |
Text Generation | 23 | 2.94% |
Code Generation | 21 | 2.69% |
Instruction Following | 19 | 2.43% |
Retrieval | 17 | 2.17% |