LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture, incorporating several improvements that were proposed after the original design. The main differences from the original architecture are:

- **Pre-normalization** (as in GPT-3): to improve training stability, the input of each transformer sub-layer is normalized instead of the output, using the RMSNorm normalizing function.
- **SwiGLU activation** (as in PaLM): the ReLU non-linearity in the feed-forward network is replaced with the SwiGLU activation function.
- **Rotary embeddings** (as in GPT-Neo): absolute positional embeddings are removed in favor of rotary positional embeddings (RoPE), applied to the queries and keys at each layer of the network (see the sketch after this list).
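Below is a minimal PyTorch sketch of these components, not LLaMA's reference implementation: the module names (`RMSNorm`, `SwiGLUFeedForward`, `w1`/`w2`/`w3`), the free `hidden_dim` argument, and the `rotary_embed` helper are illustrative, and the RoPE pairing convention shown is the half-split (GPT-NeoX-style) variant rather than the complex-number interleaving used in LLaMA's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    """Root-mean-square layer norm (Zhang & Sennrich, 2019).

    LLaMA applies this to the *input* of each transformer sub-layer
    (pre-normalization) instead of normalizing the output.
    """

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale by the reciprocal RMS of the features; unlike LayerNorm,
        # there is no mean subtraction and no bias term.
        rms = torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)


class SwiGLUFeedForward(nn.Module):
    """Feed-forward block with the SwiGLU activation (Shazeer, 2020)
    replacing ReLU. `hidden_dim` is left as a free parameter here."""

    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)  # gate projection
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)  # down projection
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)  # up projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: the silu-activated gate multiplies the up projection.
        return self.w2(F.silu(self.w1(x)) * self.w3(x))


def rotary_embed(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary positional embeddings (RoPE) to queries or keys of
    shape (seq_len, n_heads, head_dim). Illustrative half-split pairing."""
    seq_len, _, head_dim = x.shape
    half = head_dim // 2
    # Per-dimension rotation frequencies, geometric in the feature index.
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs
    cos = angles.cos()[:, None, :]  # broadcast over heads
    sin = angles.sin()[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```

In LLaMA itself, the feed-forward hidden dimension is 2/3 · 4d rather than the usual 4d, and `rotary_embed` would be applied to the query and key vectors inside each attention layer before the attention scores are computed.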
The most common tasks among papers that use LLaMA, with each task's share of those papers:

| Task | Papers | Share |
|---|---|---|
| Language Modelling | 62 | 7.89% |
| Language Modeling | 60 | 7.63% |
| Large Language Model | 52 | 6.62% |
| Quantization | 29 | 3.69% |
| RAG | 27 | 3.44% |
| Question Answering | 24 | 3.05% |
| Retrieval | 22 | 2.80% |
| Code Generation | 17 | 2.16% |
| Text Generation | 16 | 2.04% |