DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
We present DeepSeek-VL2, an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL, through two major upgrades. For the vision component, we incorporate a dynamic tiling vision encoding strategy designed for processing high-resolution images with varying aspect ratios. For the language component, we leverage DeepSeekMoE models with the Multi-head Latent Attention mechanism, which compresses the Key-Value cache into latent vectors to enable efficient inference and high throughput. Trained on an improved vision-language dataset, DeepSeek-VL2 demonstrates superior capabilities across various tasks, including but not limited to visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. Our model series is composed of three variants: DeepSeek-VL2-Tiny, DeepSeek-VL2-Small, and DeepSeek-VL2, with 1.0B, 2.8B, and 4.5B activated parameters respectively. DeepSeek-VL2 achieves competitive or state-of-the-art performance with similar or fewer activated parameters compared to existing open-source dense and MoE-based models. Code and pre-trained models are publicly accessible at https://github.com/deepseek-ai/DeepSeek-VL2.
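Dynamic tiling strategies of this kind typically select a tile grid whose aspect ratio best matches the input image, resize the image to fill that grid, and feed the vision encoder one global thumbnail plus the local tiles. Below is a minimal sketch of the idea; the tile resolution, grid budget, and all function names are our assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative dynamic tiling sketch (tile size, grid budget, and names
# are assumptions, not DeepSeek-VL2's actual code).
from PIL import Image

TILE = 384  # assumed base tile resolution

def best_grid(width, height, max_tiles=9):
    """Pick the (cols, rows) grid whose aspect ratio best matches the image."""
    target = width / height
    candidates = [(c, r) for c in range(1, max_tiles + 1)
                  for r in range(1, max_tiles + 1) if c * r <= max_tiles]
    return min(candidates, key=lambda cr: abs(cr[0] / cr[1] - target))

def tile_image(img: Image.Image, max_tiles=9):
    """Return one global thumbnail plus local tiles covering the full image."""
    cols, rows = best_grid(*img.size, max_tiles)
    resized = img.resize((cols * TILE, rows * TILE))
    tiles = [resized.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE))
             for r in range(rows) for c in range(cols)]
    thumbnail = img.resize((TILE, TILE))  # global view for coarse context
    return [thumbnail] + tiles
```

Matching the grid to the image's aspect ratio keeps tiles close to the encoder's native resolution, which avoids the distortion a single fixed-size resize would introduce for wide or tall images.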
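Similarly, Multi-head Latent Attention reduces inference memory by caching one low-dimensional latent per token rather than full per-head keys and values, up-projecting to K and V at attention time. A minimal PyTorch sketch of that compression, with dimensions and names that are illustrative assumptions rather than DeepSeek's code:

```python
# Sketch of MLA-style latent KV caching (dimensions and names are
# illustrative assumptions, not DeepSeek's implementation).
import torch
import torch.nn as nn

class LatentKVCache(nn.Module):
    def __init__(self, d_model=2048, d_latent=512, n_heads=16, d_head=128):
        super().__init__()
        self.down = nn.Linear(d_model, d_latent, bias=False)        # compress
        self.up_k = nn.Linear(d_latent, n_heads * d_head, bias=False)
        self.up_v = nn.Linear(d_latent, n_heads * d_head, bias=False)
        self.n_heads, self.d_head = n_heads, d_head

    def forward(self, h, cache=None):
        # Cache only the small latent (d_latent floats per token) instead of
        # full per-head keys and values (2 * n_heads * d_head per token).
        latent = self.down(h)  # (batch, seq, d_latent)
        cache = latent if cache is None else torch.cat([cache, latent], dim=1)
        k = self.up_k(cache).unflatten(-1, (self.n_heads, self.d_head))
        v = self.up_v(cache).unflatten(-1, (self.n_heads, self.d_head))
        return k, v, cache
```

With the assumed dimensions, the cache stores 512 values per token instead of 2 × 16 × 128 = 4096, roughly an 8x reduction in KV memory, which is what enables the high decoding throughput the abstract describes.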
Results from the Paper
Task | Dataset | Model | Metric Name | Metric Value | Global Rank
---|---|---|---|---|---
Referring Expression Comprehension | RefCOCO+ | DeepSeek-VL2 | Val | 91.2 | # 1
Referring Expression Comprehension | RefCOCO+ | DeepSeek-VL2 | Test A | 94.9 | # 1
Referring Expression Comprehension | RefCOCO+ | DeepSeek-VL2 | Test B | 87.4 | # 1
Referring Expression Comprehension | RefCOCO | DeepSeek-VL2 | Val | 95.1 | # 1
Referring Expression Comprehension | RefCOCO | DeepSeek-VL2 | Test A | 96.7 | # 1
Referring Expression Comprehension | RefCOCO | DeepSeek-VL2 | Test B | 95.1 | # 1
Referring Expression Comprehension | RefCOCOg-test | DeepSeek-VL2 | Accuracy | 92.9 | # 1
Referring Expression Comprehension | RefCOCOg-val | DeepSeek-VL2 | Accuracy | 92.8 | # 1