LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model

How to efficiently transform large language models (LLMs) into instruction followers has recently become a popular research direction, while training LLMs for multi-modal reasoning remains less explored. Although the recent LLaMA-Adapter demonstrates the potential to handle visual inputs with LLMs, it still cannot generalize well to open-ended visual instructions and lags behind GPT-4. In this paper, we present LLaMA-Adapter V2, a parameter-efficient visual instruction model. Specifically, we first augment LLaMA-Adapter by unlocking more learnable parameters (e.g., norm, bias and scale), which distribute the instruction-following ability across the entire LLaMA model beyond the adapters. Secondly, we propose an early fusion strategy that feeds visual tokens only into the early LLM layers, contributing to better visual knowledge incorporation. Thirdly, a joint training paradigm of image-text pairs and instruction-following data is introduced by optimizing disjoint groups of learnable parameters. This strategy effectively alleviates the interference between the two tasks of image-text alignment and instruction following, and achieves strong multi-modal reasoning with only a small-scale image-text and instruction dataset. During inference, we incorporate additional expert models (e.g., captioning/OCR systems) into LLaMA-Adapter to further enhance its image understanding capability without incurring training costs. Compared to the original LLaMA-Adapter, our LLaMA-Adapter V2 can perform open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA. The newly designed framework also exhibits stronger language-only instruction-following capabilities and even excels in chat interactions. Our code and models are available at https://github.com/ZrrSkywalker/LLaMA-Adapter.
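To make the parameter-unlocking and disjoint-group training ideas above concrete, here is a minimal PyTorch sketch: it freezes a toy transformer stack, re-enables only norm, bias, and newly added scale parameters, and then builds two disjoint optimizer groups, one per training objective. `ToyBlock` and the particular group split are illustrative assumptions, not the released LLaMA-Adapter V2 implementation.

```python
# Minimal sketch (not the authors' code) of (1) unlocking bias/norm/scale
# parameters of an otherwise frozen model and (2) optimizing two disjoint
# parameter groups for image-text alignment vs. instruction following.
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Stand-in for one frozen LLaMA transformer block (illustrative only)."""
    def __init__(self, dim=32):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.linear = nn.Linear(dim, dim, bias=True)
        # newly introduced learnable scale on the block output
        self.scale = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        return self.scale * self.linear(self.norm(x))

model = nn.Sequential(*[ToyBlock() for _ in range(4)])

# Freeze everything, then unlock only norm, bias, and scale parameters.
for name, p in model.named_parameters():
    p.requires_grad = any(k in name for k in ("norm", "bias", "scale"))

# Disjoint groups (a hypothetical split for illustration): scales/biases
# updated on image-text pairs, norms updated on instruction-following data.
visual_group = [p for n, p in model.named_parameters()
                if p.requires_grad and ("scale" in n or "bias" in n)]
instruct_group = [p for n, p in model.named_parameters()
                  if p.requires_grad and "norm" in n]

opt_visual = torch.optim.AdamW(visual_group, lr=1e-4)
opt_instruct = torch.optim.AdamW(instruct_group, lr=1e-4)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```

Because the two optimizers touch non-overlapping parameter sets, gradient updates from the image-text objective cannot overwrite the instruction-following parameters, which is the interference the joint training paradigm is designed to avoid.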

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Video Question Answering | ActivityNet-QA | LLaMA Adapter V2 | Accuracy | 34.2 | #28 |
| Video Question Answering | ActivityNet-QA | LLaMA Adapter V2 | Confidence Score | 2.7 | #8 |
| Zero-Shot Video Question Answer | ActivityNet-QA | LLaMA Adapter | Confidence Score | 2.7 | #13 |
| Zero-Shot Video Question Answer | ActivityNet-QA | LLaMA Adapter | Accuracy | 34.2 | #14 |
| Visual Question Answering (VQA) | InfiMM-Eval | LLaMA-Adapter V2 | Overall score | 30.46 | #6 |
| Visual Question Answering (VQA) | InfiMM-Eval | LLaMA-Adapter V2 | Deductive | 28.7 | #7 |
| Visual Question Answering (VQA) | InfiMM-Eval | LLaMA-Adapter V2 | Abductive | 46.12 | #5 |
| Visual Question Answering (VQA) | InfiMM-Eval | LLaMA-Adapter V2 | Analogical | 22.08 | #5 |
| Visual Question Answering (VQA) | InfiMM-Eval | LLaMA-Adapter V2 | Params | 7B | #1 |
| Visual Question Answering | MM-Vet | LLaMA-Adapter v2-7B | GPT-4 score | 31.4±0.1 | #70 |
| Visual Question Answering | MM-Vet | LLaMA-Adapter v2-7B | Params | 7B | #1 |
| Zero-Shot Video Question Answer | MSRVTT-QA | LLaMA Adapter-7B | Accuracy | 43.8 | #18 |
| Zero-Shot Video Question Answer | MSRVTT-QA | LLaMA Adapter-7B | Confidence Score | 2.7 | #15 |
| Zero-Shot Video Question Answer | MSVD-QA | LLaMA Adapter-7B | Accuracy | 54.9 | #14 |
| Zero-Shot Video Question Answer | MSVD-QA | LLaMA Adapter-7B | Confidence Score | 3.1 | #12 |
| Video-based Generative Performance Benchmarking (Correctness of Information) | VideoInstruct | LLaMA Adapter | gpt-score | 2.03 | #10 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA Adapter | Correctness of Information | 2.03 | #14 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA Adapter | Detail Orientation | 2.32 | #14 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA Adapter | Contextual Understanding | 2.30 | #14 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA Adapter | Temporal Understanding | 1.98 | #12 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA Adapter | Consistency | 2.15 | #14 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA Adapter | Mean | 2.16 | #14 |
| Video-based Generative Performance Benchmarking (Temporal Understanding) | VideoInstruct | LLaMA Adapter | gpt-score | 1.98 | #8 |
| Video-based Generative Performance Benchmarking (Detail Orientation) | VideoInstruct | LLaMA Adapter | gpt-score | 2.32 | #10 |
| Video-based Generative Performance Benchmarking (Contextual Understanding) | VideoInstruct | LLaMA Adapter | gpt-score | 2.30 | #10 |
| Video-based Generative Performance Benchmarking (Consistency) | VideoInstruct | LLaMA Adapter | gpt-score | 2.15 | #10 |