LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models

28 Nov 2023  ·  Yanwei Li, Chengyao Wang, Jiaya Jia ·

In this work, we present LLaMA-VID, a novel method that tackles the token generation challenge in Vision Language Models (VLMs) for video and image understanding. Current VLMs, while proficient at tasks such as image captioning and visual question answering, face heavy computational burdens when processing long videos because of excessive visual tokens. LLaMA-VID addresses this issue by representing each frame with two distinct tokens: a context token and a content token. The context token encodes the overall image context conditioned on the user input, whereas the content token encapsulates the visual cues in each frame. This dual-token strategy significantly reduces the token load of long videos while preserving critical information. LLaMA-VID thereby enables existing frameworks to support hour-long videos and pushes their upper limit with a single extra context token per frame. It is shown to surpass previous methods on most video- and image-based benchmarks. Code is available at https://github.com/dvlab-research/LLaMA-VID
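The dual-token idea can be illustrated with a minimal sketch. The function below is a simplified, hypothetical rendition (not the paper's actual implementation): the context token is produced by attending over a frame's patch features with the user's text query, and the content token is a query-agnostic summary of the same features (mean pooling here). All names and the pooling choice are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frame_to_two_tokens(visual_feats, text_query):
    """Compress one frame into (context_token, content_token).

    visual_feats: (N, D) patch embeddings of the frame
    text_query:   (D,)   embedding of the user instruction
    Both aggregation schemes here are illustrative stand-ins.
    """
    d = visual_feats.shape[1]
    # Context token: text-conditioned aggregation -- attention weights
    # computed from the user query over the frame's patches.
    attn = softmax(visual_feats @ text_query / np.sqrt(d))
    context_token = attn @ visual_feats            # (D,)
    # Content token: query-agnostic summary of the frame (mean pool).
    content_token = visual_feats.mean(axis=0)      # (D,)
    return context_token, content_token

# Toy usage: a frame with 16 patches and 64-dim features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 64))
query = rng.standard_normal(64)
ctx, cnt = frame_to_two_tokens(feats, query)
```

Whatever the aggregation details, the key property is that each frame contributes only two D-dimensional tokens to the language model, so an hour-long video stays within a feasible context length.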

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Video Question Answering | ActivityNet-QA | LLaMA-VID-13B (2 Token) | Accuracy | 47.5 | #2 |
| Video Question Answering | ActivityNet-QA | LLaMA-VID-13B (2 Token) | Confidence Score | 3.3 | #2 |
| Video Question Answering | ActivityNet-QA | LLaMA-VID-7B (2 Token) | Accuracy | 47.4 | #3 |
| Video Question Answering | ActivityNet-QA | LLaMA-VID-7B (2 Token) | Confidence Score | 3.3 | #2 |
| Zero-Shot Video Question Answer | ActivityNet-QA | LLaMA-VID-13B (2 Token) | Accuracy | 47.5 | #2 |
| Zero-Shot Video Question Answer | ActivityNet-QA | LLaMA-VID-13B (2 Token) | Confidence Score | 3.3 | #2 |
| Zero-Shot Video Question Answer | ActivityNet-QA | LLaMA-VID-7B (2 Token) | Accuracy | 47.4 | #3 |
| Zero-Shot Video Question Answer | ActivityNet-QA | LLaMA-VID-7B (2 Token) | Confidence Score | 3.3 | #2 |
| Zero-Shot Video Question Answer | MSRVTT-QA | LLaMA-VID-13B (2 Token) | Accuracy | 58.9 | #3 |
| Zero-Shot Video Question Answer | MSRVTT-QA | LLaMA-VID-13B (2 Token) | Confidence Score | 3.3 | #2 |
| Zero-Shot Video Question Answer | MSRVTT-QA | LLaMA-VID-7B (2 Token) | Accuracy | 57.7 | #4 |
| Zero-Shot Video Question Answer | MSRVTT-QA | LLaMA-VID-7B (2 Token) | Confidence Score | 3.2 | #6 |
| Zero-Shot Video Question Answer | MSVD-QA | LLaMA-VID-13B (2 Token) | Accuracy | 70.0 | #3 |
| Zero-Shot Video Question Answer | MSVD-QA | LLaMA-VID-13B (2 Token) | Confidence Score | 3.7 | #3 |
| Zero-Shot Video Question Answer | MSVD-QA | LLaMA-VID-7B (2 Token) | Accuracy | 69.7 | #5 |
| Zero-Shot Video Question Answer | MSVD-QA | LLaMA-VID-7B (2 Token) | Confidence Score | 3.7 | #3 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-7B (2 Token) | Correctness of Information | 2.96 | #3 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-7B (2 Token) | Detail Orientation | 3.00 | #3 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-7B (2 Token) | Contextual Understanding | 3.53 | #2 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-7B (2 Token) | Temporal Understanding | 2.46 | #5 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-7B (2 Token) | Consistency | 2.51 | #4 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-7B (2 Token) | Mean | 2.89 | #4 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-13B (2 Token) | Correctness of Information | 3.07 | #1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-13B (2 Token) | Detail Orientation | 3.05 | #2 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-13B (2 Token) | Contextual Understanding | 3.60 | #1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-13B (2 Token) | Temporal Understanding | 2.58 | #3 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-13B (2 Token) | Consistency | 2.63 | #3 |
| Video-based Generative Performance Benchmarking | VideoInstruct | LLaMA-VID-13B (2 Token) | Mean | 2.99 | #1 |
