Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

5 Jun 2023  ·  Hang Zhang, Xin Li, Lidong Bing ·

We present Video-LLaMA, a multi-modal framework that equips Large Language Models (LLMs) with the capability to understand both the visual and auditory content of a video. Video-LLaMA bootstraps cross-modal training from frozen pre-trained visual and audio encoders and frozen LLMs. Unlike previous vision-LLMs such as MiniGPT-4 and LLaVA, which focus on static image comprehension, Video-LLaMA tackles two main challenges in video understanding: (1) capturing temporal changes in visual scenes, and (2) integrating audio-visual signals. For the first challenge, we propose a Video Q-Former to assemble the pre-trained image encoder into our video encoder, and introduce a video-to-text generation task to learn video-language correspondence. For the second, we leverage ImageBind, a universal embedding model that aligns multiple modalities, as the pre-trained audio encoder, and introduce an Audio Q-Former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the outputs of both the visual and audio encoders with the LLM's embedding space, we train Video-LLaMA on massive video/image-caption pairs as well as visual-instruction-tuning datasets of moderate size but higher quality. We find that Video-LLaMA can perceive and comprehend video content, generating meaningful responses grounded in the visual and auditory information presented in the videos. This highlights the potential of Video-LLaMA as a promising prototype for audio-visual AI assistants.
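The Video Q-Former idea described above can be illustrated with a minimal sketch: per-frame features from the frozen image encoder get learnable temporal position embeddings, and a fixed set of learnable query tokens cross-attends over the frame sequence to produce video query embeddings for the LLM. This is an illustrative stand-in, not the paper's implementation: the random arrays play the role of learned parameters, a single attention step replaces the full Q-Former transformer, and the final linear projection into the LLM embedding space is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def video_qformer_sketch(frame_feats, num_queries=32, seed=0):
    """Aggregate per-frame features into a fixed set of video query embeddings.

    frame_feats: (T, d) array, standing in for outputs of a frozen image
    encoder applied to T sampled frames. Returns (num_queries, d).
    """
    T, d = frame_feats.shape
    rng = np.random.default_rng(seed)
    # Temporal position embeddings (random stand-ins for learned parameters),
    # injecting frame order into the otherwise order-agnostic attention.
    pos_emb = rng.normal(scale=0.02, size=(T, d))
    x = frame_feats + pos_emb
    # Learnable query tokens, analogous to BLIP-2's Q-Former queries.
    queries = rng.normal(scale=0.02, size=(num_queries, d))
    # Single-head cross-attention: each query attends over all frames.
    attn = softmax(queries @ x.T / np.sqrt(d), axis=-1)  # (num_queries, T)
    return attn @ x                                       # (num_queries, d)

frames = np.random.default_rng(1).normal(size=(8, 768))   # 8 frames, d=768
video_queries = video_qformer_sketch(frames)
print(video_queries.shape)  # (32, 768)
```

The key design point the sketch captures is that the number of query embeddings handed to the LLM is fixed regardless of how many frames the video has, which keeps the LLM's input length bounded.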

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Video Question Answer | ActivityNet-QA | Video-LLaMA | 1:1 Accuracy | 12.4 | #6 |
| Zero-Shot Video Question Answer | ActivityNet-QA | Video-LLaMA | Score | 1.1 | #5 |
| Zero-Shot Video Question Answer | MSRVTT-QA | Video-LLaMA | 1:1 Accuracy | 29.6 | #5 |
| Zero-Shot Video Question Answer | MSRVTT-QA | Video-LLaMA | Score | 1.8 | #5 |
| Zero-Shot Video Question Answer | MSVD-QA | Video-LLaMA | 1:1 Accuracy | 51.6 | #5 |
| Zero-Shot Video Question Answer | MSVD-QA | Video-LLaMA | Score | 2.5 | #5 |
| Video-based Generative Performance Benchmarking | VideoInstruct | Video-LLaMA | Correctness of Information (gpt-score) | 1.96 | #4 |
| Video-based Generative Performance Benchmarking | VideoInstruct | Video-LLaMA | Detail Orientation (gpt-score) | 2.18 | #4 |
| Video-based Generative Performance Benchmarking | VideoInstruct | Video-LLaMA | Contextual Understanding (gpt-score) | 2.16 | #4 |
| Video-based Generative Performance Benchmarking | VideoInstruct | Video-LLaMA | Temporal Understanding (gpt-score) | 1.82 | #4 |
| Video-based Generative Performance Benchmarking | VideoInstruct | Video-LLaMA | Consistency (gpt-score) | 1.79 | #4 |