VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding

13 Jun 2024 · Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Khan

Building on the advances of language models, Large Multimodal Models (LMMs) have driven significant improvements in video understanding. While current video LMMs utilize advanced Large Language Models (LLMs), they rely on either image or video encoders to process visual inputs, each of which has its own limitations. Image encoders excel at capturing rich spatial details from frame sequences but lack explicit temporal context, which can be important in videos with intricate action sequences. Video encoders, on the other hand, provide temporal context but are often limited by computational constraints, processing only sparse frames at lower resolutions and thereby losing contextual and spatial detail. To this end, we introduce VideoGPT+, which combines the complementary benefits of an image encoder (for detailed spatial understanding) and a video encoder (for global temporal context modeling). The model processes videos by dividing them into smaller segments and applies an adaptive pooling strategy to the features extracted by both the image and video encoders. Our architecture shows improved performance across multiple video benchmarks, including VCGBench, MVBench, and zero-shot question answering. Further, we develop a 112K video-instruction set using a novel semi-automatic annotation pipeline, which further improves model performance. Additionally, to comprehensively evaluate video LMMs, we present VCGBench-Diverse, covering 18 broad video categories such as lifestyle, sports, science, gaming, and surveillance videos. This benchmark, with 4,354 question-answer pairs, evaluates the generalization of existing LMMs on dense video captioning, spatial and temporal understanding, and complex reasoning, ensuring comprehensive assessment across diverse video types and dynamics. Code: https://github.com/mbzuai-oryx/VideoGPT-plus.
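The segment-wise processing described above can be sketched in a minimal NumPy toy. This is an illustrative assumption, not the paper's exact design: `segment_and_pool` stands in for the adaptive pooling over image-encoder frame features, and fusing by simple concatenation with a global video-encoder feature is a placeholder for the model's actual projection and fusion modules.

```python
import numpy as np

def segment_and_pool(frame_feats: np.ndarray, num_segments: int) -> np.ndarray:
    """Split frame-level features (T, D) into contiguous segments along the
    time axis and average-pool each, yielding (num_segments, D)."""
    segments = np.array_split(frame_feats, num_segments, axis=0)
    return np.stack([seg.mean(axis=0) for seg in segments])

# Toy example: 16 frames of 4-dim image-encoder features plus one 4-dim
# video-encoder feature carrying global temporal context (both synthetic).
rng = np.random.default_rng(0)
image_feats = rng.standard_normal((16, 4))
video_feat = rng.standard_normal((1, 4))

pooled_spatial = segment_and_pool(image_feats, num_segments=4)  # shape (4, 4)
# Placeholder fusion: concatenate pooled spatial tokens with the video token.
fused = np.concatenate([pooled_spatial, video_feat], axis=0)    # shape (5, 4)
```

The pooling reduces the token count handed to the LLM while keeping one spatially rich token per segment, alongside the temporally aware video token.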

Task | Dataset | Model | Metric | Value | Rank
Zero-Shot Video Question Answer | ActivityNet-QA | VideoGPT+ | Confidence Score | 3.6 | #3
Zero-Shot Video Question Answer | ActivityNet-QA | VideoGPT+ | Accuracy | 50.6 | #6
Zero-Shot Video Question Answer | MSRVTT-QA | VideoGPT+ | Accuracy | 60.6 | #8
Zero-Shot Video Question Answer | MSRVTT-QA | VideoGPT+ | Confidence Score | 3.6 | #2
Zero-Shot Video Question Answer | MSVD-QA | VideoGPT+ | Accuracy | 72.4 | #10
Zero-Shot Video Question Answer | MSVD-QA | VideoGPT+ | Confidence Score | 3.6 | #13
Video Question Answering | MVBench | VideoGPT+ | Avg. | 58.7 | #3
Zero-Shot Video Question Answer | TGIF-QA | VideoGPT+ | Accuracy | 74.6 | #4
Zero-Shot Video Question Answer | TGIF-QA | VideoGPT+ | Confidence Score | 4.1 | #4
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Mean | 2.47 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Correctness of Information | 2.46 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Detail Orientation | 2.73 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Contextual Understanding | 2.81 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Temporal Understanding | 1.78 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Consistency | 2.59 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Dense Captioning | 1.38 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Spatial Understanding | 2.80 | #1
VCGBench-Diverse | VideoInstruct | VideoGPT+ | Reasoning | 3.63 | #1
Video-based Generative Performance Benchmarking (Consistency) | VideoInstruct | VideoGPT+ | GPT score | 3.39 | #1
Video-based Generative Performance Benchmarking | VideoInstruct | VideoGPT+ | Correctness of Information | 3.27 | #5
Video-based Generative Performance Benchmarking | VideoInstruct | VideoGPT+ | Detail Orientation | 3.18 | #3
Video-based Generative Performance Benchmarking | VideoInstruct | VideoGPT+ | Contextual Understanding | 3.74 | #3
Video-based Generative Performance Benchmarking | VideoInstruct | VideoGPT+ | Temporal Understanding | 2.83 | #4
Video-based Generative Performance Benchmarking | VideoInstruct | VideoGPT+ | Consistency | 3.39 | #1
Video-based Generative Performance Benchmarking | VideoInstruct | VideoGPT+ | Mean | 3.28 | #3
Video-based Generative Performance Benchmarking (Temporal Understanding) | VideoInstruct | VideoGPT+ | GPT score | 2.83 | #2
Video-based Generative Performance Benchmarking (Detail Orientation) | VideoInstruct | VideoGPT+ | GPT score | 3.18 | #2
Video-based Generative Performance Benchmarking (Correctness of Information) | VideoInstruct | VideoGPT+ | GPT score | 3.27 | #3
Video-based Generative Performance Benchmarking (Contextual Understanding) | VideoInstruct | VideoGPT+ | GPT score | 3.74 | #2
