PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance

4 Nov 2024 · Ruyang Liu, Haoran Tang, Haibo Liu, Yixiao Ge, Ying Shan, Chen Li, Jiankun Yang

The past year has witnessed significant advances in video-based large language models. However, the challenge of developing a unified model for both short and long video understanding remains unresolved. Most existing video LLMs cannot handle hour-long videos, while methods tailored to long videos tend to be ineffective on shorter videos and images. In this paper, we identify the key issue as the redundant content in videos. To address this, we propose a novel pooling strategy that simultaneously achieves token compression and instruction-aware visual feature aggregation. Our model is termed Prompt-guided Pooling LLaVA, or PPLLaVA for short. Specifically, PPLLaVA consists of three core components: a CLIP-based visual-prompt alignment that extracts visual information relevant to the user's instructions, prompt-guided pooling that compresses the visual sequence to arbitrary scales via convolution-style pooling, and a CLIP context extension designed for the lengthy prompts common in visual dialogue. Moreover, our codebase also integrates state-of-the-art video Direct Preference Optimization (DPO) and visual interleaved training. Extensive experiments have validated the performance of our model. With superior throughput and a visual context of only 1024 tokens, PPLLaVA achieves better results on image benchmarks as a video LLM, while reaching state-of-the-art performance across various video benchmarks, excelling in tasks ranging from caption generation to multiple-choice questions, and handling video lengths from seconds to hours. Code is available at https://github.com/farewellthree/PPLLaVA.
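
To make the core idea concrete, below is a minimal sketch of prompt-guided pooling as the abstract describes it: visual tokens are scored by their CLIP similarity to the instruction embedding, then compressed with a relevance-weighted, convolution-style 3D pooling. All names, shapes, and the choice of `adaptive_avg_pool3d` are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of prompt-guided pooling (assumed implementation, not the paper's code).
import torch
import torch.nn.functional as F

def prompt_guided_pool(visual_tokens, text_embed, out_t=4, out_h=4, out_w=4):
    """Compress a video token sequence under prompt guidance.

    visual_tokens: (T, H, W, D) CLIP visual features for T frames.
    text_embed:    (D,) CLIP text embedding of the user instruction.
    Returns:       (out_t * out_h * out_w, D) compressed tokens.
    """
    T, H, W, D = visual_tokens.shape

    # 1) Visual-prompt alignment: cosine similarity between each visual
    #    token and the instruction embedding gives a relevance weight.
    v = F.normalize(visual_tokens, dim=-1)        # (T, H, W, D)
    t = F.normalize(text_embed, dim=-1)           # (D,)
    relevance = (v * t).sum(-1).clamp(min=0)      # (T, H, W)

    # 2) Convolution-style pooling: relevance-weighted averages over 3D
    #    windows shrink the sequence to an arbitrary target size.
    weighted = visual_tokens * relevance.unsqueeze(-1)    # (T, H, W, D)
    weighted = weighted.permute(3, 0, 1, 2).unsqueeze(0)  # (1, D, T, H, W)
    weights = relevance.unsqueeze(0).unsqueeze(0)         # (1, 1, T, H, W)

    pooled = F.adaptive_avg_pool3d(weighted, (out_t, out_h, out_w))
    norm = F.adaptive_avg_pool3d(weights, (out_t, out_h, out_w)).clamp(min=1e-6)
    pooled = pooled / norm                                # weighted average

    return pooled.squeeze(0).permute(1, 2, 3, 0).reshape(-1, D)
```

Note that the output grid can be chosen to hit a fixed token budget (for instance, a grid whose product is 1024 would match the visual context size quoted in the abstract), and that when relevance is uniform the scheme degenerates to plain average pooling, which is why instruction-irrelevant content is what gets compressed away.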

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Video Question Answer | ActivityNet-QA | PPLLaVA-7B | Confidence Score | 3.6 | # 3 |
| Zero-Shot Video Question Answer | ActivityNet-QA | PPLLaVA-7B | Accuracy | 60.7 | # 3 |
| Zero-Shot Video Question Answer | MSRVTT-QA | PPLLaVA-7B | Accuracy | 64.3 | # 7 |
| Zero-Shot Video Question Answer | MSRVTT-QA | PPLLaVA-7B | Confidence Score | 3.5 | # 6 |
| Zero-Shot Video Question Answer | MSVD-QA | PPLLaVA-7B | Accuracy | 77.1 | # 8 |
| Zero-Shot Video Question Answer | MSVD-QA | PPLLaVA-7B | Confidence Score | 4.0 | # 6 |
| Video Question Answering | MVBench | PPLLaVA (7b) | Avg. | 59.2 | # 7 |
| Video-based Generative Performance Benchmarking (Correctness of Information) | VideoInstruct | PPLLaVA-7B | gpt-score | 3.85 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B | Correctness of Information | 3.32 | # 6 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B | Detail Orientation | 3.20 | # 3 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B | Contextual Understanding | 3.88 | # 4 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B | Temporal Understanding | 3.0 | # 3 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B | Consistency | 3.20 | # 5 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B | mean | 3.32 | # 4 |
| Video-based Generative Performance Benchmarking (Consistency) | VideoInstruct | PPLLaVA-7B | gpt-score | 3.81 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B-dpo | Correctness of Information | 3.85 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B-dpo | Detail Orientation | 3.56 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B-dpo | Contextual Understanding | 4.21 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B-dpo | Temporal Understanding | 3.21 | # 2 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B-dpo | Consistency | 3.81 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PPLLaVA-7B-dpo | mean | 3.73 | # 1 |
| Video-based Generative Performance Benchmarking (Temporal Understanding) | VideoInstruct | PPLLaVA-7B | gpt-score | 3.21 | # 1 |
| Video-based Generative Performance Benchmarking (Contextual Understanding) | VideoInstruct | PPLLaVA-7B | gpt-score | 4.21 | # 1 |
| Video-based Generative Performance Benchmarking (Detail Orientation) | VideoInstruct | PPLLaVA-7B | gpt-score | 3.56 | # 1 |
