PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning

arXiv 2024 · Lin Xu, Yilin Zhao, Daquan Zhou, Zhijie Lin, See Kiong Ng, Jiashi Feng

Vision-language pre-training has significantly elevated performance across a wide range of image-language applications. Yet, the pre-training process for video-related tasks demands exceptionally large computational and data resources, which hinders the progress of video-language models. This paper investigates a straightforward, highly efficient, and resource-light approach to adapting an existing image-language pre-trained model for dense video understanding. Our preliminary experiments reveal that directly fine-tuning pre-trained image-language models with multiple frames as inputs on video datasets leads to performance saturation or even a drop. Our further investigation reveals that this is largely attributable to the bias of learned high-norm visual features. Motivated by this finding, we propose a simple but effective pooling strategy to smooth the feature distribution along the temporal dimension and thus reduce the dominant impact of the extreme features. The new model is termed Pooling LLaVA, or PLLaVA in short. PLLaVA achieves new state-of-the-art performance on modern benchmark datasets for both video question-answering and captioning tasks. Notably, on the recent popular VideoChatGPT benchmark, PLLaVA achieves a score of 3.48 out of 5 averaged over five evaluated dimensions, exceeding the previous SOTA results from GPT4V (IG-VLM) by 9%. On the latest multi-choice benchmark MVBench, PLLaVA achieves 58.1% accuracy on average across 20 sub-tasks, 14.5% higher than GPT4V (IG-VLM). Code is available at https://pllava.github.io/
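The core idea — averaging per-frame visual features along the temporal dimension so that a few high-norm outlier features cannot dominate — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual model applies adaptive pooling to encoder feature maps (including spatial dimensions), and the function name, window partitioning, and output length `out_t` here are illustrative assumptions.

```python
def temporal_avg_pool(frames, out_t):
    """Hypothetical sketch: average-pool a list of per-frame feature
    vectors (each a list of floats) down to out_t temporal slots.

    Averaging within each temporal window smooths the feature
    distribution, diluting the influence of extreme high-norm frames.
    """
    T = len(frames)           # number of input frames
    D = len(frames[0])        # feature dimension
    pooled = []
    for i in range(out_t):
        # Evenly partition the T frames into out_t contiguous windows.
        start = i * T // out_t
        end = max(start + 1, (i + 1) * T // out_t)
        window = frames[start:end]
        # Mean over the window, per feature dimension.
        pooled.append([sum(v[d] for v in window) / len(window)
                       for d in range(D)])
    return pooled
```

For example, pooling four frames down to two slots averages frames {0,1} into the first slot and frames {2,3} into the second, so a single outlier frame contributes only a fraction of each pooled feature.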

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Video Question Answer | ActivityNet-QA | PLLaVA | Confidence Score | 3.7 | # 1 |
| Zero-Shot Video Question Answer | ActivityNet-QA | PLLaVA | Accuracy | 60.9 | # 1 |
| Zero-Shot Video Question Answer | MSRVTT-QA | PLLaVA | Accuracy | 68.7 | # 1 |
| Zero-Shot Video Question Answer | MSRVTT-QA | PLLaVA | Confidence Score | 3.6 | # 1 |
| Zero-Shot Video Question Answer | MSVD-QA | PLLaVA | Accuracy | 79.9 | # 1 |
| Zero-Shot Video Question Answer | MSVD-QA | PLLaVA | Confidence Score | 4.2 | # 1 |
| Video Question Answering | MVBench | PLLaVA | Avg. | 58.1 | # 1 |
| Zero-Shot Video Question Answer | TGIF-QA | PLLaVA | Accuracy | 80.6 | # 1 |
| Zero-Shot Video Question Answer | TGIF-QA | PLLaVA | Confidence Score | 4.3 | # 1 |
| Video-based Generative Performance Benchmarking (Detail Orientation) | VideoInstruct | PLLaVA | gpt-score | 3.20 | # 1 |
| Video-based Generative Performance Benchmarking (Contextual Understanding) | VideoInstruct | PLLaVA | gpt-score | 3.90 | # 1 |
| Video-based Generative Performance Benchmarking (Correctness of Information) | VideoInstruct | PLLaVA | gpt-score | 3.60 | # 1 |
| Video-based Generative Performance Benchmarking (Temporal Understanding) | VideoInstruct | PLLaVA | gpt-score | 2.67 | # 2 |
| Video-based Generative Performance Benchmarking (Consistency) | VideoInstruct | PLLaVA | gpt-score | 3.25 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PLLaVA-34B | Correctness of Information | 3.60 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PLLaVA-34B | Detail Orientation | 3.20 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PLLaVA-34B | Contextual Understanding | 3.90 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PLLaVA-34B | Temporal Understanding | 2.67 | # 5 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PLLaVA-34B | Consistency | 3.25 | # 1 |
| Video-based Generative Performance Benchmarking | VideoInstruct | PLLaVA-34B | mean | 3.48 | # 1 |
