Video Captioning

162 papers with code • 11 benchmarks • 32 datasets

Video Captioning is the task of automatically captioning a video by understanding the actions and events it contains, which also enables efficient text-based retrieval of the video.

Source: NITS-VC System for VATEX Video Captioning Challenge 2020
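
As a rough illustration of the task, the sketch below captions a short clip by feeding sampled frames to a pretrained video-captioning checkpoint. The checkpoint name, the 6-frame input, and the dummy frames are assumptions for the example, not something specified on this page.

```python
# Minimal sketch: caption a clip with a GIT checkpoint fine-tuned on VATEX.
# The checkpoint name and 6-frame input are assumptions for illustration.
import numpy as np
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vatex")

# In real use these frames would be decoded from a video file (e.g. with
# decord or PyAV); random frames keep the sketch self-contained.
frames = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
          for _ in range(6)]

pixel_values = processor(images=frames, return_tensors="pt").pixel_values
# Add a batch dimension: (1, num_frames, 3, 224, 224).
generated = model.generate(pixel_values=pixel_values.unsqueeze(0), max_length=30)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```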

Narrative Action Evaluation with Prompt-Guided Multimodal Interaction

shiyi-zh0408/nae_cvpr2024 22 Apr 2024

NAE is a more challenging task because it requires both narrative flexibility and evaluation rigor.

Movie101v2: Improved Movie Narration Benchmark

yuezih/movie101 20 Apr 2024

Automatic movie narration aims to create video-aligned plot descriptions that assist visually impaired audiences.

TrafficVLM: A Controllable Visual Language Model for Traffic Video Captioning

quangminhdinh/trafficvlm 14 Apr 2024

Traffic video description and analysis have received much attention recently due to the growing demand for efficient and reliable urban surveillance systems.

Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval

faceonlive/ai-research 11 Apr 2024

There has been significant attention on dense video captioning, which aims to automatically localize and caption all events within an untrimmed video.
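
To make the task concrete, a dense-captioning output is essentially a list of temporally localized captions; the sketch below shows one plausible data layout, with made-up timestamps and text.

```python
# A dense-captioning result: every event gets a time span and a caption.
# Timestamps and text here are made up for illustration.
from dataclasses import dataclass

@dataclass
class CaptionedEvent:
    start_s: float  # event start, in seconds
    end_s: float    # event end, in seconds
    caption: str    # natural-language description of the event

events = [
    CaptionedEvent(0.0, 4.2, "a man walks into the kitchen"),
    CaptionedEvent(4.2, 9.8, "he chops vegetables on a cutting board"),
    CaptionedEvent(9.8, 15.1, "he stirs the vegetables in a pan"),
]
```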

MA-LMM: Memory-Augmented Large Multimodal Model for Long-Term Video Understanding

boheumd/MA-LMM 8 Apr 2024

However, existing LLM-based large multimodal models (e.g., Video-LLaMA, VideoChat) can only take in a limited number of frames for short video understanding.
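
One way to lift this frame limit is a bounded memory bank that compresses frame features as they stream in. The sketch below merges the most similar adjacent entries when the bank is full; this policy is illustrative and not necessarily MA-LMM's exact scheme.

```python
# Sketch of a fixed-size frame-feature memory bank for long videos:
# when full, merge the most similar adjacent pair so memory stays bounded.
import torch
import torch.nn.functional as F

class FrameMemoryBank:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.bank: list[torch.Tensor] = []  # one feature vector per slot

    def add(self, feat: torch.Tensor) -> None:
        self.bank.append(feat)
        if len(self.bank) > self.capacity:
            sims = [F.cosine_similarity(a, b, dim=0).item()
                    for a, b in zip(self.bank, self.bank[1:])]
            i = max(range(len(sims)), key=sims.__getitem__)
            merged = (self.bank[i] + self.bank[i + 1]) / 2  # average the pair
            self.bank[i:i + 2] = [merged]

bank = FrameMemoryBank(capacity=64)
for _ in range(1000):              # e.g. features for 1,000 frames
    bank.add(torch.randn(256))     # 256-d dummy frame feature
assert len(bank.bank) == 64
```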

Streaming Dense Video Captioning

google-research/scenic 1 Apr 2024

An ideal model for dense video captioning -- predicting captions localized temporally in a video -- should handle long input videos, predict rich, detailed textual descriptions, and produce outputs before processing the entire video.
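
The streaming requirement can be pictured as a loop that emits localized captions chunk by chunk rather than after the whole video; in the sketch below, caption_chunk() and the frame rate are hypothetical stand-ins, not the paper's model.

```python
# Streaming loop: emit a localized caption per chunk as the video arrives,
# instead of waiting for the whole video. caption_chunk() is a stand-in.
from typing import Iterable, Iterator, List, Tuple

FPS = 30.0  # assumed frame rate

def caption_chunk(frames: List[object]) -> str:
    # Stand-in for a real captioning model applied to one chunk.
    return f"<caption for {len(frames)} frames>"

def stream_captions(
    chunks: Iterable[List[object]],
) -> Iterator[Tuple[float, float, str]]:
    t = 0.0
    for frames in chunks:
        dur = len(frames) / FPS
        yield (t, t + dur, caption_chunk(frames))  # output before video ends
        t += dur

# Three 60-frame chunks -> three localized captions, streamed in order.
for start, end, text in stream_captions([[0] * 60] * 3):
    print(f"{start:5.1f}-{end:5.1f}s  {text}")
```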

OmniVid: A Generative Framework for Universal Video Understanding

wangjk666/omnivid 26 Mar 2024

The core of video understanding tasks, such as recognition, captioning, and tracking, is to automatically detect objects or actions in a video and analyze their temporal evolution.

LVCHAT: Facilitating Long Video Comprehension

wangyu-ustc/lvchat 19 Feb 2024

To address this issue, we propose Long Video Chat (LVChat), which introduces Frame-Scalable Encoding (FSE) to dynamically adjust the number of embeddings in proportion to video duration, ensuring that long videos are not over-compressed into just a few embeddings.
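
A minimal sketch of the FSE idea, under assumed constants: allocate an embedding budget that grows with video duration but stays within fixed bounds.

```python
# Sketch of Frame-Scalable Encoding's budgeting idea: more embeddings for
# longer videos, within fixed bounds. All constants are illustrative.
def num_embeddings(duration_s: float,
                   per_second: float = 0.5,
                   min_tokens: int = 32,
                   max_tokens: int = 2048) -> int:
    return max(min_tokens, min(max_tokens, round(duration_s * per_second)))

for d in (10, 120, 3600):
    print(f"{d:5d} s -> {num_embeddings(d)} embeddings")
```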

Connect, Collapse, Corrupt: Learning Cross-Modal Tasks with Uni-Modal Data

yuhui-zh15/c3 16 Jan 2024

However, this assumption is under-explored due to the poorly understood geometry of the multi-modal contrastive space, where a modality gap exists.
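
One common way to quantify the modality gap (an assumption here, not necessarily C3's exact definition) is the vector between the image and text embedding centroids on the unit sphere:

```python
# Measure the modality gap as the offset between modality centroids.
# Random unit vectors stand in for real CLIP-style embeddings.
import torch
import torch.nn.functional as F

img = F.normalize(torch.randn(1000, 512), dim=-1)  # dummy image embeddings
txt = F.normalize(torch.randn(1000, 512), dim=-1)  # dummy text embeddings

gap = img.mean(dim=0) - txt.mean(dim=0)
print("modality gap magnitude:", gap.norm().item())
```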

A Recipe for Scaling up Text-to-Video Generation with Text-free Videos

ali-vilab/i2vgen-xl 25 Dec 2023

Following such a pipeline, we study the effect of doubling the scale of the training set (i.e., video-only WebVid10M) with some randomly collected text-free videos, and are encouraged to observe a performance improvement (FID from 9.67 to 8.19 and FVD from 484 to 441), demonstrating the scalability of our approach.
