End-to-end Dense Video Captioning as Sequence Generation

Dense video captioning aims to identify the events of interest in an input video and generate a descriptive caption for each event. Previous approaches usually follow a two-stage generative process: first propose a segment for each event, then render a caption for each identified segment. Recent advances in large-scale sequence-generation pretraining have seen great success in unifying task formulations across a wide variety of tasks, but so far, more complex tasks such as dense video captioning have been unable to fully exploit this powerful paradigm. In this work, we show how to model the two subtasks of dense video captioning jointly as one sequence-generation task, predicting the events and the corresponding descriptions simultaneously. Experiments on YouCook2 and ViTT show encouraging results and indicate the feasibility of training complex tasks such as end-to-end dense video captioning integrated into large-scale pretrained models.
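To make the joint formulation concrete, below is a minimal sketch of how event segments and their captions could be serialized into a single target sequence for a sequence-to-sequence model. The quantized time tokens and the interleaving format are illustrative assumptions, not the paper's exact output scheme.

```python
def quantize_time(t: float, duration: float, num_bins: int = 100) -> str:
    """Map a timestamp in seconds to a discrete time token like <12>."""
    bin_idx = min(int(t / duration * num_bins), num_bins - 1)
    return f"<{bin_idx}>"

def serialize_events(events, duration, num_bins=100):
    """Flatten (start, end, caption) events into one target string.

    events: list of (start_sec, end_sec, caption) tuples.
    Returns e.g. "<3> <17> pour the oil <18> <40> add the onions".
    """
    parts = []
    for start, end, caption in sorted(events, key=lambda e: e[0]):
        parts.append(quantize_time(start, duration, num_bins))
        parts.append(quantize_time(end, duration, num_bins))
        parts.append(caption.strip())
    return " ".join(parts)

# Example: two events in a 120-second cooking clip.
events = [(4.0, 20.5, "pour the oil into the pan"),
          (21.0, 48.0, "add the chopped onions")]
print(serialize_events(events, duration=120.0))
# -> "<3> <17> pour the oil into the pan <17> <40> add the chopped onions"
```

Under this kind of serialization, a single decoder can emit segment boundaries and caption text in one pass, which is what lets both subtasks be trained end to end with a standard sequence-generation objective.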

COLING 2022 · PDF · Abstract

Results from the Paper


Ranked #3 on Dense Video Captioning on ViTT (CIDEr metric, using extra training data)

Task                    Dataset  Model  Metric  Value  Global Rank  Uses Extra Training Data
Dense Video Captioning  ViTT     E2ESG  CIDEr   25.0   #3           Yes
Dense Video Captioning  ViTT     E2ESG  METEOR  8.1    #3           Yes
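For reference, the following is a minimal sketch of how a corpus-level CIDEr score can be computed with the pycocoevalcap package, one common captioning-evaluation toolkit. The ids and captions are hypothetical, and a real dense video captioning evaluation first matches predicted segments to ground-truth events (and typically tokenizes captions) before scoring.

```python
# pip install pycocoevalcap
from pycocoevalcap.cider.cider import Cider

# Keys identify a (video, segment) pair after segment matching;
# values are lists of caption strings (tokenization skipped for brevity).
gts = {"vid1_seg1": ["pour the oil into the pan"]}   # ground truth
res = {"vid1_seg1": ["pour oil in the pan"]}         # model prediction

score, per_caption = Cider().compute_score(gts, res)
print(f"CIDEr: {score:.3f}")
```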
