Dense Video Captioning
25 papers with code • 4 benchmarks • 7 datasets
Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. The task of dense video captioning involves both detecting and describing events in a video.
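The input/output contract described above can be illustrated with a minimal sketch (the `CaptionedEvent` structure and `to_timeline` helper are hypothetical, not from any of the papers below): a dense captioning system emits a set of temporally localized events, each with its own description, and these events may overlap.

```python
from dataclasses import dataclass

# Hypothetical illustration: each detected event carries its own
# temporal extent and natural-language caption.
@dataclass
class CaptionedEvent:
    start_sec: float   # event start time in the video
    end_sec: float     # event end time
    caption: str       # natural-language description of the event

def to_timeline(events):
    """Sort detected events by start time and render a simple timeline."""
    lines = []
    for ev in sorted(events, key=lambda e: e.start_sec):
        lines.append(f"[{ev.start_sec:06.1f}-{ev.end_sec:06.1f}] {ev.caption}")
    return "\n".join(lines)

# Overlapping events, as in the piano example above.
events = [
    CaptionedEvent(12.0, 45.5, "a man plays a piano"),
    CaptionedEvent(0.0, 60.0, "a crowd claps along"),
    CaptionedEvent(20.0, 50.0, "another man dances"),
]
print(to_timeline(events))
```

Note that, unlike standard video captioning, the events are not assumed to be disjoint: the crowd clapping spans the whole clip while the piano playing and dancing overlap inside it.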
Latest papers
Unifying Event Detection and Captioning as Sequence Generation via Pre-Training
Dense video captioning aims to generate text descriptions for a series of events in an untrimmed video; the task can be divided into two sub-tasks, event detection and event captioning.
Dense Video Captioning Using Unsupervised Semantic Information
We introduce a method to learn unsupervised semantic visual information based on the premise that complex events (e.g., minutes long) can be decomposed into simpler events (e.g., a few seconds long), and that these simple events are shared across several complex events.
End-to-End Dense Video Captioning with Parallel Decoding
Dense video captioning aims to generate multiple associated captions, together with their temporal locations, from a video.
Global Object Proposals for Improving Multi-Sentence Video Descriptions
Recently, many works have been proposed on generating multi-sentence video descriptions.
TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks
Extensive experiments show that using features trained with our novel pretraining strategy significantly improves the performance of recent state-of-the-art methods on three tasks: Temporal Action Localization, Action Proposal Generation, and Dense Video Captioning.
Multimodal Pretraining for Dense Video Captioning
First, we construct and release a new dense video captioning dataset, Video Timeline Tags (ViTT), featuring a variety of instructional videos together with time-stamped annotations.
Dense-Captioning Events in Videos: SYSU Submission to ActivityNet Challenge 2020
This technical report presents a brief description of our submission to the dense video captioning task of ActivityNet Challenge 2020.
A Better Use of Audio-Visual Cues: Dense Video Captioning with Bi-modal Transformer
We show the effectiveness of the proposed model with audio and visual modalities on the dense video captioning task, yet the module is capable of digesting any two modalities in a sequence-to-sequence task.
Multi-modal Dense Video Captioning
We apply an automatic speech recognition (ASR) system to obtain a temporally aligned textual description of the speech (similar to subtitles) and treat it as a separate input alongside the video frames and the corresponding audio track.
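The idea of treating the ASR transcript as a separate, temporally aligned stream can be sketched as follows (a minimal illustration under assumed data shapes, not the paper's actual pipeline): the transcript is kept as a list of timed segments, and any frame timestamp can be matched to the words spoken at that moment.

```python
# Hypothetical sketch: an ASR transcript stored as subtitle-like
# (start_sec, end_sec, text) segments, kept as its own input stream
# alongside video frames and the audio track.
def text_active_at(subtitles, t_sec):
    """Return the transcript text overlapping time t_sec, or '' if none."""
    for start, end, text in subtitles:
        if start <= t_sec < end:
            return text
    return ""

# Toy transcript for a short instructional clip.
subs = [
    (0.0, 2.5, "welcome everyone"),
    (3.0, 6.0, "let's begin the demo"),
]
```

This keeps the speech modality aligned with the visual one by timestamp, so a captioning model can consume the text segment that co-occurs with each video span.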
Streamlined Dense Video Captioning
Dense video captioning is an extremely challenging task since accurate and coherent description of events in a video requires holistic understanding of video contents as well as contextual reasoning of individual events.