Video Summarization
67 papers with code • 5 benchmarks • 13 datasets
Video Summarization aims to generate a short synopsis of a video by selecting its most informative and important parts. The produced summary is usually composed of a set of representative video frames (a.k.a. video key-frames) or video fragments (a.k.a. video key-fragments) that are stitched together in chronological order to form a shorter video. The former type of summary is known as a video storyboard, and the latter as a video skim.
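The storyboard idea above can be sketched in a few lines: score every frame by importance, keep the top-k, and restore chronological order. This is a minimal illustration, assuming per-frame importance scores are already available (e.g. from a trained model); the score values below are made up.

```python
def storyboard(frame_scores, k):
    """Pick the k highest-scoring frames and return their indices
    in chronological order, forming a video storyboard."""
    # Rank frame indices by descending importance score, keep the top k.
    top = sorted(range(len(frame_scores)),
                 key=lambda i: frame_scores[i], reverse=True)[:k]
    # Re-sort the selected indices so the summary plays in temporal order.
    return sorted(top)

scores = [0.1, 0.9, 0.3, 0.8, 0.2, 0.7]  # hypothetical importance scores
print(storyboard(scores, 3))  # -> [1, 3, 5]
```

A video skim works the same way, except the selection unit is a key-fragment (a contiguous shot) rather than a single frame.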
Source: Video Summarization Using Deep Neural Networks: A Survey
Most implemented papers
Video Summarization with Long Short-term Memory
We propose a novel supervised learning technique for summarizing videos by automatically selecting keyframes or key subshots.
Temporal Tessellation: A Unified Approach for Video Analysis
A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics, after which the reference semantics can be transferred to the test video.
Query-adaptive Video Summarization via Quality-aware Relevance Estimation
Although the problem of automatic video summarization has recently received a lot of attention, the problem of creating a video summary that also highlights elements relevant to a search query has been less studied.
Unsupervised Video Summarization With Adversarial LSTM Networks
The summarizer is an autoencoder long short-term memory network (LSTM) that first selects video frames and then decodes the resulting summary to reconstruct the input video.
FFNet: Video Fast-Forwarding via Reinforcement Learning
The first group is supported by video summarization techniques, which require processing of the entire video to select an important subset for showing to users.
Vis-DSS: An Open-Source toolkit for Visual Data Selection and Summarization
With increasing amounts of visual data being created in the form of videos and images, visual data selection and summarization are becoming increasingly important problems.
Discriminative Feature Learning for Unsupervised Video Summarization
The proposed variance loss encourages the network to predict frame scores with high discrepancy, which enables effective feature learning and significantly improves model performance.
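The intuition behind a variance loss can be sketched as follows: penalize score distributions with low variance, so that minimizing the loss pushes frame scores apart. This is only an illustrative sketch, not the paper's exact formulation.

```python
def variance_loss(scores):
    """Illustrative 'variance loss': the negative variance of the
    predicted frame scores. Minimizing it rewards score distributions
    with high discrepancy between frames."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return -var

flat = [0.5, 0.5, 0.5, 0.5]    # no discrepancy: loss is 0.0
spread = [0.0, 1.0, 0.0, 1.0]  # high discrepancy: loss is -0.25
print(variance_loss(flat), variance_loss(spread))
```

A network trained with only a reconstruction objective can collapse to near-uniform frame scores; adding a term like this keeps the scores discriminative.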
Multi-Stream Dynamic Video Summarization
With vast amounts of video content being uploaded to the Internet every minute, video summarization becomes critical for efficient browsing, searching, and indexing of visual content.
A Mobile Robot Generating Video Summaries of Seniors' Indoor Activities
We develop a system that generates summaries from seniors' indoor-activity videos captured by a social robot, helping remote family members keep track of their seniors' daily activities at home.
A Stepwise, Label-based Approach for Improving the Adversarial Training in Unsupervised Video Summarization
In this paper we present our work on improving the efficiency of adversarial training for unsupervised video summarization.