Video summarization aims to facilitate large-scale video browsing by producing short, concise summaries that are diverse and representative of the original videos.
A test video is processed by forming correspondences between its clips and the clips of reference videos with known semantics; the reference semantics can then be transferred to the test video.
Although the problem of automatic video summarization has recently received a lot of attention, the problem of creating a video summary that also highlights elements relevant to a search query has been less studied.
Video summarization is a challenging, under-constrained problem because the underlying summary of a single video strongly depends on each user's subjective understanding.
With increasing amounts of visual data being created in the form of videos and images, visual data selection and summarization are becoming increasingly important problems.
In our algorithm, at each iteration one sample is selected to capture the maximum information about the structure of the data, and this captured information is excluded from subsequent iterations by projecting the remaining samples onto the null-space of the previously selected samples.
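A minimal sketch of this kind of greedy selection with null-space deflation, assuming a norm-based selection criterion (the function name, the use of residual norms as the information measure, and all parameters are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def select_representatives(X, k):
    """Greedily select k representative rows of X.

    At each step, pick the sample whose residual norm is largest
    (a proxy for the information it captures), then project all
    residuals onto the null-space of the chosen sample's direction
    so that information is ignored in later iterations.
    """
    R = X.astype(float).copy()   # residuals of all samples (rows)
    selected = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=1)
        i = int(np.argmax(norms))
        selected.append(i)
        # unit direction of the chosen sample's residual
        d = R[i] / (norms[i] + 1e-12)
        # deflate: remove the component along d from every residual
        R = R - np.outer(R @ d, d)
    return selected
```

Because each selected direction is projected out, a sample that is nearly collinear with an already-chosen one contributes little residual norm and will not be selected again, which is what drives diversity in the summary.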