Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification. One obstacle that prevents networks from reasoning more deeply about complex scenes and situations, and from integrating visual knowledge with natural language as humans do, is their lack of common-sense knowledge about the physical world. Videos, unlike still images, contain a wealth of detailed information about the physical world. However, most labelled video datasets represent high-level concepts rather than detailed physical aspects of actions and scenes. In this work, we describe our ongoing collection of the "something-something" database of video prediction tasks whose solutions require a common-sense understanding of the depicted situation. The database currently contains more than 100,000 videos across 174 classes, which are defined as caption-templates. We also describe the challenges in crowd-sourcing this data at scale.
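
To make "classes defined as caption-templates" concrete: each class is a caption with object placeholders that crowd workers fill in for the video they record. Below is a minimal Python sketch of this structure; the template strings follow the paper's style, but the class ids and the instantiate() helper are illustrative assumptions, not an official loader for the dataset.

```python
# Hypothetical sketch: how caption-template classes relate to
# per-video captions. Not official Something-Something code.

# Each of the 174 classes is defined as a caption-template.
TEMPLATES = {
    0: "Putting [something] on top of [something]",
    1: "Pushing [something] from left to right",
}

def instantiate(template, objects):
    """Fill each [something] placeholder with a worker-supplied object."""
    caption = template
    for obj in objects:
        caption = caption.replace("[something]", obj, 1)
    return caption

# A crowd worker records a video and names the objects involved;
# the filled template becomes the caption, the template its class label.
print(instantiate(TEMPLATES[0], ["a pen", "a book"]))
# -> Putting a pen on top of a book
```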


Datasets


Introduced in the Paper:

Something-Something V1
Something-Something V2

Used in the Paper:

ImageNet

Results from Other Papers


Task: Action Recognition
Dataset: Something-Something V2
Model: model3D_1 with left-right augmentation and fps jitter
Top-1 Accuracy: 51.33 (rank #115)
Top-5 Accuracy: 80.46 (rank #84)
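
For context on these metrics: Top-k accuracy counts a prediction as correct when the ground-truth class appears among the model's k highest-scoring classes. A minimal NumPy sketch follows; the function and variable names are illustrative, not tied to any leaderboard code.

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k top-scoring classes.

    scores: (num_samples, num_classes) array of class scores.
    labels: (num_samples,) array of ground-truth class indices.
    """
    # Indices of the k highest-scoring classes for each sample.
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 3 samples, 4 classes, all with ground-truth class 2.
scores = np.array([[0.1, 0.5, 0.3, 0.1],
                   [0.3, 0.1, 0.4, 0.2],
                   [0.2, 0.2, 0.5, 0.1]])
labels = np.array([2, 2, 2])
print(top_k_accuracy(scores, labels, k=1))  # ~0.667 (sample 0 misses)
print(top_k_accuracy(scores, labels, k=2))  # 1.0 (class 2 is always in the top 2)
```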

Methods


No methods listed for this paper.