MOD20 is an action recognition dataset consisting of videos collected from YouTube and our own drone. The dataset contains 2,324 videos lasting a total of 240 minutes. The actions were selected from challenging and complex scenarios, and cover multiple viewpoints, from ground level to bird's-eye view. The substantial variation in body size, number of people, viewpoint, camera motion, and background makes this dataset challenging for action recognition. The action classes, the 720×720 undistorted clips, and the multi-viewpoint video selection extend the dataset's applicability to a wider research community.