MetaVD (Meta Video Dataset) is a dataset for enhancing human action recognition datasets. It provides human-annotated relation labels between action classes across different human action recognition datasets. MetaVD is proposed in the following paper: Yuya Yoshikawa, Yutaro Shigeto, and Akikazu Takeuchi. "MetaVD: A Meta Video Dataset for enhancing human action recognition datasets." Computer Vision and Image Understanding 212 (2021): 103276. [link]
MetaVD integrates the following datasets: UCF101, HMDB51, ActivityNet, STAIR Actions, Charades, and Kinetics-700.
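The relation labels can be thought of as edges linking an action class in one dataset to a class in another, which lets one dataset's classes be expanded with matching classes elsewhere. A minimal sketch of that idea (the record schema, relation names, and example rows below are illustrative assumptions, not the repository's actual annotation format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionRelation:
    """One cross-dataset link between two action classes.
    Field names and relation vocabulary are assumptions for illustration."""
    from_dataset: str
    from_action: str
    relation: str      # e.g. "equal", "similar" (assumed labels)
    to_dataset: str
    to_action: str

# Toy annotations: hypothetical examples, not real MetaVD rows.
relations = [
    ActionRelation("UCF101", "BasketballDunk", "similar",
                   "Kinetics-700", "playing basketball"),
    ActionRelation("HMDB51", "ride_bike", "equal",
                   "STAIR Actions", "riding_a_bicycle"),
]

def linked_classes(dataset, action, relations):
    """Collect classes in other datasets linked to (dataset, action)."""
    return [
        (r.to_dataset, r.to_action)
        for r in relations
        if r.from_dataset == dataset and r.from_action == action
    ]

print(linked_classes("UCF101", "BasketballDunk", relations))
```

With annotations like these, training data for one dataset's class could be augmented with videos from the linked classes of the other datasets.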
This repository does NOT provide the videos in these datasets. For information on how to download the videos, please refer to each dataset's website.