Zero-Shot Action Recognition in Videos: A Survey

13 Sep 2019  ·  Valter Estevam, Helio Pedrini, David Menotti

Zero-Shot Action Recognition has attracted attention in recent years, and many approaches have been proposed for the recognition of objects, events, and actions in images and videos. There is a demand for methods that can classify instances from classes not present during model training, especially in the complex problem of automatic video understanding, since collecting, annotating, and labeling videos are difficult and laborious tasks. Although many methods are available in the literature, it is difficult to determine which techniques can be considered state of the art. Moreover, while some surveys address zero-shot recognition in still images and experimental protocols, no existing work focuses on videos. Therefore, we present a survey of methods for zero-shot action recognition in videos, covering techniques for visual feature extraction and semantic feature extraction, as well as for learning the mapping between these two feature spaces. We also provide a complete description of datasets, experiments, and protocols, and present open issues and directions for future work that are essential to the development of this computer vision research field.
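To make the surveyed setting concrete, the sketch below illustrates the generic zero-shot pipeline the abstract refers to: visual features extracted from seen-class videos are mapped into a semantic embedding space (e.g., attribute or word vectors), and unseen-class videos are labeled by their nearest class prototype in that space. This is a minimal illustration under assumed choices (a ridge-regression mapping, cosine-similarity classification, and random placeholder features), not any specific method from the paper.

```python
# Illustrative sketch of a generic zero-shot action recognition pipeline:
# map visual features to a semantic space learned from seen classes, then
# classify unseen-class videos by nearest semantic prototype.
import numpy as np

def fit_visual_to_semantic(X_seen, S_seen, reg=1.0):
    """Ridge-regression mapping W from visual features (n x d_v)
    to semantic embeddings (n x d_s); the mapping choice is an assumption."""
    d_v = X_seen.shape[1]
    W = np.linalg.solve(X_seen.T @ X_seen + reg * np.eye(d_v),
                        X_seen.T @ S_seen)            # (d_v x d_s)
    return W

def predict_unseen(X_test, W, S_protos, unseen_labels):
    """Label each test video by cosine similarity between its projected
    embedding and the unseen-class semantic prototypes."""
    Z = X_test @ W                                    # project to semantic space
    Z = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-8)
    P = S_protos / (np.linalg.norm(S_protos, axis=1, keepdims=True) + 1e-8)
    return [unseen_labels[i] for i in (Z @ P.T).argmax(axis=1)]

# Toy usage: random arrays stand in for real visual/semantic features.
rng = np.random.default_rng(0)
X_seen, S_seen = rng.normal(size=(200, 512)), rng.normal(size=(200, 300))
W = fit_visual_to_semantic(X_seen, S_seen)
X_test = rng.normal(size=(5, 512))
protos = rng.normal(size=(3, 300))                    # unseen-class embeddings
print(predict_unseen(X_test, W, protos, ["archery", "surfing", "juggling"]))
```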
