Texture-Based Input Feature Selection for Action Recognition

28 Feb 2023 · Yalong Jiang

The performance of video action recognition has been significantly boosted by using motion representations within a two-stream Convolutional Neural Network (CNN) architecture. However, action recognition in real-world scenarios still faces several challenges, e.g., variations in viewpoint and pose and changes in background. The resulting domain discrepancy between training data and test data causes a drop in performance. To improve model robustness, we propose a novel method for identifying the task-irrelevant content in the inputs that increases the domain discrepancy. The method is based on a human parsing (HP) model that jointly performs dense correspondence labelling and semantic part segmentation. The predictions from the HP model are also used to re-render the human regions in each video with a fixed set of textures, so that human appearance is identical across all classes. A revised dataset is generated for training and testing, making the action recognition (AR) model invariant to the irrelevant content in the inputs. Moreover, the predictions from the HP model are used to enrich the inputs to the AR model during both training and testing. Experimental results show that the proposed model outperforms existing action recognition models on the HMDB-51 and Penn Action datasets.
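To make the re-rendering step concrete, below is a minimal sketch of how human regions could be repainted from a shared texture set, assuming DensePose-style HP outputs: a per-pixel body-part index map, a dense-correspondence UV map, and a fixed texture atlas. All names, shapes, and the function itself are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def retexture_frame(frame, part_map, uv_map, atlas):
    """Re-render the human regions of one video frame from a fixed texture
    atlas so that actor appearance is identical across all classes.

    frame:    (H, W, 3) uint8 RGB image
    part_map: (H, W) int, 0 = background, 1..P = body-part index
    uv_map:   (H, W, 2) float in [0, 1], dense (U, V) correspondences
    atlas:    (P, T, T, 3) uint8, one fixed texture tile per body part
    """
    out = frame.copy()
    tile = atlas.shape[1]
    human = part_map > 0                        # pixels covered by a person
    parts = part_map[human] - 1                 # 0-based part indices
    # Map continuous UV coordinates to discrete texel positions.
    u = np.clip((uv_map[human, 0] * (tile - 1)).round().astype(int), 0, tile - 1)
    v = np.clip((uv_map[human, 1] * (tile - 1)).round().astype(int), 0, tile - 1)
    out[human] = atlas[parts, v, u]             # paste the shared textures
    return out
```

For the input-enrichment step, one plausible (assumed) option is to stack the HP predictions onto the re-rendered frame as extra channels, e.g. `np.concatenate([out, part_map[..., None].astype(np.uint8)], axis=-1)`; the abstract does not specify the exact fusion scheme used.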
