Knowledge-Guided Deep Fractal Neural Networks for Human Pose Estimation

5 May 2017  ·  Guanghan Ning, Zhi Zhang, Zhihai He

Human pose estimation with deep neural networks aims to map input images with large variations onto multiple body keypoints that must satisfy a set of geometric constraints and inter-dependencies imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high-dimensional feature space. We believe that a deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, such as the highly coupled geometric characteristics and interdependencies between keypoints in human poses. In this work, we explore how external knowledge can be effectively represented and injected into a deep neural network to guide its training process, using learned projections that impose proper priors. Specifically, we use the stacked hourglass design and the inception-resnet module to construct a fractal network that regresses human pose images to heatmaps with no explicit graphical modeling. We encode external knowledge as visual features that characterize the constraints of the human body model and evaluate the fitness of intermediate network outputs. We then inject these external features into the neural network through a projection matrix learned with an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit of guided learning with knowledge projection are evaluated on two widely used benchmarks. Our approach achieves state-of-the-art performance on both datasets.
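To make the knowledge-projection idea concrete, below is a minimal PyTorch sketch of one guided training step. Everything in it is an assumption for illustration: the names `pose_net`, `KnowledgeProjection`, `ext_feat`, and `lam` are hypothetical, `pose_net` is assumed to return both an intermediate feature map and predicted heatmaps, and the MSE agreement term is one plausible reading of the paper's auxiliary cost, not its published form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeProjection(nn.Module):
    """Sketch of the knowledge-projection idea: external visual features
    that encode body-model constraints are mapped through a learned
    projection matrix into the network's feature space."""
    def __init__(self, ext_dim: int, feat_dim: int):
        super().__init__()
        # learned projection matrix (no bias: a pure linear projection)
        self.proj = nn.Linear(ext_dim, feat_dim, bias=False)

    def forward(self, ext_feat: torch.Tensor) -> torch.Tensor:
        # ext_feat: (B, ext_dim) -> projected knowledge: (B, feat_dim)
        return self.proj(ext_feat)

def guided_step(pose_net, projector, images, ext_feat, target_heatmaps, lam=0.1):
    """One guided training step: standard heatmap-regression loss plus an
    auxiliary cost tying the network's intermediate features to the
    projected external knowledge (hypothetical formulation)."""
    inter_feat, heatmaps = pose_net(images)      # (B, C, H, W), (B, K, h, w)
    injected = projector(ext_feat)               # (B, C)
    # In the full network the guided features would feed the next
    # hourglass stage, e.g.:
    # guided = inter_feat + injected[:, :, None, None]
    main_loss = F.mse_loss(heatmaps, target_heatmaps)
    # auxiliary cost: encourage pooled intermediate features to agree
    # with the projected knowledge, driving the projection matrix to learn
    aux_loss = F.mse_loss(inter_feat.mean(dim=(2, 3)), injected)
    return main_loss + lam * aux_loss
```

The design choice worth noting is that the projection matrix is trained only through the auxiliary term, so the external features shape the intermediate representation without hard-coding graphical constraints into the architecture.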

Results

Task             Dataset          Model                                  Metric    Value  Rank
Pose Estimation  MPII Human Pose  Stacked hourglass + Inception-resnet  PCKh-0.5  91.2   #19

Results from Other Papers


Task             Dataset             Model                                  Metric  Value  Rank
Pose Estimation  Leeds Sports Poses  Stacked hourglass + Inception-resnet  PCK     93.9%  #6
