VPN++: Rethinking Video-Pose embeddings for understanding Activities of Daily Living

17 May 2021 · Srijan Das, Rui Dai, Di Yang, Francois Bremond

Many attempts have been made to combine RGB and 3D poses for the recognition of Activities of Daily Living (ADL). ADL may look very similar and often necessitate modeling fine-grained details to distinguish them. Because recent 3D ConvNets are too rigid to capture the subtle visual patterns across an action, this research direction is dominated by methods combining RGB and 3D poses. But computing 3D poses from the RGB stream is expensive in the absence of appropriate sensors, which limits the use of such approaches in real-world applications requiring low latency. How, then, can one best take advantage of 3D poses for recognizing ADL? To this end, we propose an extension of a pose-driven attention mechanism, the Video-Pose Network (VPN), exploring two distinct directions: one transfers pose knowledge into RGB through feature-level distillation, and the other mimics pose-driven attention through attention-level distillation. Finally, these two approaches are integrated into a single model, which we call VPN++. We show that VPN++ is not only effective but also provides a substantial speed-up and high resilience to noisy poses. VPN++, with or without 3D poses, outperforms representative baselines on four public datasets. Code is available at https://github.com/srijandas07/vpnplusplus.
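The two distillation routes described above lend themselves to a compact illustration. The following is a minimal PyTorch sketch, not the authors' implementation: the tensor names (`rgb_feat`, `pose_feat`, `student_attn`, `teacher_attn`) and the specific loss choices (L2 for feature-level distillation, KL divergence for attention-level distillation) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(rgb_feat: torch.Tensor,
                              pose_feat: torch.Tensor) -> torch.Tensor:
    """Feature-level distillation: pull the RGB student's embedding
    towards the pose teacher's embedding (hypothetical L2 formulation)."""
    return F.mse_loss(rgb_feat, pose_feat.detach())

def attention_distillation_loss(student_attn: torch.Tensor,
                                teacher_attn: torch.Tensor) -> torch.Tensor:
    """Attention-level distillation: make the RGB branch mimic the
    pose-driven attention weights (hypothetical KL formulation).
    F.kl_div expects log-probabilities as input, probabilities as target."""
    return F.kl_div(torch.log(student_attn + 1e-8),
                    teacher_attn.detach(), reduction='batchmean')

# Toy example: batch of 8 clips, 512-d embeddings, 16 attention weights.
rgb_feat, pose_feat = torch.randn(8, 512), torch.randn(8, 512)
student_attn = torch.softmax(torch.randn(8, 16), dim=-1)
teacher_attn = torch.softmax(torch.randn(8, 16), dim=-1)
loss = feature_distillation_loss(rgb_feat, pose_feat) \
     + attention_distillation_loss(student_attn, teacher_attn)
```

In this sketch the pose branch acts as the teacher and its outputs are detached, so gradients only update the RGB student. That design choice is consistent with the abstract's claim that VPN++ can run with or without 3D poses: once trained, the pose stream can be dropped at inference, which is where the speed-up would come from.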


Results from the Paper


Ranked #9 on Action Recognition on NTU RGB+D 120 (using extra training data)

Task                              | Dataset       | Model              | Metric Name              | Metric Value | Global Rank
----------------------------------|---------------|--------------------|--------------------------|--------------|------------
Action Recognition                | NTU RGB+D 120 | VPN++ (RGB + Pose) | Accuracy (Cross-Subject) | 92.5         | #5
Action Recognition                | NTU RGB+D 120 | VPN++ (RGB + Pose) | Accuracy (Cross-Setup)   | 90.7         | #9
Skeleton Based Action Recognition | N-UCLA        | VPN++ (RGB + Pose) | Accuracy                 | 93.5         | #13

Both NTU RGB+D 120 results use extra training data.

Methods


No methods listed for this paper.