no code implementations • 6 Mar 2020 • Dong Yang, Monica Mengqi Li, Hong Fu, Jicong Fan, Zhao Zhang, Howard Leung
Overall, our work unifies graph embedding features to promote systematic research on human action recognition.
no code implementations • 23 Nov 2020 • Xianjin Chao, Yanrui Bin, Wenqing Chu, Xuan Cao, Yanhao Ge, Chengjie Wang, Jilin Li, Feiyue Huang, Howard Leung
Specifically, we take both the historical motion sequences and the coarse prediction as input to our cascaded refinement network to predict refined human motion, and we strengthen the refinement network with adversarial error augmentation.
no code implementations • 8 Jun 2021 • Manli Zhu, Qianhui Men, Edmond S. L. Ho, Howard Leung, Hubert P. H. Shum
To highlight the capacity of the deep network in modelling input features, we utilize raw joint positions instead of hand-crafted features.
no code implementations • 1 Oct 2021 • Qianhui Men, Hubert P. H. Shum, Edmond S. L. Ho, Howard Leung
Our key insights are two-fold.
1 code implementation • 18 Aug 2022 • Manli Zhu, Qianhui Men, Edmond S. L. Ho, Howard Leung, Hubert P. H. Shum
As a result, we propose a solution that explicitly takes both individual joint features and inter-joint features as input, relieving the system from the need of discovering more complicated features from small data.
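The input construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and feature layout are hypothetical, assuming "individual joint features" means raw 3D joint positions and "inter-joint features" means pairwise relative offsets between joints.

```python
import numpy as np

def build_joint_features(joints):
    """Hypothetical sketch: combine individual joint positions with
    inter-joint (pairwise relative) features for one skeleton frame.

    joints: (N, 3) array of 3D joint positions.
    Returns a flat feature vector of length N*3 + N*N*3.
    """
    # Individual joint features: the raw 3D positions themselves.
    individual = joints                              # (N, 3)
    # Inter-joint features: offset from every joint to every other joint.
    inter = joints[None, :, :] - joints[:, None, :]  # (N, N, 3)
    # Concatenate both feature groups into one explicit input vector,
    # so the network need not discover relational features on its own.
    return np.concatenate([individual.ravel(), inter.ravel()])

frame = np.random.rand(17, 3)      # e.g. a 17-joint skeleton
feat = build_joint_features(frame)
print(feat.shape)                  # (918,) = 17*3 + 17*17*3
```

Feeding both feature groups explicitly is one plausible way to relieve a model trained on small data from having to learn inter-joint relations from positions alone.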
no code implementations • 3 Apr 2023 • Qianhui Men, Edmond S. L. Ho, Hubert P. H. Shum, Howard Leung
Learning a view-invariant representation is key to improving feature discrimination power for skeleton-based action recognition.