Search Results for author: Shidong Wang

Found 7 papers, 2 papers with code

Part-aware Prototypical Graph Network for One-shot Skeleton-based Action Recognition

no code implementations19 Aug 2022 Tailin Chen, Desen Zhou, Jian Wang, Shidong Wang, Qian He, Chuanyang Hu, Errui Ding, Yu Guan, Xuming He

In this paper, we study the problem of one-shot skeleton-based action recognition, which poses unique challenges in learning transferable representations from base classes to novel classes, particularly for fine-grained actions.

Action Recognition, Meta-Learning, +1
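The title above refers to a prototypical approach to one-shot recognition. As a purely illustrative aid, the sketch below shows generic prototypical classification (mean support embeddings as class prototypes, nearest-prototype assignment); it is an assumption-laden stand-in, not the paper's part-aware graph network, and the encoder producing the embeddings is left abstract.

```python
# Hedged sketch of generic prototypical one-shot classification (not the paper's
# part-aware graph model): class prototypes are mean support embeddings, and a
# query is assigned to the nearest prototype. All shapes and names are illustrative.
import torch

def prototypical_logits(support_emb, support_labels, query_emb, num_classes):
    """support_emb: (S, D) embeddings of support skeleton sequences,
    support_labels: (S,) integer class ids, query_emb: (Q, D)."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])                                              # (C, D) one prototype per class
    dists = torch.cdist(query_emb, prototypes)      # (Q, C) Euclidean distances
    return -dists                                   # higher logit = closer prototype

# Toy usage with random embeddings (an assumed encoder would normally produce these).
support = torch.randn(5, 64); labels = torch.arange(5)   # 5-way, 1-shot episode
query = torch.randn(3, 64)
pred = prototypical_logits(support, labels, query, num_classes=5).argmax(dim=1)
```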

Boosting Generative Zero-Shot Learning by Synthesizing Diverse Features with Attribute Augmentation

1 code implementation23 Dec 2021 Xiaojie Zhao, Yuming Shen, Shidong Wang, Haofeng Zhang

Most generative ZSL methods use category semantic attributes plus Gaussian noise to generate visual features.

Zero-Shot Learning
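The abstract above notes the common generative ZSL recipe of feeding class attributes plus Gaussian noise to a generator that synthesizes visual features. The sketch below illustrates that generic recipe only; the layer sizes, dimensions, and names are assumptions rather than the paper's model.

```python
# Hedged sketch of the attribute-conditioned generator pattern described above
# (attributes + Gaussian noise -> synthetic visual features); not the paper's
# exact model, and the layer sizes are assumptions.
import torch
import torch.nn as nn

class AttributeConditionedGenerator(nn.Module):
    def __init__(self, attr_dim=85, noise_dim=85, feat_dim=2048, hidden=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + noise_dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden, feat_dim),
            nn.ReLU(),                      # visual (e.g. ResNet) features are non-negative
        )

    def forward(self, attributes, noise):
        # Concatenate class attributes with Gaussian noise and map to feature space.
        return self.net(torch.cat([attributes, noise], dim=1))

# Synthesize features for an unseen class from its (toy) attribute vector.
gen = AttributeConditionedGenerator()
attrs = torch.rand(32, 85)                  # 32 copies of a class attribute vector
z = torch.randn(32, 85)                     # Gaussian noise provides sample diversity
fake_features = gen(attrs, z)               # (32, 2048) synthetic visual features
```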

LSTA-Net: Long short-term Spatio-Temporal Aggregation Network for Skeleton-based Action Recognition

no code implementations1 Nov 2021 Tailin Chen, Shidong Wang, Desen Zhou, Yu Guan

We design our model as a pure factorised architecture that alternately performs spatial feature aggregation and temporal feature aggregation.

Action Recognition, Skeleton-Based Action Recognition
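The abstract above describes a factorised architecture that alternates spatial and temporal feature aggregation. The following sketch shows that alternation in its simplest generic form (graph-style aggregation over joints, then a temporal convolution over frames); the adjacency handling and layer sizes are assumptions, not LSTA-Net's actual blocks.

```python
# Hedged sketch of a factorised spatial-then-temporal block for skeleton input
# (N, C, T, V) = (batch, channels, frames, joints). This illustrates the general
# alternation only; it is not LSTA-Net's aggregation scheme.
import torch
import torch.nn as nn

class FactorisedSTBlock(nn.Module):
    def __init__(self, in_ch, out_ch, adjacency, t_kernel=9):
        super().__init__()
        self.register_buffer("A", adjacency)                  # (V, V) normalised joint graph
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(out_ch, out_ch,
                                  kernel_size=(t_kernel, 1),
                                  padding=(t_kernel // 2, 0))
        self.relu = nn.ReLU()

    def forward(self, x):                                     # x: (N, C, T, V)
        x = torch.einsum("nctv,vw->nctw", x, self.A)          # spatial aggregation over joints
        x = self.relu(self.spatial(x))                        # per-joint channel mixing
        x = self.relu(self.temporal(x))                       # temporal aggregation over frames
        return x

# Toy usage: 25-joint skeleton, 64 frames, identity adjacency as a placeholder.
A = torch.eye(25)
block = FactorisedSTBlock(3, 64, A)
out = block(torch.randn(8, 3, 64, 25))                        # (8, 64, 64, 25)
```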

Learning Multi-Granular Spatio-Temporal Graph Network for Skeleton-based Action Recognition

1 code implementation10 Aug 2021 Tailin Chen, Desen Zhou, Jian Wang, Shidong Wang, Yu Guan, Xuming He, Errui Ding

The task of skeleton-based action recognition remains a core challenge in human-centred scene understanding due to the multiple granularities and large variation in human motion.

Action Recognition, Scene Understanding, +1

Invariant Deep Compressible Covariance Pooling for Aerial Scene Categorization

no code implementations11 Nov 2020 Shidong Wang, Yi Ren, Gerard Parr, Yu Guan, Ling Shao

In this article, we propose a novel invariant deep compressible covariance pooling (IDCCP) method to address nuisance variations in aerial scene categorization.

Image Categorization
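IDCCP builds on covariance (second-order) pooling of CNN feature maps. The sketch below shows only plain covariance pooling as background; the invariance and compression components that define IDCCP are not reproduced, and all shapes are illustrative.

```python
# Hedged sketch of plain covariance (second-order) pooling over a CNN feature map,
# shown only to illustrate the general operation; IDCCP's invariance and compression
# steps are not reproduced here.
import torch

def covariance_pooling(feat):
    """feat: (N, C, H, W) feature map -> (N, C, C) channel covariance matrices."""
    n, c, h, w = feat.shape
    x = feat.reshape(n, c, h * w)                    # treat spatial positions as samples
    x = x - x.mean(dim=2, keepdim=True)              # centre each channel
    cov = x @ x.transpose(1, 2) / (h * w - 1)        # (N, C, C) sample covariance
    # A small ridge keeps the matrices positive definite before any matrix-log style map.
    return cov + 1e-5 * torch.eye(c, device=feat.device)

pooled = covariance_pooling(torch.randn(4, 256, 7, 7))   # (4, 256, 256)
```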

SOFA-Net: Second-Order and First-order Attention Network for Crowd Counting

no code implementations9 Aug 2020 Haoran Duan, Shidong Wang, Yu Guan

To obtain an appropriate crowd representation, in this work we propose SOFA-Net (Second-Order and First-Order Attention Network): second-order statistics are extracted to retain the selectivity of channel-wise spatial information for dense heads, while first-order statistics, which can enhance feature discrimination in head areas, are used as complementary information.

Crowd Counting
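The abstract above combines second-order and first-order statistics as complementary attention cues. Below is a hedged, generic sketch of fusing per-channel mean and variance into a channel-attention weight; the fusion MLP and its sizes are assumptions, not SOFA-Net's modules.

```python
# Hedged sketch of combining first-order (mean) and second-order (per-channel variance)
# statistics into a channel-attention weight; layer sizes and the fusion scheme are
# assumptions, not SOFA-Net's actual design.
import torch
import torch.nn as nn

class FirstSecondOrderChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                    # x: (N, C, H, W)
        mean = x.mean(dim=(2, 3))                            # first-order statistic, (N, C)
        var = x.var(dim=(2, 3))                              # second-order statistic, (N, C)
        weights = self.mlp(torch.cat([mean, var], dim=1))    # (N, C) channel attention
        return x * weights.unsqueeze(-1).unsqueeze(-1)       # reweight channels

att = FirstSecondOrderChannelAttention(64)
out = att(torch.randn(2, 64, 32, 32))                        # same shape, reweighted channels
```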
