Search Results for author: Shuning Chang

Found 9 papers, 3 papers with code

Revisiting Vision Transformer from the View of Path Ensemble

no code implementations • ICCV 2023 • Shuning Chang, Pichao Wang, Hao Luo, Fan Wang, Mike Zheng Shou

Therefore, we propose path pruning and EnsembleScale techniques for improvement: the former cuts out underperforming paths, while the latter re-weights the ensemble components, together optimizing the path combination and letting the short paths focus on providing high-quality representations for subsequent paths.
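
As an illustration of this path-ensemble view, here is a minimal, hypothetical PyTorch sketch (not the authors' released code): block outputs are treated as parallel paths, a boolean mask prunes underperforming paths, and learnable per-path scales re-weight the survivors.

```python
import torch
import torch.nn as nn

class PathEnsemble(nn.Module):
    """Hypothetical sketch: combine per-path features with learnable scales."""
    def __init__(self, num_paths, keep_mask=None):
        super().__init__()
        # EnsembleScale: one learnable weight per path
        self.scales = nn.Parameter(torch.ones(num_paths))
        # Path pruning: a fixed boolean mask drops underperforming paths
        if keep_mask is None:
            keep_mask = torch.ones(num_paths, dtype=torch.bool)
        self.register_buffer("keep", keep_mask)

    def forward(self, path_outputs):
        # path_outputs: list of (B, N, C) tensors, one per path
        out = torch.zeros_like(path_outputs[0])
        for i, feat in enumerate(path_outputs):
            if self.keep[i]:  # pruned paths contribute nothing
                out = out + self.scales[i] * feat
        return out

# usage: three paths, with the (hypothetically) weakest one pruned
paths = [torch.randn(2, 196, 384) for _ in range(3)]
ens = PathEnsemble(3, keep_mask=torch.tensor([True, False, True]))
fused = ens(paths)  # (2, 196, 384)
```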

DOAD: Decoupled One Stage Action Detection Network

no code implementations • 1 Apr 2023 • Shuning Chang, Pichao Wang, Fan Wang, Jiashi Feng, Mike Zheng Shou

Specifically, one branch focuses on detection representations for actor localization, and the other on action recognition.

Action Detection Action Recognition +1
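
A minimal sketch of such a decoupled design, assuming a shared per-person feature vector feeds two parallel heads (the module and parameter names here are illustrative, not the paper's actual architecture):

```python
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Hypothetical sketch: one branch regresses actor boxes, the
    other classifies actions, from shared video features."""
    def __init__(self, in_dim, num_actions):
        super().__init__()
        self.detect_branch = nn.Sequential(  # actor localization
            nn.Linear(in_dim, in_dim), nn.ReLU(), nn.Linear(in_dim, 4)
        )
        self.action_branch = nn.Sequential(  # action classification
            nn.Linear(in_dim, in_dim), nn.ReLU(), nn.Linear(in_dim, num_actions)
        )

    def forward(self, feats):
        # feats: (B, N, in_dim) per-proposal/person features
        boxes = self.detect_branch(feats)    # (B, N, 4) box offsets
        logits = self.action_branch(feats)   # (B, N, num_actions)
        return boxes, logits

feats = torch.randn(2, 16, 256)
head = DecoupledHead(256, num_actions=80)
boxes, logits = head(feats)
```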

Making Vision Transformers Efficient from A Token Sparsification View

1 code implementation • CVPR 2023 • Shuning Chang, Pichao Wang, Ming Lin, Fan Wang, David Junhao Zhang, Rong Jin, Mike Zheng Shou

In this work, we propose a novel Semantic Token ViT (STViT) for efficient global and local vision transformers, which can also be adapted to serve as a backbone for downstream tasks.

Efficient ViTs Instance Segmentation +4
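
One plausible reading of this token sparsification idea, sketched below under the assumption that a small set of learnable queries cross-attends to all patch tokens to produce a few compact "semantic tokens" (the module name and details are hypothetical, not taken from the released code):

```python
import torch
import torch.nn as nn

class SemanticTokenModule(nn.Module):
    """Hypothetical sketch: learnable queries summarize the full token
    sequence into a few semantic tokens for later blocks to process."""
    def __init__(self, dim, num_semantic_tokens=16, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_semantic_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (B, N, dim) patch tokens -> (B, num_semantic_tokens, dim)
        q = self.queries.expand(tokens.size(0), -1, -1)
        semantic, _ = self.attn(q, tokens, tokens)
        return semantic

tokens = torch.randn(2, 196, 384)    # 14x14 patch tokens
stm = SemanticTokenModule(384, num_semantic_tokens=16)
reduced = stm(tokens)                # (2, 16, 384): far fewer tokens
```

Downstream attention then runs over 16 tokens instead of 196, which is where the efficiency gain would come from in this sketch.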

KVT: k-NN Attention for Boosting Vision Transformers

1 code implementation • 28 May 2021 • Pichao Wang, Xue Wang, Fan Wang, Ming Lin, Shuning Chang, Hao Li, Rong Jin

A key component of vision transformers is fully-connected self-attention, which is more powerful than CNNs at modelling long-range dependencies.
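
The k-NN attention idea can be sketched as follows: instead of letting each query attend to every key, keep only its top-k most similar keys before the softmax. This is an illustrative implementation, not the released KVT code:

```python
import torch

def knn_attention(q, k, v, topk=10):
    """Hypothetical sketch of k-NN attention: each query attends only
    to its top-k keys by similarity instead of all keys."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5       # (B, H, N, N)
    # threshold = the k-th largest score per query
    kth = scores.topk(topk, dim=-1).values[..., -1:]  # (B, H, N, 1)
    # mask everything below the threshold before the softmax
    masked = scores.masked_fill(scores < kth, float("-inf"))
    attn = masked.softmax(dim=-1)
    return attn @ v

q = k = v = torch.randn(2, 8, 196, 64)  # (batch, heads, tokens, head_dim)
out = knn_attention(q, k, v, topk=20)   # (2, 8, 196, 64)
```

The intuition is that restricting each query to its nearest keys filters out noisy, low-similarity interactions while keeping the long-range connections that matter.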

Augmented Transformer with Adaptive Graph for Temporal Action Proposal Generation

no code implementations • 30 Mar 2021 • Shuning Chang, Pichao Wang, Fan Wang, Hao Li, Jiashi Feng

Temporal action proposal generation (TAPG) is a fundamental and challenging task in video understanding, especially in temporal action detection.

Action Detection Temporal Action Proposal Generation +1

Toward Accurate Person-level Action Recognition in Videos of Crowded Scenes

no code implementations • 16 Oct 2020 • Li Yuan, Yichen Zhou, Shuning Chang, Ziyuan Huang, Yunpeng Chen, Xuecheng Nie, Tao Wang, Jiashi Feng, Shuicheng Yan

Prior works typically fail to handle this problem in two respects: (1) they do not exploit scene information; (2) they lack training data for crowded and complex scenes.

Action Recognition In Videos Semantic Segmentation

Towards Accurate Human Pose Estimation in Videos of Crowded Scenes

no code implementations • 16 Oct 2020 • Li Yuan, Shuning Chang, Xuecheng Nie, Ziyuan Huang, Yichen Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan

In this paper, we focus on improving human pose estimation in videos of crowded scenes from the perspectives of exploiting temporal context and collecting new data.

Optical Flow Estimation Pose Estimation
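
Given the Optical Flow Estimation tag, one common way to exploit temporal context is to warp a neighboring frame's keypoint heatmaps to the current frame via optical flow and fuse them. The sketch below is a generic illustration of that idea under these assumptions, not the authors' pipeline:

```python
import torch
import torch.nn.functional as F

def warp_heatmap(heatmap, flow):
    """Warp keypoint heatmaps from a neighboring frame to the current
    frame using backward optical flow (illustrative sketch)."""
    B, C, H, W = heatmap.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float()  # (H, W, 2) pixel coords
    # flow maps each current-frame pixel to its location in the neighbor
    grid = grid + flow.permute(0, 2, 3, 1)        # (B, H, W, 2)
    # normalize coordinates to [-1, 1] for grid_sample
    grid[..., 0] = 2 * grid[..., 0] / (W - 1) - 1
    grid[..., 1] = 2 * grid[..., 1] / (H - 1) - 1
    return F.grid_sample(heatmap, grid, align_corners=True)

# fuse the current heatmap with a flow-warped neighbor for temporal smoothing
cur = torch.randn(1, 17, 64, 48)    # 17 COCO keypoints
prev = torch.randn(1, 17, 64, 48)
flow = torch.zeros(1, 2, 64, 48)    # flow from current to previous frame
smoothed = 0.5 * (cur + warp_heatmap(prev, flow))
```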

A Simple Baseline for Pose Tracking in Videos of Crowded Scenes

no code implementations • 16 Oct 2020 • Li Yuan, Shuning Chang, Ziyuan Huang, Yichen Zhou, Yunpeng Chen, Xuecheng Nie, Francis E. H. Tay, Jiashi Feng, Shuicheng Yan

This paper presents our solution to the ACM MM challenge Large-scale Human-centric Video Analysis in Complex Events [lin2020human]; specifically, we focus on Track 3: Crowd Pose Tracking in Complex Events.

Multi-Object Tracking Optical Flow Estimation +1
