no code implementations • 17 Feb 2023 • Shengqin Wang, Yongji Zhang, Hong Qi, Minghao Zhao, Yu Jiang
With multiple spatial static hypergraphs and dynamic TPH, our network can learn more complete spatial-temporal features.
1 code implementation • 19 Oct 2022 • Xueru Wen, Changjiang Zhou, Haotian Tang, Luguang Liang, Yu Jiang, Hong Qi
Named entity recognition is a fundamental task in natural language processing, identifying the span and category of entities in unstructured texts.
1 code implementation • 31 May 2022 • Shengqin Wang, Yongji Zhang, Minghao Zhao, Hong Qi, Kai Wang, Fenglin Wei, Yu Jiang
Skeleton-based action recognition methods are limited by the semantic extraction of spatio-temporal skeletal maps.
Ranked #7 on Skeleton Based Action Recognition on N-UCLA
no code implementations • 29 Apr 2021 • Zhiyuan Wu, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi
To further improve the robustness of the student, we extend SD to Enhanced Spirit Distillation (ESD), which exploits more comprehensive knowledge by introducing a proximity domain, similar to the target domain, for feature extraction.
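One way to read the proximity-domain idea is that student training batches are supplemented with samples from a domain similar to the target domain, so that general features can be learned despite insufficient target data. A minimal NumPy sketch under that assumption (the function name, `proximity_ratio` parameter, and mixing scheme are illustrative, not the paper's exact procedure):

```python
import numpy as np

def mixed_batch(target_x, proximity_x, batch_size=8, proximity_ratio=0.25, seed=0):
    """Draw a training batch mixing target-domain and proximity-domain samples.

    proximity_ratio (illustrative) controls what fraction of the batch comes
    from the proximity domain, whose samples supplement the scarce
    target-domain data during student training.
    """
    rng = np.random.default_rng(seed)
    n_prox = int(batch_size * proximity_ratio)
    n_tgt = batch_size - n_prox
    # Sample without replacement from each domain, then concatenate.
    tgt = target_x[rng.choice(len(target_x), n_tgt, replace=False)]
    prox = proximity_x[rng.choice(len(proximity_x), n_prox, replace=False)]
    return np.concatenate([tgt, prox], axis=0)
```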
no code implementations • 25 Mar 2021 • Zhiyuan Wu, Yu Jiang, Chupeng Cui, Zongmin Yang, Xinhui Xue, Hong Qi
Inspired by the ideas of Fine-tuning-based Transfer Learning (FTT) and feature-based knowledge distillation, we propose Spirit Distillation (SD), a new knowledge distillation method for cross-domain knowledge transfer and efficient training on insufficient data. SD allows the student network to mimic the teacher network in extracting general features, so that a compact and accurate student network can be trained for real-time semantic segmentation of road scenes.
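Feature-based distillation, as described above, trains the student to reproduce the teacher's intermediate features alongside its own task loss. A minimal NumPy sketch of such a combined loss (the layer choice, loss form, and `alpha` weighting are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats, student_logits, labels, alpha=0.5):
    """Task loss plus a feature-mimicking term for knowledge distillation.

    student_feats / teacher_feats: (batch, dim) intermediate features.
    student_logits: (batch, classes) raw scores; labels: (batch,) class ids.
    alpha weights the feature-mimicking term (illustrative default).
    """
    # Feature-mimicking term: mean squared error between the student's
    # and the (frozen) teacher's intermediate features.
    feat_loss = np.mean((student_feats - teacher_feats) ** 2)

    # Task term: cross-entropy on the student's own predictions,
    # computed via a numerically stable log-softmax.
    shifted = student_logits - student_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    task_loss = -np.mean(log_probs[np.arange(len(labels)), labels])

    return task_loss + alpha * feat_loss
```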
no code implementations • 26 Oct 2020 • Zhiyuan Wu, Hong Qi, Yu Jiang, Minghao Zhao, Chupeng Cui, Zongmin Yang, Xinhui Xue
Model compression becomes a recent trend due to the requirement of deploying neural networks on embedded and mobile devices.