Search Results for author: Yongjian Fu

Found 6 papers, 1 paper with code

When Video Classification Meets Incremental Classes

no code implementations · 30 Jun 2021 · Hanbin Zhao, Xin Qin, Shihao Su, Yongjian Fu, Zibo Lin, Xi Li

With the rapid development of social media, a tremendous number of videos with new classes is generated daily, raising an urgent demand for video classification methods that can continuously learn new classes while maintaining knowledge of old videos under limited storage and computing resources.

Tasks: Classification, class-incremental learning (+3)

Memory Efficient Class-Incremental Learning for Image Classification

no code implementations · 4 Aug 2020 · Hanbin Zhao, Hui Wang, Yongjian Fu, Fei Wu, Xi Li

To cope with the forgetting problem, many CIL methods transfer the knowledge of old classes by preserving some exemplar samples in a size-constrained memory buffer.
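The exemplar mechanism the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class name, the round-robin shrinking, and the "keep the first k" selection are assumptions (real methods such as herding pick exemplars by closeness to the class feature mean).

```python
# Hedged sketch of a size-constrained exemplar memory buffer for
# class-incremental learning (CIL). All names here are illustrative.

class ExemplarBuffer:
    """Stores at most `capacity` exemplars, rebalanced across seen classes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.exemplars = {}  # class label -> list of stored samples

    def add_class(self, label, samples):
        """Register a new class, then shrink every class to fit the budget."""
        self.exemplars[label] = list(samples)
        per_class = self.capacity // len(self.exemplars)
        # Keep only the first `per_class` exemplars of each class.
        # (Herding-based methods would rank samples by distance to
        # the class mean in feature space instead.)
        for c in self.exemplars:
            self.exemplars[c] = self.exemplars[c][:per_class]

    def all_exemplars(self):
        """Flattened (label, sample) pairs for replay during training."""
        return [(c, x) for c, xs in self.exemplars.items() for x in xs]


buf = ExemplarBuffer(capacity=4)
buf.add_class(0, ["img_a", "img_b", "img_c", "img_d"])
buf.add_class(1, ["img_e", "img_f", "img_g"])
# After two classes, each keeps capacity // 2 = 2 exemplars.
```

The key property is that total memory stays fixed as classes accumulate; old classes give up exemplars to make room for new ones.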

Tasks: Classification, class-incremental learning (+4)

What and Where: Learn to Plug Adapters via NAS for Multi-Domain Learning

no code implementations · 24 Jul 2020 · Hanbin Zhao, Hao Zeng, Xin Qin, Yongjian Fu, Hui Wang, Bourahla Omar, Xi Li

As an important and challenging problem, multi-domain learning (MDL) typically seeks a set of effective lightweight domain-specific adapter modules plugged into a common domain-agnostic network.
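The backbone-plus-adapter setup can be illustrated with a toy example. This is a sketch under stated assumptions, not the paper's NAS-based method: the single linear backbone layer, the residual adapter form, and the domain names are all invented for illustration.

```python
import numpy as np

# Toy multi-domain learning layout: one shared domain-agnostic backbone,
# plus a lightweight residual adapter per domain. Purely illustrative.

rng = np.random.default_rng(0)

def backbone(x, w_shared):
    """Shared feature extractor (a single linear layer + ReLU here)."""
    return np.maximum(w_shared @ x, 0.0)

def adapter(h, w_dom):
    """Lightweight domain-specific adapter with a residual connection,
    so shared features pass through unchanged plus a small correction."""
    return h + w_dom @ h

w_shared = rng.standard_normal((8, 4))
# Each domain only adds a small 8x8 matrix, not a whole new network.
adapters = {d: 0.01 * rng.standard_normal((8, 8)) for d in ("sketch", "photo")}

x = rng.standard_normal(4)
h = backbone(x, w_shared)
outputs = {d: adapter(h, w) for d, w in adapters.items()}
```

The design point the abstract hints at is parameter efficiency: the per-domain cost is only the small adapter, while the backbone is shared; the paper's contribution is deciding, via NAS, what each adapter looks like and where it is plugged in.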

Tasks: Neural Architecture Search

MgSvF: Multi-Grained Slow vs. Fast Framework for Few-Shot Class-Incremental Learning

no code implementations · 28 Jun 2020 · Hanbin Zhao, Yongjian Fu, Mintong Kang, Qi Tian, Fei Wu, Xi Li

As a challenging problem, few-shot class-incremental learning (FSCIL) continually learns a sequence of tasks, confronting the dilemma between slow forgetting of old knowledge and fast adaptation to new knowledge.

Tasks: class-incremental learning, Incremental Learning

Semantic Neighborhood-Aware Deep Facial Expression Recognition

no code implementations · 27 Apr 2020 · Yongjian Fu, Xintian Wu, Xi Li, Zhijie Pan, Daxin Luo

Unlike many other attributes, facial expression can change continuously; therefore, a slight semantic change in the input should lead to only a small fluctuation in the output.

Tasks: Facial Expression Recognition

DAPnet: A Double Self-attention Convolutional Network for Point Cloud Semantic Labeling

1 code implementation · 18 Apr 2020 · Li Chen, Zewei Xu, Yongjian Fu, Haozhe Huang, Shaowen Wang, Haifeng Li

Incorporating the double self-attention module yields an average improvement of 7% in per-class accuracy.
