1 code implementation • 1 Jun 2023 • Shengqin Jiang, Yaoyu Fang, Haokui Zhang, Qingshan Liu, Yuankai Qi, Yang Yang, Peng Wang
Rehearsal-based video incremental learning often employs knowledge distillation to mitigate catastrophic forgetting of previously learned data.
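The distillation penalty referred to above can be illustrated with a minimal sketch: a temperature-softened KL divergence that keeps the student's predictions close to the old model's on rehearsed samples. This is the generic formulation (Hinton-style), not necessarily the exact loss used in the paper; all function names here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    In rehearsal-based incremental learning, the 'teacher' is typically a
    frozen snapshot of the model before the new task, and this term penalises
    drift on previously learned data (mitigating catastrophic forgetting).
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    # Scale by T^2 so gradients keep a comparable magnitude across temperatures.
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When student and teacher agree, the loss is zero; any disagreement on rehearsed samples is penalised.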
no code implementations • 18 Mar 2023 • Shengqin Jiang, Bowen Li, Fengna Cheng, Qingshan Liu
Moreover, we propose a feature relation distillation method that helps the student branch better capture the evolution of inter-layer features by constructing a new inter-layer relationship matrix.
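One common way to build such an inter-layer relationship matrix is pairwise similarity between (flattened) layer features; the teacher's matrix then supervises the student's. The sketch below uses cosine similarity and a mean-squared matching loss purely for illustration — the paper's exact matrix construction is not specified here, and it assumes the per-layer features have already been pooled or projected to a common length.

```python
import math

def relation_matrix(layer_feats):
    """Pairwise cosine similarity between flattened per-layer feature vectors,
    summarising how features relate (and evolve) across network depth."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    return [[cos(f, g) for g in layer_feats] for f in layer_feats]

def relation_distillation_loss(student_feats, teacher_feats):
    # Mean squared difference between the student's and teacher's
    # inter-layer relationship matrices.
    s = relation_matrix(student_feats)
    t = relation_matrix(teacher_feats)
    n = len(s)
    return sum((s[i][j] - t[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)
```

Matching relation matrices (rather than raw features) lets the student mimic how features transform between layers without requiring identical feature dimensions per layer.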
1 code implementation • 29 Dec 2022 • Shengqin Jiang, Qing Wang, Fengna Cheng, Yuankai Qi, Qingshan Liu
In this paper, we build the first evolving object counting dataset and propose a unified object counting network as the first attempt to address this task.
no code implementations • 18 Dec 2018 • Shengqin Jiang, Xiaobo Lu, Yinjie Lei, Lingqiao Liu
Our rationale is that mask prediction is better modeled as a binary segmentation problem, and that estimating the density becomes easier once the mask is known.
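The rationale can be sketched as follows: a binary foreground mask gates the predicted density map so that density responses are suppressed outside predicted object regions, and the count is the integral of the gated map. This is a simplified illustration of the idea, not the paper's actual architecture; the function names are hypothetical.

```python
def masked_density(density_pred, mask):
    """Gate a predicted density map with a binary segmentation mask,
    zeroing out responses outside the predicted foreground.
    Both inputs are 2-D lists of the same shape."""
    return [[d * m for d, m in zip(d_row, m_row)]
            for d_row, m_row in zip(density_pred, mask)]

def count_from_density(density):
    # The object count is the integral (sum) of the density map.
    return sum(sum(row) for row in density)
```

With the mask handled by a dedicated binary-segmentation branch, the density branch only has to regress magnitudes inside known foreground regions, which is an easier problem than estimating density everywhere.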