1 code implementation • 3 May 2024 • Canhui Tang, Sanping Zhou, Yizhe Li, Yonghao Dong, Le Wang
The success of knowledge distillation mainly relies on maintaining a feature discrepancy between the teacher and student models, under two assumptions: (1) the teacher model can jointly represent two different distributions for the normal and abnormal patterns, while (2) the student model can only reconstruct the normal distribution.
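A minimal sketch of the teacher-student discrepancy idea this assumption supports: the student, trained only on normal data, tracks the teacher's features on normal regions, so large feature discrepancy flags anomalies. The backbone, layer choice, and cosine-distance scoring below are illustrative assumptions, not this paper's exact setup.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

teacher = resnet18(weights=None).eval()   # in practice a frozen, pretrained teacher
student = resnet18(weights=None).train()  # trained to match the teacher on normal data only

def features(model, x):
    # Use activations up to layer2 as the feature map (an assumption).
    x = model.conv1(x); x = model.bn1(x); x = model.relu(x); x = model.maxpool(x)
    x = model.layer1(x); x = model.layer2(x)
    return x

def anomaly_map(x):
    with torch.no_grad():
        t = F.normalize(features(teacher, x), dim=1)
    s = F.normalize(features(student, x), dim=1)
    # Large teacher-student discrepancy -> likely abnormal region.
    return 1.0 - (t * s).sum(dim=1, keepdim=True)

x = torch.randn(1, 3, 256, 256)
print(anomaly_map(x).shape)  # torch.Size([1, 1, 32, 32])
```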
no code implementations • 9 Mar 2024 • Yonghao Dong, Le Wang, Sanping Zhou, Gang Hua, Changyin Sun
Previous studies have tried to tackle this problem by leveraging a portion of the trajectory data from the target domain to adapt the model.
no code implementations • 27 Nov 2023 • Yonghao Dong, Le Wang, Sanping Zhou, Gang Hua, Changyin Sun
Specifically, TSNet learns the negative-removed characters in the sparse character representation stream to improve the trajectory embedding obtained in the trajectory representation stream.
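A rough two-stream sketch in the spirit of this description: a trajectory stream encodes past coordinates, and a sparse character stream produces a gated feature that refines the trajectory embedding. All layer sizes and the fusion-by-gating choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoStreamSketch(nn.Module):
    def __init__(self, char_dim=16, hidden=64, horizon=12):
        super().__init__()
        self.traj_enc = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.char_enc = nn.Sequential(nn.Linear(char_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden))
        # Soft gate that suppresses unhelpful ("negative") character dimensions.
        self.gate = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())
        self.head = nn.Linear(hidden, 2 * horizon)
        self.horizon = horizon

    def forward(self, past_xy, character):
        _, (h, _) = self.traj_enc(past_xy)        # trajectory embedding
        traj = h[-1]
        char = self.char_enc(character)           # character features
        refined = traj + self.gate(char) * char   # sparse character refines the trajectory embedding
        return self.head(refined).view(-1, self.horizon, 2)

model = TwoStreamSketch()
pred = model(torch.randn(4, 8, 2), torch.randn(4, 16))
print(pred.shape)  # torch.Size([4, 12, 2])
```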
no code implementations • ICCV 2023 • Yonghao Dong, Le Wang, Sanping Zhou, Gang Hua
Specifically, SICNet learns comprehensive sparse instances, i.e., representative points of the future trajectory, through a mask generated by a long short-term memory encoder, and uses a memory mechanism to store and retrieve such sparse instances.
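A rough sketch of the sparse-instance idea as described: an LSTM encoder produces a soft mask selecting representative future points, and a small learnable memory stores prototypes retrieved by attention. Dimensions and the attention read-out are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseInstanceSketch(nn.Module):
    def __init__(self, horizon=12, hidden=64, mem_slots=32):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.mask_head = nn.Linear(hidden, horizon)                  # soft mask over future steps
        self.memory = nn.Parameter(torch.randn(mem_slots, hidden))   # stores sparse-instance prototypes
        self.query = nn.Linear(hidden, hidden)

    def forward(self, past_xy):
        _, (h, _) = self.encoder(past_xy)
        h = h[-1]                                                    # (B, hidden)
        mask = torch.sigmoid(self.mask_head(h))                      # selects representative future points
        attn = F.softmax(self.query(h) @ self.memory.t(), dim=-1)    # retrieve from memory
        read = attn @ self.memory                                    # (B, hidden)
        return mask, read

model = SparseInstanceSketch()
mask, read = model(torch.randn(4, 8, 2))
print(mask.shape, read.shape)  # torch.Size([4, 12]) torch.Size([4, 64])
```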