no code implementations • 18 Mar 2024 • Yejia Liu, Shijin Duan, Xiaolin Xu, Shaolei Ren
Knowledge distillation is a popular method for improving the accuracy of a small model.
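The entry above names knowledge distillation but includes no code here; as a minimal sketch only (function names and temperature value are illustrative, not from the paper), the standard softened-softmax distillation loss looks like:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in Hinton-style distillation.
    A hedged illustration, not the paper's actual objective."""
    p = softmax(np.asarray(teacher_logits, dtype=float) / T)
    q = softmax(np.asarray(student_logits, dtype=float) / T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student matches the teacher's logits and positive otherwise, so minimizing it pulls the small model's output distribution toward the teacher's.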
no code implementations • 14 Dec 2023 • Tianchen Deng, Siyang Liu, Xuan Wang, Yejia Liu, Danwei Wang, Weidong Chen
Implicit neural representation has demonstrated promising results in view synthesis for large and complex scenes.
1 code implementation • 23 Feb 2023 • Yejia Liu, Shijin Duan, Xiaolin Xu, Shaolei Ren
Fast model updates for unseen tasks on intelligent edge devices are crucial but also challenging due to limited computational power.
1 code implementation • 16 Oct 2022 • Yejia Liu, Wang Zhu, Shaolei Ren
To provide an approximate solution to this problem in the online continual learning setting, we further propose the Global Pseudo-task Simulation (GPS), which mimics future catastrophic forgetting of the current task by permutation.
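The snippet above says GPS simulates future tasks of the current task by permutation; as a hedged sketch of that idea only (permuted-MNIST-style feature permutation; the function name and interface are hypothetical, not from the paper's code):

```python
import numpy as np

def make_pseudo_task(inputs, seed):
    """Generate a pseudo-task by applying a fixed random permutation
    to the flattened input features of the current task.
    Illustrative only; the paper's GPS procedure may differ."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(inputs.shape[1])
    return inputs[:, perm]
```

Each seed yields a distinct but reproducible pseudo-task, so training on it can probe how much the current task would be forgotten by a future task without access to real future data.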
1 code implementation • 18 Mar 2022 • Shijin Duan, Yejia Liu, Shaolei Ren, Xiaolin Xu
Thanks to its tiny storage and efficient execution, Hyperdimensional Computing (HDC) is emerging as a lightweight learning framework on resource-constrained hardware.
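As a minimal sketch of the HDC primitives the entry refers to (bipolar hypervectors with binding and bundling; the helper names are illustrative, not from the paper):

```python
import numpy as np

def random_hv(dim, rng):
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=dim)

def bind(a, b):
    """Bind two hypervectors (element-wise multiply); the result is
    nearly orthogonal to both operands, and binding again with b
    recovers a since b * b = 1."""
    return a * b

def bundle(hvs):
    """Bundle hypervectors by element-wise majority vote (sign of sum)."""
    s = np.sum(hvs, axis=0)
    return np.where(s >= 0, 1, -1)

def similarity(a, b):
    """Normalized dot-product similarity in [-1, 1]."""
    return float(np.dot(a, b)) / len(a)
```

Because all operations are element-wise on fixed-width vectors, both storage and compute stay cheap, which is what makes HDC attractive on constrained hardware.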
no code implementations • 26 Aug 2021 • Yejia Liu, Weiyuan Wu, Lampros Flokas, Jiannan Wang, Eugene Wu
The SQL-based training data debugging framework has proved effective at fixing this kind of issue in a non-federated learning setting.
1 code implementation • 23 Feb 2018 • Oliver Schulte, Yejia Liu, Chao Li
Successful previous approaches have built a predictive model based on player features, or derived performance predictions from the observed performance of comparable players in a cohort.