1 code implementation • 3 Jan 2023 • Yushun Dong, Binchi Zhang, Yiling Yuan, Na Zou, Qi Wang, Jundong Li
Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model).
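As a rough illustration of the distillation idea described above (not the paper's specific method), a standard KD objective mixes a soft-label term, which pulls the student's temperature-softened predictions toward the teacher's, with the usual supervised loss. The hyperparameter names below (temperature, alpha) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Minimal sketch of a generic KD loss: soft-label KL term + hard-label CE."""
    # Soften both distributions; the teacher provides the target distribution.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2 to keep its gradient magnitude comparable
    # to the cross-entropy term.
    distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * ce
```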
no code implementations • 2 Mar 2017 • Yiling Yuan, Tao Yang, Hui Feng, Bo Hu, Jianqiu Zhang, Bin Wang, Qiyong Lu
We consider a D2D-enabled cellular network where user equipments (UEs) owned by rational users are incentivized to form D2D pairs using tokens.
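To make the token-based incentive concrete, the sketch below (a hypothetical illustration, not the paper's mechanism) models each rational UE as holding a token balance: a requester pays one token to form a D2D pair, and a helper agrees only if it values the earned token more than its relaying cost. The field names (relay_cost, token_value) are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class UE:
    tokens: int          # current token balance
    relay_cost: float    # cost (e.g., energy) of helping once
    token_value: float   # UE's expected future benefit of holding one token

def try_form_pair(requester: UE, helper: UE) -> bool:
    """Form a D2D pair if the requester can pay and the helper finds it worthwhile."""
    if requester.tokens >= 1 and helper.token_value > helper.relay_cost:
        requester.tokens -= 1
        helper.tokens += 1
        return True
    return False
```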