Lightweight Neural Network with Knowledge Distillation for CSI Feedback

31 Oct 2022 · Yiming Cui, Jiajia Guo, Zheng Cao, Huaze Tang, Chao-Kai Wen, Shi Jin, Xin Wang, Xiaolin Hou

Deep learning has shown promise in enhancing channel state information (CSI) feedback. However, many studies indicate that better feedback performance often comes with higher computational complexity. Pursuing a better performance-complexity tradeoff is therefore crucial for practical deployment, especially on computation-limited devices, which may otherwise have to rely on a lightweight autoencoder with unfavorable performance. To this end, this paper introduces knowledge distillation (KD), in which knowledge from a complicated teacher autoencoder is transferred to a lightweight student autoencoder to improve its performance. Specifically, two methods are proposed. First, an autoencoder KD-based method trains the student autoencoder to mimic the reconstructed CSI of a pretrained teacher autoencoder. Second, an encoder KD-based method reduces training overhead by performing KD only on the student encoder. Additionally, a variant of encoder KD is introduced to protect the intellectual property of user equipment and base station vendors. Numerical simulations demonstrate that the proposed methods significantly improve the student autoencoder's performance while reducing the number of floating point operations and the inference time to 3.05%-5.28% and 13.80%-14.76% of the teacher network, respectively. Furthermore, the encoder KD variant effectively enhances the student autoencoder's generalization capability across different scenarios, environments, and bandwidths.
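To make the autoencoder KD idea concrete, the sketch below illustrates one way a lightweight student autoencoder could be trained to mimic the reconstructed CSI of a frozen, pretrained teacher autoencoder, as described in the abstract. The network architecture, loss weighting (`alpha`), and CSI dimensions are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of autoencoder-level knowledge distillation for CSI feedback.
# All module names, dimensions, and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class SimpleCsiAutoencoder(nn.Module):
    """Toy CSI autoencoder: a dense encoder compresses the flattened CSI
    matrix into a short codeword; a dense decoder reconstructs the CSI."""
    def __init__(self, csi_dim: int = 2048, codeword_dim: int = 128, hidden: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(csi_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, codeword_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(codeword_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, csi_dim),
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(csi))

def kd_step(student: nn.Module, teacher: nn.Module, csi_batch: torch.Tensor,
            optimizer: torch.optim.Optimizer, alpha: float = 0.5) -> float:
    """One training step: the student's loss mixes the ordinary reconstruction
    error with the error against the teacher's reconstructed CSI."""
    teacher.eval()
    with torch.no_grad():
        teacher_recon = teacher(csi_batch)   # knowledge to be imitated
    student_recon = student(csi_batch)
    mse = nn.functional.mse_loss
    loss = (1 - alpha) * mse(student_recon, csi_batch) \
           + alpha * mse(student_recon, teacher_recon)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under the same assumptions, the encoder KD method described in the abstract would instead use the teacher encoder's codeword as the distillation target and update only the student encoder, which is what reduces the training overhead.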
