Search Results for author: Usma Niyaz

Found 2 papers, 0 papers with code

Leveraging Different Learning Styles for Improved Knowledge Distillation

no code implementations • 6 Dec 2022 • Usma Niyaz, Deepti R. Bathula

Unlike conventional techniques that share the same type of knowledge with all networks, we propose to train individual networks with different forms of information to enhance the learning process.

Knowledge Distillation • Model Compression
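The abstract above contrasts sharing one type of knowledge with all networks against giving each network a different form of information. As a rough illustration of that idea only (not the paper's actual method), the sketch below assumes the two "learning styles" are response-based knowledge (softened logits) and feature-based knowledge (intermediate activations), with each peer trained on a different one; the function names, loss choices, and temperature are hypothetical.

```python
import torch.nn.functional as F

def response_loss(peer_logits, teacher_logits, T=4.0):
    # Response-based knowledge: match the teacher's temperature-softened
    # class distribution via KL divergence.
    return F.kl_div(
        F.log_softmax(peer_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def feature_loss(peer_features, teacher_features):
    # Feature-based knowledge: match an intermediate representation.
    return F.mse_loss(peer_features, teacher_features.detach())

# Hypothetical usage: one peer network is trained only with response_loss,
# another only with feature_loss, so each receives a different form of
# knowledge rather than all networks sharing the same type.
```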

Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression

no code implementations • 21 Oct 2021 • Usma Niyaz, Deepti R. Bathula

Knowledge distillation (KD) is an effective model compression technique where a compact student network is taught to mimic the behavior of a complex and highly trained teacher network.

Knowledge Distillation • Model Compression • +3
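The abstract above describes knowledge distillation as training a compact student to mimic a complex, highly trained teacher. Below is a minimal sketch of the standard distillation objective that description refers to, combining cross-entropy on the labels with a temperature-softened KL term toward the teacher's outputs; the temperature T and weight alpha are illustrative defaults, not values from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Supervised term: standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Distillation term: KL divergence between temperature-softened student
    # and teacher distributions, scaled by T^2 to keep gradient magnitudes
    # comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```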
