Search Results for author: Usma Niyaz

Found 3 papers, 0 papers with code

Augmenting Knowledge Distillation With Peer-To-Peer Mutual Learning For Model Compression

no code implementations · 21 Oct 2021 · Usma Niyaz, Deepti R. Bathula

Knowledge distillation (KD) is an effective model compression technique where a compact student network is taught to mimic the behavior of a complex and highly trained teacher network.

Knowledge Distillation · Model Compression · +3
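
For context on the technique this paper builds on: below is a minimal sketch of a standard knowledge-distillation loss, in which a student's softened predictions are pulled toward the teacher's while a cross-entropy term keeps them anchored to the ground-truth labels. This is an illustrative PyTorch example, not the exact formulation used in the paper; the function name and the `temperature` and `alpha` hyperparameters are assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Illustrative KD loss: soft-target KL term plus hard-label cross-entropy."""
    # Soften both distributions with the same temperature before comparing them.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # Scale the KL term by T^2 so its gradient magnitude matches the CE term.
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term
```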

Leveraging Different Learning Styles for Improved Knowledge Distillation in Biomedical Imaging

no code implementations · 6 Dec 2022 · Usma Niyaz, Abhishek Singh Sambyal, Deepti R. Bathula

These experimental results demonstrate that knowledge diversification in a combined KD and ML framework outperforms conventional KD or ML techniques (with a similar network configuration) that use only predictions, yielding an average improvement of 2%.

Knowledge Distillation · Model Compression
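
To illustrate the combined KD and mutual-learning setup the abstract refers to, the sketch below has each peer student learn from the ground-truth labels, distill from a shared teacher, and additionally match the (detached) predictions of every other peer. This is a hedged simplification of generic KD + ML training, not the paper's knowledge-diversification scheme; the function name and hyperparameters are illustrative.

```python
import torch.nn.functional as F

def kd_ml_losses(peer_logits, teacher_logits, labels, temperature=4.0):
    """Illustrative per-peer losses combining supervision, teacher KD, and mutual learning."""
    losses = []
    for i, logits_i in enumerate(peer_logits):
        # Hard-label supervision for this peer.
        loss = F.cross_entropy(logits_i, labels)
        # Distillation from the shared teacher's softened predictions.
        loss += F.kl_div(
            F.log_softmax(logits_i / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        # Mutual learning: match every other peer's detached prediction.
        for j, logits_j in enumerate(peer_logits):
            if i != j:
                loss += F.kl_div(
                    F.log_softmax(logits_i, dim=1),
                    F.softmax(logits_j.detach(), dim=1),
                    reduction="batchmean",
                )
        losses.append(loss)
    return losses
```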

Understanding Calibration of Deep Neural Networks for Medical Image Classification

no code implementations · 22 Sep 2023 · Abhishek Singh Sambyal, Usma Niyaz, Narayanan C. Krishnan, Deepti R. Bathula

We considered fully supervised training, which is the prevailing approach in the community, as well as a rotation-based self-supervised method with and without transfer learning, across various datasets and architecture sizes.

Image Classification · Medical Image Classification · +2
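
As background for the calibration study above: a common way to quantify miscalibration is the expected calibration error (ECE), which bins predictions by confidence and compares each bin's average confidence with its empirical accuracy. The sketch below is a generic ECE implementation and is not taken from the paper; the function name and the choice of 15 bins are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Illustrative ECE: weighted gap between confidence and accuracy per confidence bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.sum() == 0:
            continue
        acc = (predictions[in_bin] == labels[in_bin]).mean()
        conf = confidences[in_bin].mean()
        # Weight each bin's |accuracy - confidence| gap by its fraction of samples.
        ece += in_bin.mean() * abs(acc - conf)
    return ece
```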
