no code implementations • 6 Dec 2022 • Usma Niyaz, Deepti R. Bathula
Unlike conventional techniques that share the same type of knowledge with all networks, we propose to train individual networks with different forms of information to enhance the learning process.
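A minimal sketch of the "different forms of information" idea, assuming two peer networks where one mimics the other's softened logits (response-based knowledge) while the other matches an intermediate feature map (feature-based knowledge); the function names and the particular split are illustrative, not the paper's exact scheme:

```python
import torch.nn.functional as F

# Peer A receives response-based knowledge: match peer B's softened logits.
def response_loss(logits_a, logits_b, T=4.0):
    return F.kl_div(F.log_softmax(logits_a / T, dim=1),
                    F.softmax(logits_b.detach() / T, dim=1),
                    reduction="batchmean") * T * T

# Peer B receives feature-based knowledge: match peer A's intermediate features.
def feature_loss(feat_b, feat_a):
    return F.mse_loss(feat_b, feat_a.detach())
```

Each peer would add its transfer loss to its own supervised objective, so the two networks train jointly while learning from different forms of information.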
no code implementations • 19 Aug 2022 • Ranjana Roy Chowdhury, Deepti R. Bathula
Further, the influence factor of a sample is measured using Maximum Mean Discrepancy (MMD), which quantifies the shift in the data distribution when that sample is removed.
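One plausible reading of this leave-one-out influence measure, sketched with an RBF-kernel MMD; the kernel choice, bandwidth, and function names are assumptions for illustration:

```python
import torch

def rbf_mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared MMD between sample sets X and Y (RBF kernel).
    def k(A, B):
        return torch.exp(-torch.cdist(A, B).pow(2) / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def influence_factors(X, sigma=1.0):
    # Influence of sample i = distribution shift (MMD) when i is left out.
    scores = []
    for i in range(X.shape[0]):
        X_wo_i = torch.cat([X[:i], X[i + 1:]])
        scores.append(rbf_mmd2(X, X_wo_i, sigma).item())
    return torch.tensor(scores)
```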
no code implementations • 1 Nov 2021 • Ranjana Roy Chowdhury, Deepti R. Bathula
Conventional PN attributes equal importance to all samples and generates prototypes by simply averaging the support sample embeddings belonging to each class.
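For reference, the conventional equal-weight prototype the excerpt describes reduces to a per-class mean of the support embeddings; a minimal sketch (function names are illustrative):

```python
import torch

def class_prototypes(support_emb, support_labels, n_classes):
    # Conventional PN: every support sample contributes equally, so the
    # prototype is simply the mean embedding of each class.
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def classify(query_emb, prototypes):
    # Assign each query to the nearest prototype by Euclidean distance.
    return torch.cdist(query_emb, prototypes).argmin(dim=1)
```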
no code implementations • 21 Oct 2021 • Abhishek Singh Sambyal, Narayanan C. Krishnan, Deepti R. Bathula
The proposed method was evaluated on a benchmark medical imaging dataset with image reconstruction as the self-supervised task and segmentation as the image analysis task.
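A toy sketch of such a pairing, assuming a shared encoder feeding a reconstruction head (the self-supervised task) and a segmentation head (the image analysis task); this architecture is illustrative, not the paper's:

```python
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    # Shared features feed both an image-reconstruction branch and a
    # segmentation branch, so the self-supervised task shapes the encoder.
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.recon_head = nn.Conv2d(64, in_ch, 1)    # reconstruct the input
        self.seg_head = nn.Conv2d(64, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        z = self.encoder(x)
        return self.recon_head(z), self.seg_head(z)
```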
no code implementations • 21 Oct 2021 • Usma Niyaz, Deepti R. Bathula
Knowledge distillation (KD) is an effective model compression technique where a compact student network is taught to mimic the behavior of a complex and highly trained teacher network.
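A minimal sketch of the classical response-based KD objective this describes; the temperature and mixing weight are illustrative defaults, not values from the paper:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soften both logit sets with temperature T, match them with KL
    # divergence (scaled by T^2), and mix in the usual cross-entropy.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```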
no code implementations • 4 Aug 2021 • Apoorva Sikka, Skand, Jitender Singh Virk, Deepti R. Bathula
Medical imaging datasets are inherently high dimensional, with large variability and small sample sizes that limit the effectiveness of deep learning algorithms.
1 code implementation • 8 Jul 2021 • Subhranil Bagchi, Deepti R. Bathula
Different categories of visual stimuli activate different responses in the human brain.