Search Results for author: Jodie Avery

Found 4 papers, 0 papers with code

Learnable Cross-modal Knowledge Distillation for Multi-modal Learning with Missing Modality

no code implementations 2 Oct 2023 Hu Wang, Yuanhong Chen, Congbo Ma, Jodie Avery, Louise Hull, Gustavo Carneiro

Then, cross-modal knowledge distillation is performed between teacher and student modalities for each task to push the model parameters to a point that is beneficial for all tasks.

Knowledge Distillation
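
A minimal sketch of the per-task cross-modal distillation idea described in the abstract: a teacher modality's softened predictions supervise a student modality via a KL term, summed over tasks. All function and variable names here are illustrative assumptions, not the authors' implementation.

```python
# Sketch of per-task cross-modal knowledge distillation (KL on softened logits).
# Names and hyperparameters are illustrative, not the paper's code.
import torch
import torch.nn.functional as F

def cross_modal_kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Distil one (teacher) modality's predictions into another (student) modality."""
    t = F.softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as in standard distillation.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

def multi_task_kd_loss(teacher_logits_per_task, student_logits_per_task):
    """Sum the distillation loss over tasks, pushing the shared parameters
    toward a point that benefits all tasks."""
    return sum(
        cross_modal_kd_loss(t, s)
        for t, s in zip(teacher_logits_per_task, student_logits_per_task)
    )
```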

Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling

no code implementations CVPR 2023 Hu Wang, Yuanhong Chen, Congbo Ma, Jodie Avery, Louise Hull, Gustavo Carneiro

This is achieved from a strategy that relies on auxiliary tasks based on distribution alignment and domain classification, in addition to a residual feature fusion procedure.

Classification, Domain Classification +4
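
A minimal sketch of the shared-specific feature modelling strategy named in the abstract: each modality yields shared and specific features, the auxiliary losses (distribution alignment and domain classification) act on the shared part, and fusion is residual. Module shapes, loss choices, and names are assumptions, not the paper's code.

```python
# Sketch of shared-specific feature modelling with auxiliary losses and
# residual feature fusion. All names and loss choices are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSpecificBranch(nn.Module):
    def __init__(self, in_dim, feat_dim):
        super().__init__()
        self.shared = nn.Linear(in_dim, feat_dim)    # modality-shared features
        self.specific = nn.Linear(in_dim, feat_dim)  # modality-specific features

    def forward(self, x):
        return self.shared(x), self.specific(x)

def auxiliary_losses(shared_a, shared_b, domain_logits, domain_labels):
    # Distribution alignment: pull the shared features of two modalities
    # toward each other (MSE used here as a simple stand-in).
    align = F.mse_loss(shared_a, shared_b)
    # Domain classification: an auxiliary head predicts which modality
    # a feature vector came from.
    domain = F.cross_entropy(domain_logits, domain_labels)
    return align, domain

def residual_fusion(shared, specific):
    # Residual feature fusion: specific features are added back onto the
    # shared representation.
    return shared + specific
```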

Distilling Missing Modality Knowledge from Ultrasound for Endometriosis Diagnosis with Magnetic Resonance Images

no code implementations 5 Jul 2023 Yuan Zhang, Hu Wang, David Butler, Minh-Son To, Jodie Avery, M Louise Hull, Gustavo Carneiro

Next, we distill the knowledge from the teacher TVUS POD obliteration detector to train the student MRI model by minimizing a regression loss that approximates the output of the student to the teacher using unpaired TVUS and MRI data.

Knowledge Distillation
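
A minimal sketch of the unpaired distillation step described in the abstract: a regression loss pushes the MRI student's POD-obliteration predictions toward the frozen TVUS teacher's. Because the scans are unpaired, this sketch matches batch-level prediction statistics; the paper's exact formulation may differ, and all names are illustrative.

```python
# Sketch of teacher-to-student distillation with unpaired TVUS and MRI data.
# Model names and the batch-statistic matching are assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def teacher_predictions(teacher_model, tvus_batch):
    """POD-obliteration probabilities from the frozen TVUS teacher."""
    return torch.sigmoid(teacher_model(tvus_batch))

def unpaired_distillation_loss(student_model, mri_batch, teacher_probs):
    """Regression (MSE) between batch-averaged teacher and student outputs."""
    student_probs = torch.sigmoid(student_model(mri_batch))
    return F.mse_loss(student_probs.mean(dim=0), teacher_probs.mean(dim=0))
```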
