Multiple Meta-model Quantifying for Medical Visual Question Answering

19 May 2021  ·  Tuong Do, Binh X. Nguyen, Erman Tjiputra, Minh Tran, Quang D. Tran, Anh Nguyen

Transfer learning is an important step for extracting meaningful features and overcoming data limitations in the medical Visual Question Answering (VQA) task. However, most existing medical VQA methods rely on external data for transfer learning, while the meta-data within the dataset itself is not fully utilized. In this paper, we present a new multiple meta-model quantifying method that effectively learns meta-annotations and leverages meaningful features for the medical VQA task. Our proposed method is designed to increase meta-data by auto-annotation, deal with noisy labels, and output meta-models that provide robust features for medical VQA tasks. Extensive experimental results on two public medical VQA datasets show that our approach achieves superior accuracy in comparison with other state-of-the-art methods, while not requiring external data to train the meta-models.
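To make the high-level idea concrete, below is a minimal sketch (not the authors' released code) of how features from several meta-models could be fused with a question embedding for answer classification, assuming PyTorch. The class names (MetaModelEncoder, MultiMetaVQA), layer sizes, and the use of a GRU question encoder are illustrative assumptions; in the paper, the image encoders would be meta-trained on auto-annotated meta-data rather than randomly initialized.

```python
# Minimal sketch (illustrative, not the authors' implementation): fuse
# features from multiple meta-model image encoders with a question
# embedding and predict an answer class. All dimensions are assumptions.
import torch
import torch.nn as nn


class MetaModelEncoder(nn.Module):
    """Stand-in image encoder; in MMQ each of these would be a meta-trained model."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, images):
        return self.backbone(images)


class MultiMetaVQA(nn.Module):
    """Concatenate features from several meta-models and a question encoding,
    then classify over a fixed answer vocabulary."""
    def __init__(self, n_meta_models=3, feat_dim=64, q_dim=128, n_answers=100):
        super().__init__()
        self.encoders = nn.ModuleList(
            MetaModelEncoder(feat_dim) for _ in range(n_meta_models)
        )
        self.question_rnn = nn.GRU(300, q_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(n_meta_models * feat_dim + q_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_answers),
        )

    def forward(self, images, question_embeddings):
        # Visual features from each meta-model, concatenated along the feature dim.
        visual = torch.cat([enc(images) for enc in self.encoders], dim=1)
        # Final GRU hidden state summarizes the question.
        _, q_hidden = self.question_rnn(question_embeddings)
        fused = torch.cat([visual, q_hidden.squeeze(0)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiMetaVQA()
    imgs = torch.randn(2, 3, 128, 128)      # batch of medical images
    questions = torch.randn(2, 12, 300)     # pre-embedded question tokens
    logits = model(imgs, questions)
    print(logits.shape)                     # torch.Size([2, 100])
```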


Datasets

PathVQA · VQA-RAD
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Medical Visual Question Answering | PathVQA | MMQ | Free-form Accuracy | 13.4 | #5 | |
| Medical Visual Question Answering | PathVQA | MMQ | Yes/No Accuracy | 84.0 | #5 | |
| Medical Visual Question Answering | PathVQA | MMQ | Overall Accuracy | 48.8 | #5 | |
| Medical Visual Question Answering | VQA-RAD | MMQ | Close-ended Accuracy | 75.8 | #12 | |
| Medical Visual Question Answering | VQA-RAD | MMQ | Open-ended Accuracy | 53.7 | #9 | |
| Medical Visual Question Answering | VQA-RAD | MMQ | Overall Accuracy | 67.0 | #10 | |

Methods


No methods listed for this paper.