MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine

6 Aug 2024 · Yunfei Xie, Ce Zhou, Lang Gao, Juncheng Wu, Xianhang Li, Hong-Yu Zhou, Sheng Liu, Lei Xing, James Zou, Cihang Xie, Yuyin Zhou

This paper introduces MedTrinity-25M, a comprehensive, large-scale multimodal dataset for medicine, covering over 25 million images across 10 modalities, with multigranular annotations for more than 65 diseases. These enriched annotations encompass both global textual information, such as disease/lesion type, modality, region-specific descriptions, and inter-regional relationships, and detailed local annotations for regions of interest (ROIs), including bounding boxes and segmentation masks. Unlike existing approaches, which are limited by the availability of image-text pairs, we have developed the first automated pipeline that scales up multimodal data by generating multigranular visual and textual annotations (in the form of image-ROI-description triplets) without the need for any paired text descriptions. Specifically, data from over 90 different sources have been collected, preprocessed, and grounded using domain-specific expert models to identify ROIs related to abnormal regions. We then build a comprehensive knowledge base and prompt multimodal large language models to perform retrieval-augmented generation with the identified ROIs as guidance, resulting in multigranular textual descriptions. Compared to existing datasets, MedTrinity-25M provides the most enriched annotations, supporting a comprehensive range of multimodal tasks such as captioning and report generation, as well as vision-centric tasks like classification and segmentation. Pretrained on MedTrinity-25M, our model achieves state-of-the-art performance on VQA-RAD and PathVQA, surpassing both multimodal large language models and other representative SoTA approaches. This dataset can also be used to support large-scale pre-training of multimodal medical AI models, contributing to the development of future foundation models in the medical domain.
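The abstract describes a three-stage annotation pipeline: expert-model grounding to find ROIs, retrieval from a medical knowledge base, and ROI-guided generation with a multimodal LLM, producing image-ROI-description triplets from unpaired images. The Python sketch below only illustrates how such a pipeline could be organized; the data classes and the functions `ground_image`, `retrieve_knowledge`, and `describe_with_mllm` are hypothetical placeholders, not the authors' released code.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class ROI:
    """A region of interest: bounding box (x, y, w, h) plus an optional mask."""
    bbox: Tuple[int, int, int, int]
    mask_path: Optional[str] = None
    label: str = "abnormal region"


@dataclass
class Triplet:
    """An image-ROI-description triplet, the basic unit described in the abstract."""
    image_path: str
    rois: List[ROI]
    global_description: str = ""  # e.g. modality, disease type, inter-regional relations
    local_descriptions: List[str] = field(default_factory=list)  # per-ROI text


def ground_image(image_path: str) -> List[ROI]:
    """Hypothetical wrapper around a domain-specific expert model
    (detector/segmenter) that proposes ROIs for abnormal regions."""
    raise NotImplementedError


def retrieve_knowledge(modality: str, rois: List[ROI]) -> str:
    """Hypothetical lookup into the medical knowledge base that supplies
    context for retrieval-augmented generation."""
    raise NotImplementedError


def describe_with_mllm(image_path: str, rois: List[ROI], context: str) -> Triplet:
    """Hypothetical multimodal-LLM call, prompted with the ROIs and retrieved
    context, returning global and ROI-level descriptions."""
    raise NotImplementedError


def annotate(image_path: str, modality: str) -> Triplet:
    """End-to-end annotation of one image, with no paired text description required."""
    rois = ground_image(image_path)               # 1. expert-model grounding
    context = retrieve_knowledge(modality, rois)  # 2. knowledge-base retrieval
    return describe_with_mllm(image_path, rois, context)  # 3. ROI-guided RAG
```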


Datasets


Introduced in the Paper:

MedTrinity-25M

Used in the Paper:

VQA-RAD, PathVQA, SLAKE, SLAKE-English

Results from the Paper


 Ranked #1 on Medical Visual Question Answering on SLAKE-English (using extra training data)

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Medical Visual Question Answering | PathVQA | LLaVA-Med++ | Free-form Accuracy | 66.5 | #1
Medical Visual Question Answering | PathVQA | LLaVA-Med++ | Yes/No Accuracy | 99.0 | #1
Medical Visual Question Answering | PathVQA | LLaVA-Med++ | Overall Accuracy | 82.75 | #1
Medical Visual Question Answering | SLAKE-English | LLaVA-Med++ | Overall Accuracy | 87.8 | #1
Medical Visual Question Answering | SLAKE-English | LLaVA-Med++ | Close-ended Accuracy | 89.3 | #4
Medical Visual Question Answering | SLAKE-English | LLaVA-Med++ | Open-ended Accuracy | 86.2 | #1
Medical Visual Question Answering | VQA-RAD | LLaVA-Med++ | Close-ended Accuracy | 86.0 | #3
Medical Visual Question Answering | VQA-RAD | LLaVA-Med++ | Open-ended Accuracy | 77.1 | #1
Medical Visual Question Answering | VQA-RAD | LLaVA-Med++ | Overall Accuracy | 81.5 | #3
