MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining

One of the biggest challenges hindering the use of many current NLP methods in clinical settings is the scarcity of publicly available datasets. In this work, we present MeDAL, a large medical text dataset curated for abbreviation disambiguation and designed for natural language understanding pre-training in the medical domain. We pre-trained several models of common architectures on this dataset and empirically showed that such pre-training leads to improved performance and faster convergence when fine-tuning on downstream medical tasks.
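To make the task concrete: abbreviation disambiguation means picking the intended expansion of an ambiguous abbreviation from its context. The sketch below is a toy illustration, not the paper's method; the candidate expansions, the bag-of-words heuristic, and all function names are hypothetical stand-ins for a trained model.

```python
# Toy sketch of the abbreviation-disambiguation task MeDAL targets:
# given tokens and the position of an ambiguous abbreviation, choose
# the expansion whose words best overlap the surrounding context.
# Candidates and scoring are illustrative only.

CANDIDATES = {"RA": ["rheumatoid arthritis", "right atrium", "room air"]}

def disambiguate(tokens, abbrev_index):
    """Pick the candidate expansion sharing the most words with the context.

    A trivial bag-of-words heuristic standing in for a learned model.
    """
    abbrev = tokens[abbrev_index]
    context = {t.lower() for i, t in enumerate(tokens) if i != abbrev_index}

    def overlap(expansion):
        return len(set(expansion.split()) & context)

    return max(CANDIDATES[abbrev], key=overlap)

sentence = "Patient with RA reports worsening arthritis pain".split()
print(disambiguate(sentence, 2))  # → rheumatoid arthritis
```

A real system would replace `overlap` with a neural scorer (e.g. an LSTM or ELECTRA encoder as in the paper), but the input/output contract is the same: context plus abbreviation in, one expansion out.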

PDF Abstract EMNLP (ClinicalNLP) 2020

Datasets


Introduced in the Paper:

MeDAL

Used in the Paper:

PubMed MIMIC-III ADAM

Results from the Paper


 Ranked #1 on Mortality Prediction on MIMIC-III (Accuracy metric)

Task                  Dataset    Model                     Metric    Value    Global Rank
Mortality Prediction  MIMIC-III  ELECTRA (pretrained)      Accuracy  0.8443   # 1
Mortality Prediction  MIMIC-III  ELECTRA (from scratch)    Accuracy  0.8325   # 2
Mortality Prediction  MIMIC-III  LSTM+SA (pretrained)      Accuracy  0.8298   # 3
Mortality Prediction  MIMIC-III  LSTM (pretrained)         Accuracy  0.828    # 4
Mortality Prediction  MIMIC-III  LSTM+SA (from scratch)    Accuracy  0.7996   # 5

Methods