However, because deep learning models require large amounts of training data, applying deep learning to biomedical text mining often fails due to the scarcity of annotated data in biomedical fields.
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
Deep neural network models have recently achieved state-of-the-art performance on a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017).
We also investigate the effects of a small amount of additional pretraining on PubMed content, and of combining FLAIR and ELMo models.
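One common way to combine two contextual embedders such as FLAIR and ELMo is to concatenate their per-token vectors before feeding them to a downstream tagger. The sketch below illustrates this with plain NumPy arrays; the dimensionalities (2048 for FLAIR, 1024 for ELMo) and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def combine_embeddings(flair_vec: np.ndarray, elmo_vec: np.ndarray) -> np.ndarray:
    """Concatenate per-token vectors from two contextual embedders.

    The combined vector's dimensionality is the sum of the two inputs',
    so a downstream sequence tagger sees both representations at once.
    """
    return np.concatenate([flair_vec, elmo_vec], axis=-1)

# Hypothetical per-token vectors (dimensions chosen for illustration only).
flair_vec = np.zeros(2048)   # stand-in for a FLAIR character-LM embedding
elmo_vec = np.ones(1024)     # stand-in for an ELMo embedding
combined = combine_embeddings(flair_vec, elmo_vec)
print(combined.shape)  # (3072,)
```

In practice, libraries such as `flair` offer stacked-embedding utilities that perform this concatenation for whole sentences, but the principle is the same per token.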
Functioning is gaining recognition as an important indicator of global health, but remains under-studied in medical natural language processing research.