Medical Named Entity Recognition
13 papers with code • 2 benchmarks • 6 datasets
Most implemented papers
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows.
SciBERT: A Pretrained Language Model for Scientific Text
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
Few-shot Learning for Named Entity Recognition in Medical Text
Deep neural network models have recently achieved state-of-the-art performance in a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017).
Med7: a transferable clinical natural language processing model for electronic health records
In this work we introduce a named-entity recognition model for clinical natural language processing.
Embedding Transfer for Low-Resource Medical Named Entity Recognition: A Case Study on Patient Mobility
Functioning is gaining recognition as an important indicator of global health, but remains under-studied in medical natural language processing research.
A Neural Multi-Task Learning Framework to Jointly Model Medical Named Entity Recognition and Normalization
State-of-the-art studies have demonstrated the superiority of joint modelling over pipelined approaches for medical named entity recognition and normalization, owing to the mutual benefits between the two processes.
BioFLAIR: Pretrained Pooled Contextualized Embeddings for Biomedical Sequence Labeling Tasks
We also investigate the effects of a small amount of additional pretraining on PubMed content, and of combining Flair and ELMo models.
Biomedical Named Entity Recognition at Scale
Named entity recognition (NER) is a widely applicable natural language processing task and a building block of question answering, topic modeling, and information retrieval.
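As a concrete illustration of what these models produce: most medical NER systems emit per-token BIO labels (B- begins an entity, I- continues it, O is outside), which are then decoded into entity spans. The sketch below is a minimal, framework-free decoder under that common assumption; the `PROBLEM` label and the example sentence are illustrative, not from any of the listed papers.

```python
def bio_to_spans(tokens, tags):
    """Convert BIO-tagged tokens into (entity_type, start, end) spans.

    `end` is exclusive. B-TYPE begins an entity, I-TYPE continues an
    entity of the same type, anything else closes the open entity.
    """
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any entity still open
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            continue                       # entity keeps going
        else:                              # "O", or an I- of another type
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:                  # entity running to end of sentence
        spans.append((etype, start, len(tags)))
    return spans


# Hypothetical clinical example with an invented PROBLEM label set
tokens = ["Patient", "denies", "chest", "pain", "or",
          "shortness", "of", "breath"]
tags = ["O", "O", "B-PROBLEM", "I-PROBLEM", "O",
        "B-PROBLEM", "I-PROBLEM", "I-PROBLEM"]
for etype, s, e in bio_to_spans(tokens, tags):
    print(etype, "->", " ".join(tokens[s:e]))
# prints:
# PROBLEM -> chest pain
# PROBLEM -> shortness of breath
```

The same decoding step applies regardless of which encoder (BioBERT, SciBERT, Flair embeddings, etc.) produced the tag sequence; only the quality of the tags differs.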
ELECTRAMed: a new pre-trained language representation model for biomedical NLP
The overwhelming amount of biomedical scientific texts calls for the development of effective language models able to tackle a wide range of biomedical natural language processing (NLP) tasks.