Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows.
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive.
Deep neural network models have recently achieved state-of-the-art performance on a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017).
We also investigate the effects of a small amount of additional pretraining on PubMed content, and of combining FLAIR and ELMo models.
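One common way to combine contextual embedding models such as FLAIR and ELMo is to concatenate their per-token vectors before feeding them to a downstream tagger. The sketch below illustrates only this stacking idea with NumPy and stand-in embedders with made-up dimensions; the actual combination method and vector sizes used in the work above may differ.

```python
import numpy as np

def stack_embeddings(tokens, embedders):
    """Concatenate per-token vectors from several embedding models.

    `embedders` is a list of callables mapping a token list to an
    array of shape (len(tokens), dim_i); the result has shape
    (len(tokens), sum(dim_i)).
    """
    return np.concatenate([emb(tokens) for emb in embedders], axis=1)

# Hypothetical stand-ins for real contextual embedders; the
# dimensions here are illustrative, not taken from the paper.
flair_like = lambda toks: np.zeros((len(toks), 2048))
elmo_like = lambda toks: np.zeros((len(toks), 1024))

stacked = stack_embeddings(["EGFR", "mutation"], [flair_like, elmo_like])
print(stacked.shape)  # (2, 3072)
```

The downstream sequence tagger then sees a single, wider vector per token, so it can exploit complementary signals from both models without any change to its architecture.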
Functioning is gaining recognition as an important indicator of global health, but remains under-studied in medical natural language processing research.