Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets

WS 2019 · Yifan Peng, Shankai Yan, Zhiyong Lu

Datasets

BC5CDR (chemical and disease), BIOSSES, ChemProt, DDI extraction 2013 corpus, HOC (Hallmarks of Cancer), MedNLI, MedSTS, ShARe/CLEF eHealth corpus

Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Named Entity Recognition | BC5CDR-chemical | NCBI_BERT(base) (P) | F1 | 93.5 | #5
Named Entity Recognition | BC5CDR-disease | NCBI_BERT(base) (P) | F1 | 86.6 | #4
Semantic Similarity | BIOSSES | NCBI_BERT(base) (P+M) | Pearson Correlation | 0.916 | #3
Relation Extraction | ChemProt | NCBI_BERT(large) (P) | F1 | 74.4 | #8
Medical Relation Extraction | DDI extraction 2013 corpus | NCBI_BERT(large) (P) | F1 | 79.9 | #2
Document Classification | HOC | NCBI_BERT(large) (P) | F1 | 87.3 | #2
Natural Language Inference | MedNLI | NCBI_BERT(base) (P+M) | F1 | 84.0 | #1
Semantic Similarity | MedSTS | NCBI_BERT(base) (P+M) | Pearson Correlation | 0.848 | #1
Medical Named Entity Recognition | ShARe/CLEF eHealth corpus | NCBI_BERT(base) (P+M) | F1 | 79.2 | #2
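The table reports two kinds of scores: F1 for the classification, extraction, and NER tasks, and Pearson correlation for the semantic-similarity tasks. As a minimal illustration of how such numbers are computed (this is not the paper's official evaluation script, and the function names here are our own), the metrics can be sketched as:

```python
# Illustrative sketch of the two metric families in the results table.
# f1: from true-positive / false-positive / false-negative counts.
# pearson: correlation between predicted and gold similarity scores.

def f1(tp, fp, fn):
    """F1 score, the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Note the scale difference in the table: F1 is conventionally shown as a percentage (e.g. 93.5), while Pearson correlation is shown on its natural [-1, 1] scale (e.g. 0.916).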

Methods


BERT, ELMo