Artificial Intelligence (AI), along with recent progress in biomedical language understanding, is gradually changing medical practice. With the development of biomedical language understanding benchmarks, AI applications are becoming widely used in the medical field. However, most benchmarks are limited to English, which makes it challenging to replicate many of the successes achieved in English for other languages. To facilitate research in this direction, we collect real-world biomedical data and present the first Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark: a collection of natural language understanding tasks including named entity recognition, information extraction, clinical diagnosis normalization, and single-sentence/sentence-pair classification, together with an associated online platform for model evaluation, comparison, and analysis. To establish evaluation on these tasks, we report empirical results with 11 current pre-trained Chinese language models, and the experiments show that state-of-the-art neural models still perform far below the human ceiling. Our benchmark is released at https://tianchi.aliyun.com/dataset/dataDetail?dataId=95414&lang=en-us.
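
As a point of reference, below is a minimal sketch, assuming the Hugging Face `transformers` and `datasets` libraries, of how one of the evaluated pre-trained Chinese checkpoints could be fine-tuned on a CBLUE single-sentence classification task such as KUAKE-QIC. The file names, the `query`/`label` field names, and the label count are hypothetical placeholders, not the paper's official pipeline; the actual data format is defined by the release on the Tianchi page above.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# One of the evaluated Chinese checkpoints (RoBERTa-wwm-ext).
model_name = "hfl/chinese-roberta-wwm-ext"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels is an assumption for illustration; set it to the number of
# intent classes defined in the KUAKE-QIC release.
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=11)

# Hypothetical JSON files converted from the CBLUE release; each record
# is assumed to carry a "query" text field and an integer "label" field.
data = load_dataset("json", data_files={"train": "KUAKE-QIC_train.json",
                                        "dev": "KUAKE-QIC_dev.json"})

def tokenize(batch):
    return tokenizer(batch["query"], truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qic-ckpt",
                           per_device_train_batch_size=32,
                           num_train_epochs=3),
    train_dataset=data["train"],
    eval_dataset=data["dev"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```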


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Medical Concept Normalization | CHIP-CDN | MacBERT-large | Micro F1 | 59.3 | #1 |
| Sentence Classification | CHIP-CTC | RoBERTa-large | Macro F1 | 70.9 | #1 |
| Semantic Similarity | CHIP-STS | MacBERT-large | Macro F1 | 85.6 | #1 |
| Named Entity Recognition (NER) | CMeEE | MacBERT-large | Micro F1 | 62.4 | #1 |
| Medical Relation Extraction | CMeIE | RoBERTa-wwm-ext-large | Micro F1 | 55.9 | #1 |
| Intent Classification | KUAKE-QIC | RoBERTa-wwm-ext-base | Accuracy | 85.5 | #1 |
| Natural Language Inference | KUAKE-QQR | BERT-base | Accuracy | 84.7 | #1 |
| Natural Language Inference | KUAKE-QTR | MacBERT-large | Accuracy | 62.9 | #1 |
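
The Micro F1 and Macro F1 columns above differ only in how per-class scores are pooled: micro averaging pools true/false positives across all classes (so frequent classes dominate), while macro averaging scores each class separately and takes the unweighted mean (so rare classes count equally). A minimal sketch with scikit-learn, using made-up labels rather than any CBLUE data:

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted class labels for a 3-class task.
y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 1, 1, 2, 1, 0, 0]

# Micro F1: one global count of TP/FP/FN over all classes.
micro = f1_score(y_true, y_pred, average="micro")

# Macro F1: per-class F1 scores, then an unweighted average.
macro = f1_score(y_true, y_pred, average="macro")

print(f"micro F1 = {micro:.3f}, macro F1 = {macro:.3f}")
```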
