Saama Research at MEDIQA 2019: Pre-trained BioBERT with Attention Visualisation for Medical Natural Language Inference

WS 2019  ·  Kamal raj Kanakarajan, Suriyadeepan Ramamoorthy, Vaidheeswaran Archana, Soham Chatterjee, Malaikannan Sankarasubbu

Natural language inference (NLI) is the task of identifying the relationship between two sentences as entailment, contradiction, or neutrality. MedNLI is a biomedical flavour of NLI for the clinical domain. This paper explores the use of Bidirectional Encoder Representations from Transformers (BERT) for solving MedNLI. The proposed model, BERT pre-trained on PMC and PubMed and fine-tuned on MIMIC-III v1.4, achieves state-of-the-art results on MedNLI (83.45%) and an accuracy of 78.5% in the MEDIQA challenge. The authors present an analysis of the attention patterns that emerged as a result of training BERT on MedNLI using the visualization tool bertviz.
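
The modelling approach described is standard BERT sentence-pair classification with three output labels. Below is a minimal sketch of that setup using the Hugging Face transformers library; the public dmis-lab/biobert-base-cased-v1.1 checkpoint stands in for the paper's PMC/PubMed- and MIMIC-III-trained weights (which are not publicly released), and the premise/hypothesis strings are invented examples rather than MedNLI data, which requires PhysioNet access.

    # Sketch: BioBERT as a 3-way sentence-pair classifier
    # (entailment / contradiction / neutral), per the MedNLI setup.
    # The public BioBERT checkpoint below is an assumption standing in
    # for the paper's MIMIC-III-adapted weights.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL = "dmis-lab/biobert-base-cased-v1.1"  # public BioBERT (PubMed + PMC)
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    # The 3-label classification head is freshly initialised here and would
    # still need fine-tuning on MedNLI before its predictions are meaningful.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

    # BERT encodes the pair as: [CLS] premise [SEP] hypothesis [SEP]
    premise = "The patient denies chest pain."            # illustrative example
    hypothesis = "The patient has no cardiac symptoms."   # illustrative example
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits
    # Label order is a training-time choice; MedNLI uses these three classes.
    labels = ["entailment", "contradiction", "neutral"]
    print(labels[logits.argmax(dim=-1).item()])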

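The attention analysis relies on the open-source bertviz tool, whose head_view renders per-layer, per-head attention weights in a Jupyter notebook. A sketch of that workflow follows, again assuming the public BioBERT checkpoint rather than the authors' fine-tuned weights; the sentence pair is the same illustrative example as above.

    # Sketch: inspecting attention patterns with bertviz (runs in a notebook).
    from bertviz import head_view
    from transformers import AutoModel, AutoTokenizer

    MODEL = "dmis-lab/biobert-base-cased-v1.1"  # assumed stand-in checkpoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    # output_attentions=True makes the model return one attention tensor
    # per layer, shaped (batch, heads, seq_len, seq_len).
    model = AutoModel.from_pretrained(MODEL, output_attentions=True)

    premise = "The patient denies chest pain."            # illustrative example
    hypothesis = "The patient has no cardiac symptoms."   # illustrative example
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    outputs = model(**inputs)

    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # First token with segment id 1 marks where the hypothesis begins,
    # which lets bertviz colour the two sentences separately.
    sentence_b_start = inputs["token_type_ids"][0].tolist().index(1)
    head_view(outputs.attentions, tokens, sentence_b_start)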

Results from the Paper

Task                         Dataset   Model           Metric     Value   Global Rank
Natural Language Inference   MedNLI    BioBERT-MIMIC   Accuracy   83.45   #3