The MEDIQA 2021 shared tasks at the BioNLP 2021 workshop addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings.
Our experiments showed that training deep learning models on real-world medical claims greatly improves performance compared to models trained on synthetic and open-domain claims.
The growth of online consumer health questions has created a need for reliable and accurate question answering systems.
In this paper, we study the task of abstractive summarization for real-world consumer health questions.
Visual Question Generation (VQG), the task of generating a question based on image contents, is an increasingly important area that combines natural language processing and computer vision.
This dataset can be used to evaluate single or multi-document summaries generated by algorithms using extractive or abstractive approaches.
MEDIQA 2019 includes three tasks: Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and Question Answering (QA) in the medical domain.
One of the challenges in large-scale information retrieval (IR) is to develop fine-grained and domain-specific methods to answer natural language questions.
Evaluated on the cQA-B-2016 test data, our RQE system outperformed the best system of the 2016 challenge on all measures, with 77.47 MAP and 80.57 Accuracy.
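The MAP figure above is mean average precision over ranked retrieval results. A minimal sketch of how it is computed, assuming binary relevance labels (this is an illustration, not the official evaluation script):

```python
def average_precision(rels):
    """Average precision for one ranked list of binary relevance labels (1 = relevant).

    Precision is taken at each rank where a relevant item appears,
    then averaged over the number of relevant items retrieved.
    """
    hits, precisions = 0, []
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    """MAP: the mean of per-query average precision scores."""
    return sum(average_precision(r) for r in ranked_lists) / len(ranked_lists)
```

For example, a ranking `[1, 0, 1]` yields AP = (1/1 + 2/3) / 2 ≈ 0.833, and MAP averages such scores across all test queries.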
Readers usually rely on abstracts to identify relevant medical information from scientific articles.