MedQA
25 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.
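A core step in QA-GNN is scoring KG nodes for relevance to the QA context before reasoning over the subgraph. A minimal sketch of that idea, using cosine similarity between a context embedding and node embeddings as a hypothetical stand-in for the paper's LM-based relevance scorer:

```python
import numpy as np

def relevance_scores(context_vec, node_vecs):
    """Score each KG node against the QA-context embedding.

    context_vec: (d,) embedding of the question + answer choice.
    node_vecs:   (n, d) embeddings of candidate KG nodes.
    Returns an (n,) array of cosine similarities; higher = more relevant.
    """
    c = context_vec / np.linalg.norm(context_vec)
    n = node_vecs / np.linalg.norm(node_vecs, axis=1, keepdims=True)
    return n @ c
```

High-scoring nodes would then be kept to form the working subgraph that the LM and GNN jointly reason over.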
What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams
Open domain question answering (OpenQA) tasks have been recently attracting more and more attention from the natural language processing (NLP) community.
Variational Open-Domain Question Answering
Retrieval-augmented models have proven to be effective in natural language processing tasks, yet there remains a lack of research on their optimization using variational inference.
Clinical Camel: An Open Expert-Level Medical Language Model with Dialogue-Based Knowledge Encoding
We present Clinical Camel, an open large language model (LLM) explicitly tailored for clinical research.
Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine
We find that prompting innovation can unlock deeper specialist capabilities and show that GPT-4 easily tops prior leading results for medical benchmarks.
MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning
In this paper, we propose to change the static paradigm to an interactive one: we develop systems that proactively ask questions to gather more information and respond reliably, and we introduce a benchmark, MediQ, to evaluate the question-asking ability of LLMs.
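The interactive paradigm can be sketched as a loop in which the model either asks a clarifying question or commits to an answer. The three callables below (`ask_fn`, `answer_fn`, `decide_fn`) are hypothetical stand-ins for the question-asking LLM, the patient simulator, and the final-answer step; they are not from the MediQ codebase:

```python
def interactive_diagnosis(initial_info, ask_fn, answer_fn, decide_fn, max_turns=5):
    """Sketch of an interactive clinical QA loop (assumed interfaces).

    ask_fn(facts)   -> a follow-up question, or None when confident enough
    answer_fn(q)    -> the patient system's reply to question q
    decide_fn(facts)-> the final answer given all gathered facts
    """
    facts = list(initial_info)
    for _ in range(max_turns):
        question = ask_fn(facts)           # model decides what to ask next
        if question is None:               # model is confident enough to answer
            break
        facts.append(answer_fn(question))  # gather the new information
    return decide_fn(facts)                # answer from the accumulated facts
```

The benchmark then measures how well the model's question-asking improves the reliability of its final answer.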
Kformer: Knowledge Injection in Transformer Feed-Forward Layers
In this work, we propose a simple model, Kformer, which takes advantage of the knowledge stored in PTMs and external knowledge via knowledge injection in Transformer FFN layers.
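Viewing the Transformer FFN as a key-value memory, knowledge injection in the Kformer style amounts to appending projected knowledge embeddings as extra keys and values of the FFN. A minimal NumPy sketch under that reading (matrix names and shapes are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def kformer_ffn(x, W1, W2, K_keys, K_vals):
    """FFN layer with injected external knowledge (sketch).

    x:      (b, d)  token representations
    W1:     (d, m)  first FFN projection ("keys")
    W2:     (m, d)  second FFN projection ("values")
    K_keys: (k, d)  projected knowledge embeddings acting as extra keys
    K_vals: (k, d)  projected knowledge embeddings acting as extra values
    """
    W1_aug = np.concatenate([W1, K_keys.T], axis=1)  # (d, m + k)
    W2_aug = np.concatenate([W2, K_vals], axis=0)    # (m + k, d)
    h = np.maximum(x @ W1_aug, 0.0)                  # ReLU activation
    return h @ W2_aug                                # (b, d), same shape as input
```

Because the knowledge enters through the same matrix multiplications as the ordinary FFN weights, the layer's output shape and the rest of the Transformer are unchanged.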
GreaseLM: Graph REASoning Enhanced Language Models for Question Answering
Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it.
Can large language models reason about medical questions?
Although large language models (LLMs) often produce impressive outputs, it remains unclear how they perform in real-world scenarios requiring strong reasoning skills and expert domain knowledge.
Relation-Aware Language-Graph Transformer for Question Answering
We propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner.