MedQA

25 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

michiyasunaga/qagnn NAACL 2021

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.

What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams

jind11/MedQA 28 Sep 2020

Open domain question answering (OpenQA) tasks have been recently attracting more and more attention from the natural language processing (NLP) community.

Variational Open-Domain Question Answering

VodLM/vod 23 Sep 2022

Retrieval-augmented models have proven to be effective in natural language processing tasks, yet there remains a lack of research on their optimization using variational inference.

Clinical Camel: An Open Expert-Level Medical Language Model with Dialogue-Based Knowledge Encoding

bowang-lab/clinical-camel 19 May 2023

We present Clinical Camel, an open large language model (LLM) explicitly tailored for clinical research.

Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine

microsoft/promptbase 28 Nov 2023

We find that prompting innovation can unlock deeper specialist capabilities and show that GPT-4 easily tops prior leading results for medical benchmarks.

MediQ: Question-Asking LLMs and a Benchmark for Reliable Interactive Clinical Reasoning

stellalisy/mediq 3 Jun 2024

In this paper, we propose to change the static paradigm to an interactive one: we develop systems that proactively ask questions to gather more information and respond reliably, and we introduce a benchmark, MediQ, to evaluate the question-asking ability of LLMs.

Kformer: Knowledge Injection in Transformer Feed-Forward Layers

zjunlp/Kformer 15 Jan 2022

In this work, we propose a simple model, Kformer, which takes advantage of the knowledge stored in PTMs and external knowledge via knowledge injection in Transformer FFN layers.
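The mechanism described here treats the Transformer FFN as a key-value memory and appends projected knowledge embeddings as extra keys and values. A minimal NumPy sketch of that idea (shapes and names are illustrative, not the paper's actual code):

```python
import numpy as np

def ffn_with_knowledge(x, w1, w2, k_keys, k_values):
    """Transformer FFN plus Kformer-style knowledge injection (sketch).

    The FFN is viewed as key-value memory: relu(x @ W1) @ W2.
    Projected knowledge embeddings act as additional keys/values,
    so injected facts contribute additively to the FFN output.
    """
    hidden = np.maximum(x @ w1, 0.0)        # standard FFN "keys" activation
    out = hidden @ w2                        # standard FFN "values" readout
    k_act = np.maximum(x @ k_keys.T, 0.0)    # activation over knowledge keys
    return out + k_act @ k_values            # add knowledge values

# Illustrative dimensions (not from the paper)
rng = np.random.default_rng(0)
d_model, d_ff, n_k = 8, 16, 4
x = rng.normal(size=d_model)
w1 = rng.normal(size=(d_model, d_ff))
w2 = rng.normal(size=(d_ff, d_model))
k_keys = rng.normal(size=(n_k, d_model))    # knowledge embeddings (keys)
k_values = rng.normal(size=(n_k, d_model))  # knowledge embeddings (values)

y = ffn_with_knowledge(x, w1, w2, k_keys, k_values)
print(y.shape)  # (8,)
```

With all knowledge keys zeroed out, the function reduces to the plain FFN, which is the sense in which the injection is a purely additive modification of the layer.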

GreaseLM: Graph REASoning Enhanced Language Models for Question Answering

snap-stanford/greaselm 21 Jan 2022

Answering complex questions about textual narratives requires reasoning over both stated context and the world knowledge that underlies it.

Can large language models reason about medical questions?

vlievin/medical-reasoning 17 Jul 2022

Although large language models (LLMs) often produce impressive outputs, it remains unclear how they perform in real-world scenarios requiring strong reasoning skills and expert domain knowledge.

Relation-Aware Language-Graph Transformer for Question Answering

mlvlab/qat 2 Dec 2022

To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner.