Search Results for author: Yunsoo Kim

Found 9 papers, 3 papers with code

SLaVA-CXR: Small Language and Vision Assistant for Chest X-ray Report Automation

1 code implementation • 20 Sep 2024 • Jinge Wu, Yunsoo Kim, Daqian Shi, David Clifton, Fenglin Liu, Honghan Wu

Inspired by the success of large language models (LLMs), researchers are increasingly interested in developing medical-domain LLMs to assist clinicians.

Integrating Knowledge Retrieval and Large Language Models for Clinical Report Correction

no code implementations • 21 Jun 2024 • Jinge Wu, Zhaolong Wu, Ruizhe Li, Abul Hasan, Yunsoo Kim, Jason P. Y. Cheung, Teng Zhang, Honghan Wu

This study proposes an approach for error correction in radiology reports, leveraging large language models (LLMs) and retrieval-augmented generation (RAG) techniques.

RAG · Retrieval
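A minimal sketch of what retrieval-augmented correction of a report could look like, using a toy in-memory knowledge base, bag-of-words cosine retrieval, and a stubbed `llm_complete` call; the knowledge base, prompt wording, and function names are illustrative assumptions, not the paper's actual pipeline.

```python
# Toy RAG loop: retrieve reference facts, then ask an LLM to correct a report.
from collections import Counter
import math

KNOWLEDGE_BASE = [  # stand-in for a real clinical knowledge source
    "Pneumothorax appears as a visceral pleural line with absent lung markings.",
    "Cardiomegaly is a cardiothoracic ratio greater than 0.5 on a PA film.",
]

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(KNOWLEDGE_BASE, key=lambda d: cosine(q, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

def llm_complete(prompt: str) -> str:
    # Stub standing in for a real LLM endpoint; replace with an actual client.
    return "[corrected report would be generated here]"

def correct_report(report: str) -> str:
    context = "\n".join(retrieve(report))
    prompt = (
        "Using the reference facts below, correct any factual errors in the "
        f"radiology report.\n\nFacts:\n{context}\n\nReport:\n{report}\n\nCorrected report:"
    )
    return llm_complete(prompt)

print(correct_report("Cardiomegaly noted with a cardiothoracic ratio of 0.4."))
```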

Chain-of-Thought (CoT) prompting strategies for medical error detection and correction

no code implementations • 13 Jun 2024 • Zhaolong Wu, Abul Hasan, Jinge Wu, Yunsoo Kim, Jason P. Y. Cheung, Teng Zhang, Honghan Wu

We report results for three methods of few-shot In-Context Learning (ICL) augmented with Chain-of-Thought (CoT) and reason prompts using a large language model (LLM).

In-Context Learning · Language Modeling +2
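A minimal sketch of few-shot in-context learning with a chain-of-thought prompt for medical error detection; the exemplars and wording below are illustrative assumptions, not the prompts evaluated in the paper.

```python
# Few-shot CoT prompt: exemplars show the reasoning step before the verdict.
FEW_SHOT = """\
Sentence: The patient was given 5 g of aspirin daily.
Reasoning: Typical aspirin doses are measured in milligrams; 5 g daily is
implausibly high, so the dose is likely an error.
Verdict: ERROR (dose)

Sentence: Chest X-ray showed clear lung fields.
Reasoning: This is a plausible, internally consistent finding.
Verdict: CORRECT
"""

def build_cot_prompt(sentence: str) -> str:
    return (
        "Decide whether each clinical sentence contains a medical error. "
        "Think step by step before giving a verdict.\n\n"
        f"{FEW_SHOT}\nSentence: {sentence}\nReasoning:"
    )

print(build_cot_prompt("The fracture was treated with insulin."))
```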

MedExQA: Medical Question Answering Benchmark with Multiple Explanations

1 code implementation • 10 Jun 2024 • Yunsoo Kim, Jinge Wu, Yusuf Abdulle, Honghan Wu

This paper introduces MedExQA, a novel benchmark in medical question-answering, to evaluate large language models' (LLMs) understanding of medical knowledge through explanations.

Question Answering
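A hedged sketch of scoring a model on a benchmark item that carries multiple reference explanations, as MedExQA's design suggests: exact-match accuracy on the answer plus best token-overlap F1 against any reference explanation. The item fields and the overlap metric are illustrative assumptions, not the benchmark's official scoring.

```python
# Score one QA item: answer accuracy + best explanation overlap across references.
def token_f1(pred: str, ref: str) -> float:
    p, r = set(pred.lower().split()), set(ref.lower().split())
    common = len(p & r)
    if not common:
        return 0.0
    prec, rec = common / len(p), common / len(r)
    return 2 * prec * rec / (prec + rec)

item = {
    "question": "Which imaging modality best evaluates acute pneumothorax?",
    "answer": "chest x-ray",
    "explanations": [  # multiple references per question
        "An upright chest X-ray readily shows the pleural line of a pneumothorax.",
        "Chest radiography is the first-line test for suspected pneumothorax.",
    ],
}

model_answer = "chest x-ray"
model_explanation = "A chest X-ray is the first-line test for pneumothorax."

accuracy = float(model_answer.strip().lower() == item["answer"])
explanation_score = max(token_f1(model_explanation, e) for e in item["explanations"])
print(accuracy, round(explanation_score, 3))
```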

Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns

no code implementations • 3 Apr 2024 • Yunsoo Kim, Jinge Wu, Yusuf Abdulle, Yue Gao, Honghan Wu

This work proposes a novel approach to enhancing human-computer interaction in chest X-ray analysis: Vision-Language Models (VLMs) are informed by radiologists' attention, with eye gaze data incorporated alongside textual prompts.

Language Modeling · Language Modelling +2
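A minimal sketch, assuming gaze fixations are available as (x, y, duration) tuples, of turning them into a heatmap that can be stacked with the image before it is passed to a VLM; the fusion step and all shapes here are illustrative, and the paper's actual method may differ.

```python
# Build a duration-weighted gaze heatmap and fuse it with the image.
import numpy as np

def gaze_heatmap(fixations, h: int, w: int, sigma: float = 12.0) -> np.ndarray:
    """Accumulate duration-weighted Gaussian blobs at each fixation point."""
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w), dtype=np.float32)
    for x, y, dur in fixations:
        heat += dur * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma**2))
    return heat / heat.max() if heat.max() > 0 else heat

image = np.random.rand(224, 224).astype(np.float32)  # stand-in for an X-ray
fixations = [(60, 80, 0.9), (150, 120, 0.4)]         # (x, y, seconds)
heat = gaze_heatmap(fixations, *image.shape)

# One simple fusion choice: stack the heatmap as an extra input channel.
vlm_input = np.stack([image, heat], axis=0)          # shape (2, 224, 224)
prompt = "Describe the findings, attending to the highlighted regions."
print(vlm_input.shape, prompt)
```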

Hallucination Benchmark in Medical Visual Question Answering

1 code implementation • 11 Jan 2024 • Jinge Wu, Yunsoo Kim, Honghan Wu

The recent success of large language and vision models (LLVMs) on visual question answering (VQA), particularly their applications in medicine (Med-VQA), has shown great potential for realizing effective visual assistants for healthcare.

Hallucination · Medical Visual Question Answering +2

Exploring Multimodal Large Language Models for Radiology Report Error-checking

no code implementations • 20 Dec 2023 • Jinge Wu, Yunsoo Kim, Eva C. Keller, Jamie Chow, Adam P. Levine, Nikolas Pontikos, Zina Ibrahim, Paul Taylor, Michelle C. Williams, Honghan Wu

This paper proposes one of the first clinical applications of multimodal large language models (LLMs) as an assistant for radiologists to check errors in their reports.

Diagnostic

Gesture Recognition with a Skeleton-Based Keyframe Selection Module

no code implementations • 3 Dec 2021 • Yunsoo Kim, Hyun Myung

The BCCN consists of two pathways: (i) a keyframe pathway and (ii) a temporal-attention pathway.

Gesture Recognition
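An illustrative-only PyTorch sketch of a two-pathway design like the one the abstract describes, with a keyframe pathway and a temporal-attention pathway whose features are fused for classification; the layer sizes, fusion, and module names are invented, not the BCCN architecture itself.

```python
# Two-pathway skeleton-feature classifier: keyframe + temporal-attention paths.
import torch
import torch.nn as nn

class TwoPathwayNet(nn.Module):
    def __init__(self, feat_dim: int = 64, num_classes: int = 10):
        super().__init__()
        # Keyframe pathway: encodes a few selected frames, then pools them.
        self.keyframe = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        # Temporal-attention pathway: attends over the full frame sequence.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.temporal = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.head = nn.Linear(256, num_classes)

    def forward(self, keyframes: torch.Tensor, sequence: torch.Tensor) -> torch.Tensor:
        # keyframes: (B, K, D) selected frames; sequence: (B, T, D) all frames
        kf = self.keyframe(keyframes).mean(dim=1)       # (B, 128)
        attended, _ = self.attn(sequence, sequence, sequence)
        tp = self.temporal(attended.mean(dim=1))        # (B, 128)
        return self.head(torch.cat([kf, tp], dim=-1))   # (B, num_classes)

model = TwoPathwayNet()
logits = model(torch.randn(2, 4, 64), torch.randn(2, 32, 64))
print(logits.shape)  # torch.Size([2, 10])
```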
