Search Results for author: Yunsoo Kim

Found 4 papers, 1 paper with code

Enhancing Human-Computer Interaction in Chest X-ray Analysis using Vision and Language Model with Eye Gaze Patterns

no code implementations · 3 Apr 2024 · Yunsoo Kim, Jinge Wu, Yusuf Abdulle, Yue Gao, Honghan Wu

This work proposes a novel approach to enhancing human-computer interaction in chest X-ray analysis: Vision-Language Models (VLMs) augmented with radiologists' attention by incorporating eye gaze data alongside textual prompts.

Language Modelling · Question Answering · +1

Hallucination Benchmark in Medical Visual Question Answering

1 code implementation · 11 Jan 2024 · Jinge Wu, Yunsoo Kim, Honghan Wu

The recent success of large language and vision models (LLVMs) on visual question answering (VQA), particularly their applications in medicine (Med-VQA), has shown great potential for realizing effective visual assistants for healthcare.

Hallucination · Medical Visual Question Answering · +2

Exploring Multimodal Large Language Models for Radiology Report Error-checking

no code implementations · 20 Dec 2023 · Jinge Wu, Yunsoo Kim, Eva C. Keller, Jamie Chow, Adam P. Levine, Nikolas Pontikos, Zina Ibrahim, Paul Taylor, Michelle C. Williams, Honghan Wu

This paper proposes one of the first clinical applications of multimodal large language models (LLMs) as an assistant for radiologists to check errors in their reports.

Gesture Recognition with a Skeleton-Based Keyframe Selection Module

no code implementations · 3 Dec 2021 · Yunsoo Kim, Hyun Myung

The BCCN consists of two pathways: (i) a keyframe pathway and (ii) a temporal-attention pathway.

Gesture Recognition
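
The two-pathway design mentioned in the abstract can be sketched as a toy Python outline. This is a hypothetical illustration only: the function names, the motion-based saliency score, and the fusion rule are all assumptions, since the listing states only that a keyframe pathway and a temporal-attention pathway exist.

```python
import math

# Hypothetical sketch of a two-pathway model like the BCCN described above.
# Frames are simplified to single floats standing in for skeleton features.

def keyframe_pathway(frames):
    """Keep frames whose frame-to-frame motion exceeds the average (toy 'keyframes')."""
    scores = [abs(b - a) for a, b in zip(frames, frames[1:])]
    threshold = sum(scores) / len(scores)
    return [f for f, s in zip(frames[1:], scores) if s >= threshold]

def temporal_attention_pathway(frames):
    """Pool all frames with softmax weights over a toy per-frame score."""
    exps = [math.exp(f) for f in frames]
    total = sum(exps)
    return sum((e / total) * f for e, f in zip(exps, frames))

def two_pathway_fusion(frames):
    """Combine both pathways into a single representation (toy fusion: averaging)."""
    keyframes = keyframe_pathway(frames)
    keyframe_mean = sum(keyframes) / len(keyframes)
    attended = temporal_attention_pathway(frames)
    return (keyframe_mean + attended) / 2

result = two_pathway_fusion([0.1, 0.9, 0.2, 0.8, 0.3])
print(result)
```

In an actual model each pathway would be a learned network over skeleton sequences; the point of the sketch is only the structure: one branch selects salient keyframes, the other attends over the whole sequence, and the outputs are fused.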
