Search Results for author: Tackeun Kim

Found 3 papers, 3 papers with code

EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images

2 code implementations · NeurIPS 2023 · Seongsu Bae, Daeun Kyung, Jaehee Ryu, Eunbyeol Cho, Gyubok Lee, Sunjun Kweon, JungWoo Oh, Lei Ji, Eric I-Chao Chang, Tackeun Kim, Edward Choi

To develop our dataset, we first construct two uni-modal resources: 1) The MIMIC-CXR-VQA dataset, our newly created medical visual question answering (VQA) benchmark, specifically designed to augment the imaging modality in EHR QA, and 2) EHRSQL (MIMIC-IV), a refashioned version of a previously established table-based EHR QA dataset.

Tasks: Decision Making · Medical Visual Question Answering · +2
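As a rough illustration only (not taken from the EHRXQA paper or its released code), the sketch below shows how questions from the two uni-modal resources mentioned above might sit under one shared multi-modal QA schema; every field name and example value is a hypothetical assumption.

# Hedged sketch, not the authors' code: a minimal illustration of how the two
# uni-modal resources (MIMIC-CXR-VQA for images, EHRSQL/MIMIC-IV for tables)
# could feed one multi-modal QA schema. All field names and example values
# below are hypothetical assumptions for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EHRQAExample:
    question: str                          # natural-language question
    modality: str                          # "table", "image", or "table+image"
    sql: Optional[str] = None              # SQL over MIMIC-IV-style tables
    image_ids: List[str] = field(default_factory=list)  # chest X-ray study IDs

# Table-modality question in the spirit of EHRSQL (MIMIC-IV)
table_example = EHRQAExample(
    question="How many prescriptions did patient 12345 receive last year?",
    modality="table",
    sql="SELECT COUNT(*) FROM prescriptions WHERE subject_id = 12345",
)

# Image-modality question in the spirit of MIMIC-CXR-VQA
image_example = EHRQAExample(
    question="Does the latest chest X-ray of patient 12345 show pleural effusion?",
    modality="image",
    image_ids=["study-0001"],
)

print(table_example.modality, image_example.modality)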

UniXGen: A Unified Vision-Language Model for Multi-View Chest X-ray Generation and Report Generation

1 code implementation · 23 Feb 2023 · Hyungyung Lee, Da Young Lee, Wonjae Kim, Jin-Hwa Kim, Tackeun Kim, Jihang Kim, Leonard Sunwoo, Edward Choi

We also find that view-specific special tokens can distinguish between different views and properly generate specific views even if they do not exist in the dataset, and that utilizing multi-view chest X-rays can faithfully capture the abnormal findings in the additional X-rays.

Tasks: Language Modelling · Quantization
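The UniXGen entry above describes view-specific special tokens for multi-view generation. The sketch below is not the UniXGen implementation; it only shows one plausible way such tokens could tag discretized X-ray token chunks so a single sequence model can be asked for a particular view. Token names, the chunk layout, and the use of discrete image tokens are assumptions.

# Hedged sketch, not the UniXGen implementation: view-specific special tokens
# mark which view's image-token chunk follows in the sequence. Token names and
# the chunk layout are assumptions for illustration.
VIEW_TOKENS = {"PA": "[PA]", "AP": "[AP]", "LATERAL": "[LATERAL]"}

def build_multiview_sequence(report_tokens, image_tokens_by_view):
    """Concatenate report tokens with view-tagged image token chunks."""
    sequence = list(report_tokens)
    for view, image_tokens in image_tokens_by_view.items():
        sequence.append(VIEW_TOKENS[view])  # special token announces which view follows
        sequence.extend(image_tokens)       # e.g., codebook indices from a discrete image tokenizer
    return sequence

# A study with PA and lateral views; at generation time the [AP] token could
# still be used to request a view that was absent from the training study.
sequence = build_multiview_sequence(
    report_tokens=["[REPORT]", "no", "acute", "cardiopulmonary", "process"],
    image_tokens_by_view={"PA": ["<img_12>", "<img_87>"], "LATERAL": ["<img_3>", "<img_55>"]},
)
print(sequence)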

Graph-Text Multi-Modal Pre-training for Medical Representation Learning

1 code implementation · 18 Mar 2022 · Sungjin Park, Seongsu Bae, Jiho Kim, Tackeun Kim, Edward Choi

MedGTX uses a novel graph encoder to exploit the graphical nature of structured EHR data, a text encoder to handle unstructured text, and a cross-modal encoder to learn a joint representation space.

Tasks: Representation Learning
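As a very rough, hedged sketch (not the released MedGTX code), the snippet below mirrors the three-encoder layout described in the MedGTX entry above: a graph-side encoder over structured EHR node features, a text encoder over notes, and a cross-modal encoder that fuses both into a joint space. The toy graph encoder here ignores edge structure, which the real graph encoder exploits, and all layer choices and sizes are assumptions.

# Hedged sketch, not the released MedGTX code: a toy three-encoder layout that
# mirrors the description above. The real graph encoder exploits the EHR graph
# structure (edges), which this simplified version ignores; layer types and
# sizes here are assumptions for illustration.
import torch
import torch.nn as nn

class ToyGraphTextModel(nn.Module):
    def __init__(self, node_dim=128, vocab_size=30522, hidden=256):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.node_proj = nn.Linear(node_dim, hidden)        # project node features
        self.graph_encoder = nn.TransformerEncoder(layer(), num_layers=2)
        self.text_embed = nn.Embedding(vocab_size, hidden)
        self.text_encoder = nn.TransformerEncoder(layer(), num_layers=2)
        self.cross_encoder = nn.TransformerEncoder(layer(), num_layers=2)

    def forward(self, node_feats, text_ids):
        g = self.graph_encoder(self.node_proj(node_feats))  # (B, n_nodes, hidden)
        t = self.text_encoder(self.text_embed(text_ids))    # (B, n_tokens, hidden)
        return self.cross_encoder(torch.cat([g, t], dim=1)) # joint representation

model = ToyGraphTextModel()
joint = model(torch.randn(2, 16, 128), torch.randint(0, 30522, (2, 32)))
print(joint.shape)  # torch.Size([2, 48, 256])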
