Search Results for author: Tackeun Kim

Found 3 papers, 3 papers with code

EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images

2 code implementations • NeurIPS 2023 • Seongsu Bae, Daeun Kyung, Jaehee Ryu, Eunbyeol Cho, Gyubok Lee, Sunjun Kweon, JungWoo Oh, Lei Ji, Eric I-Chao Chang, Tackeun Kim, Edward Choi

To develop our dataset, we first construct two uni-modal resources: 1) The MIMIC-CXR-VQA dataset, our newly created medical visual question answering (VQA) benchmark, specifically designed to augment the imaging modality in EHR QA, and 2) EHRSQL (MIMIC-IV), a refashioned version of a previously established table-based EHR QA dataset.

Decision Making • Medical Visual Question Answering • +2
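As a rough illustration of the multi-modal design described in the abstract above, the snippet below sketches how a single EHRXQA-style example might be represented, pairing a table-based sub-question (drawing on the EHRSQL / MIMIC-IV resource) with an image-based sub-question (drawing on MIMIC-CXR-VQA). The field names, example question, and answer are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical sketch of one multi-modal EHR QA example; keys and values are
# illustrative assumptions, not the published EHRXQA schema.
example = {
    "question": (
        "Did the patient prescribed vancomycin last month show cardiomegaly "
        "on their most recent chest X-ray?"
    ),
    "modalities": ["table", "image"],        # spans structured EHR tables and a CXR image
    "table_resource": "EHRSQL (MIMIC-IV)",   # table-based EHR QA resource
    "image_resource": "MIMIC-CXR-VQA",       # imaging-based VQA resource
    "answer": "yes",                         # made-up label for illustration
}

print(example["question"])
```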

Vision-Language Generative Model for View-Specific Chest X-ray Generation

1 code implementation • 23 Feb 2023 • Hyungyung Lee, Da Young Lee, Wonjae Kim, Jin-Hwa Kim, Tackeun Kim, Jihang Kim, Leonard Sunwoo, Edward Choi

Synthetic medical data generation has opened up new possibilities in the healthcare domain, offering a powerful tool for simulating clinical scenarios, enhancing diagnostic and treatment quality, gaining granular medical knowledge, and accelerating the development of unbiased algorithms.

Language Modelling • Quantization

Graph-Text Multi-Modal Pre-training for Medical Representation Learning

1 code implementation • 18 Mar 2022 • Sungjin Park, Seongsu Bae, Jiho Kim, Tackeun Kim, Edward Choi

MedGTX uses a novel graph encoder to exploit the graphical nature of structured EHR data, a text encoder to handle unstructured text, and a cross-modal encoder to learn a joint representation space.

Representation Learning
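To make the three-encoder design described above more concrete, here is a minimal PyTorch sketch assuming a one-step message-passing graph encoder, a small Transformer text encoder, and cross-attention fusion. The module sizes, the simple aggregation step, and the pooling choice are assumptions for illustration, not MedGTX's actual implementation.

```python
# Minimal sketch of a MedGTX-style architecture: a graph encoder for structured
# EHR, a text encoder for clinical notes, and a cross-modal encoder producing a
# joint representation. All design details below are illustrative assumptions.
import torch
import torch.nn as nn


class GraphEncoder(nn.Module):
    """Encodes structured EHR entries as graph nodes with one message-passing step."""
    def __init__(self, node_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(node_dim, hidden_dim)
        self.update = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, num_nodes, node_dim), adj: (batch, num_nodes, num_nodes)
        h = torch.relu(self.proj(node_feats))
        messages = adj @ h                      # aggregate neighbor states
        return torch.relu(self.update(messages))


class TextEncoder(nn.Module):
    """Encodes unstructured text tokens with a small Transformer encoder."""
    def __init__(self, vocab_size: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len)
        return self.encoder(self.embed(token_ids))


class CrossModalEncoder(nn.Module):
    """Fuses graph and text representations into a joint space via cross-attention."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, graph_h: torch.Tensor, text_h: torch.Tensor) -> torch.Tensor:
        # Text tokens attend over graph nodes, then mean-pool into one joint vector.
        fused, _ = self.cross_attn(query=text_h, key=graph_h, value=graph_h)
        return self.out(fused.mean(dim=1))


# Toy usage with random inputs.
hidden = 64
graph_enc = GraphEncoder(node_dim=32, hidden_dim=hidden)
text_enc = TextEncoder(vocab_size=1000, hidden_dim=hidden)
fusion = CrossModalEncoder(hidden_dim=hidden)

nodes = torch.randn(1, 8, 32)                   # 8 structured-EHR nodes
adj = torch.eye(8).unsqueeze(0)                 # trivial adjacency for the sketch
tokens = torch.randint(0, 1000, (1, 16))        # 16 text tokens

joint = fusion(graph_enc(nodes, adj), text_enc(tokens))  # (1, hidden) joint representation
```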
