Search Results for author: Hyungyung Lee

Found 3 papers, 3 papers with code

Vision-Language Generative Model for View-Specific Chest X-ray Generation

1 code implementation • 23 Feb 2023 • Hyungyung Lee, Da Young Lee, Wonjae Kim, Jin-Hwa Kim, Tackeun Kim, Jihang Kim, Leonard Sunwoo, Edward Choi

Synthetic medical data generation has opened up new possibilities in the healthcare domain, offering a powerful tool for simulating clinical scenarios, enhancing diagnostic and treatment quality, gaining granular medical knowledge, and accelerating the development of unbiased algorithms.

Language Modelling • Quantization

Unconditional Image-Text Pair Generation with Multimodal Cross Quantizer

1 code implementation • 15 Apr 2022 • Hyungyung Lee, Sungjin Park, Joonseok Lee, Edward Choi

To learn a multimodal semantic correlation in a quantized space, we combine VQ-VAE with a Transformer encoder and apply an input masking strategy.
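The sketch below is a minimal illustration, not the authors' implementation, of the approach the abstract describes: image and text inputs are mapped to discrete codes through a VQ-VAE-style codebook, concatenated into one sequence, and modeled by a Transformer encoder trained with random input masking. All module names, dimensions, and the mask ratio are illustrative assumptions.

```python
# Hedged sketch: shared VQ codebook + Transformer encoder with input masking.
# Sizes, names, and the masking scheme are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup as in VQ-VAE (straight-through estimator omitted)."""
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                                   # z: (batch, seq, dim)
        flat = z.reshape(-1, z.size(-1))
        dist = torch.cdist(flat, self.codebook.weight)      # distance to every code
        idx = dist.argmin(dim=-1).view(z.shape[:-1])        # (batch, seq) discrete codes
        return idx, self.codebook(idx)

class MaskedMultimodalEncoder(nn.Module):
    """Transformer encoder over concatenated image+text codes, trained with input masking."""
    def __init__(self, num_codes=512, dim=64, mask_ratio=0.15):
        super().__init__()
        self.embed = nn.Embedding(num_codes + 1, dim)       # +1 slot for the [MASK] token
        self.mask_id = num_codes
        self.mask_ratio = mask_ratio
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_codes)                # predict the original codes

    def forward(self, image_codes, text_codes):
        tokens = torch.cat([image_codes, text_codes], dim=1)
        mask = torch.rand_like(tokens, dtype=torch.float) < self.mask_ratio
        masked = tokens.masked_fill(mask, self.mask_id)
        logits = self.head(self.encoder(self.embed(masked)))
        # loss on masked positions learns joint image-text correlations in the quantized space
        return nn.functional.cross_entropy(logits[mask], tokens[mask])

# Usage with random stand-in inputs
vq = VectorQuantizer()
img_codes, _ = vq(torch.randn(2, 16, 64))     # quantize continuous image features
txt_codes = torch.randint(0, 512, (2, 8))     # pretend text already shares the codebook
loss = MaskedMultimodalEncoder()(img_codes, txt_codes)
print(loss.item())
```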

Multimodal Generation • Quantization

Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training

1 code implementation • 24 May 2021 • Jong Hak Moon, Hyungyung Lee, Woncheol Shin, Young-Hak Kim, Edward Choi

Recently, a number of studies have demonstrated impressive performance on diverse vision-language multi-modal tasks, such as image captioning and visual question answering, by extending the BERT architecture with multi-modal pre-training objectives.

Image Captioning • Medical Visual Question Answering • +6
