Search Results for author: Hyungyung Lee

Found 3 papers, 3 papers with code

UniXGen: A Unified Vision-Language Model for Multi-View Chest X-ray Generation and Report Generation

1 code implementation • 23 Feb 2023 • Hyungyung Lee, Da Young Lee, Wonjae Kim, Jin-Hwa Kim, Tackeun Kim, Jihang Kim, Leonard Sunwoo, Edward Choi

We also find that view-specific special tokens can distinguish between different views and properly generate specific views even when they do not exist in the dataset, and that utilizing multi-view chest X-rays faithfully captures the abnormal findings in the additional X-rays (a minimal sketch of the view-token idea follows this entry).

Language Modelling • Quantization
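The abstract mentions prepending view-specific special tokens so the model knows which chest X-ray view it is encoding or generating. The following is a minimal sketch of that general idea, assuming a learned view token is prepended to a sequence of quantized image tokens; the class, token names, and hyperparameters are hypothetical illustrations, not the released UniXGen code.

```python
# Minimal sketch of view-specific special tokens for a multi-view model.
# All names (VIEW_TOKENS, ViewConditionedSketch) are hypothetical, not UniXGen's.
import torch
import torch.nn as nn

VIEW_TOKENS = {"PA": 0, "AP": 1, "LATERAL": 2}  # assumed view vocabulary


class ViewConditionedSketch(nn.Module):
    def __init__(self, image_vocab_size: int, d_model: int = 256):
        super().__init__()
        # Separate embedding tables for view tokens and quantized image tokens.
        self.view_embed = nn.Embedding(len(VIEW_TOKENS), d_model)
        self.image_embed = nn.Embedding(image_vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, view_ids: torch.Tensor, image_tokens: torch.Tensor):
        # Prepend one view-specific special token per sequence so the model
        # is conditioned on which view it should encode or generate.
        view = self.view_embed(view_ids).unsqueeze(1)        # (B, 1, D)
        imgs = self.image_embed(image_tokens)                # (B, L, D)
        return self.encoder(torch.cat([view, imgs], dim=1))  # (B, 1+L, D)


if __name__ == "__main__":
    model = ViewConditionedSketch(image_vocab_size=1024)
    view_ids = torch.tensor([VIEW_TOKENS["PA"], VIEW_TOKENS["LATERAL"]])
    image_tokens = torch.randint(0, 1024, (2, 16))
    print(model(view_ids, image_tokens).shape)  # torch.Size([2, 17, 256])
```

Conditioning through a prepended special token (rather than a separate conditioning network) is what lets the same sequence model handle views it has never seen paired together, as the abstract describes.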

Unconditional Image-Text Pair Generation with Multimodal Cross Quantizer

1 code implementation • 15 Apr 2022 • Hyungyung Lee, Sungjin Park, Joonseok Lee, Edward Choi

To learn multimodal semantic correlations in a quantized space, we combine a VQ-VAE with a Transformer encoder and apply an input masking strategy (see the sketch after this entry).

Multimodal Generation • Quantization
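The snippet above describes masked modelling over a joint quantized image-text sequence. Below is a minimal sketch of that setup, assuming the image tokens come from a pre-trained VQ-VAE codebook merged with the text vocabulary; the names (MASK_ID, mask_inputs, MaskedQuantizedModel) and sizes are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: mask random positions in a joint text + VQ-VAE token sequence
# and train a Transformer encoder to reconstruct them. Hypothetical names/sizes.
import torch
import torch.nn as nn

VOCAB_SIZE = 2048     # assumed shared vocabulary of text tokens + VQ-VAE codes
MASK_ID = VOCAB_SIZE  # extra id reserved for the [MASK] token


def mask_inputs(tokens: torch.Tensor, mask_prob: float = 0.15):
    """Randomly replace a fraction of tokens with [MASK]; return inputs and targets."""
    mask = torch.rand_like(tokens, dtype=torch.float) < mask_prob
    inputs = tokens.masked_fill(mask, MASK_ID)
    targets = tokens.masked_fill(~mask, -100)  # ignore unmasked positions in the loss
    return inputs, targets


class MaskedQuantizedModel(nn.Module):
    def __init__(self, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE + 1, d_model)  # +1 for [MASK]
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(self.embed(inputs)))


if __name__ == "__main__":
    model = MaskedQuantizedModel()
    # Pretend sequence: text tokens followed by VQ-VAE image codes.
    tokens = torch.randint(0, VOCAB_SIZE, (2, 32))
    inputs, targets = mask_inputs(tokens)
    logits = model(inputs)
    loss = nn.functional.cross_entropy(logits.view(-1, VOCAB_SIZE), targets.view(-1))
    print(loss.item())
```

Because both modalities live in one discrete token space, a single masked-prediction objective can model cross-modal correlations without modality-specific heads; the paper's actual quantizer and masking schedule may differ from this sketch.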

Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training

1 code implementation • 24 May 2021 • Jong Hak Moon, Hyungyung Lee, Woncheol Shin, Young-Hak Kim, Edward Choi

Recently, a number of studies have demonstrated impressive performance on diverse vision-language multi-modal tasks, such as image captioning and visual question answering, by extending the BERT architecture with multi-modal pre-training objectives.

Image Captioning • Medical Visual Question Answering • +6
