Search Results for author: Jong Hak Moon

Found 3 papers, 3 papers with code

Publicly Shareable Clinical Large Language Model Built on Synthetic Clinical Notes

1 code implementation · 1 Sep 2023 · Sunjun Kweon, Junu Kim, Jiyoun Kim, Sujeong Im, Eunbyeol Cho, Seongsu Bae, JungWoo Oh, Gyubok Lee, Jong Hak Moon, Seng Chan You, Seungjin Baek, Chang Hoon Han, Yoon Bin Jung, Yohan Jo, Edward Choi

The development of large language models tailored for handling patients' clinical notes is often hindered by the limited accessibility and usability of these notes due to strict privacy regulations.

Language Modelling · Large Language Model

Correlation between Alignment-Uniformity and Performance of Dense Contrastive Representations

1 code implementation · 17 Oct 2022 · Jong Hak Moon, Wonjae Kim, Edward Choi

Recently, dense contrastive learning has shown superior performance on dense prediction tasks compared to instance-level contrastive learning.

Contrastive Learning

Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training

1 code implementation · 24 May 2021 · Jong Hak Moon, Hyungyung Lee, Woncheol Shin, Young-Hak Kim, Edward Choi

Recently, a number of studies have demonstrated impressive performance on diverse vision-language multi-modal tasks, such as image captioning and visual question answering, by extending the BERT architecture with multi-modal pre-training objectives.

Image Captioning · Medical Visual Question Answering · +6
