no code implementations • 14 Feb 2024 • Yeongjae Cho, Taehee Kim, Heejun Shin, Sungzoon Cho, Dongmyung Shin
The model is developed in stages: it is first pretrained on natural images and texts, then trained on longitudinal chest X-ray data.
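A minimal sketch of such a two-stage schedule is below; the tiny model, random tensors, and MSE loss are placeholder stand-ins, not the authors' actual architecture or objectives:

```python
import torch
from torch import nn, optim

# Placeholder model: a tiny encoder standing in for the paper's vision-language model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
opt = optim.AdamW(model.parameters(), lr=1e-4)

def train_stage(batches, epochs):
    # One generic supervised stage; MSE is a stand-in for the paper's objectives.
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, targets in batches:
            opt.zero_grad()
            loss_fn(model(images), targets).backward()
            opt.step()

# Stage 1: pretrain on natural image-text pairs (random tensors as placeholders).
train_stage([(torch.randn(8, 3, 32, 32), torch.randn(8, 128)) for _ in range(4)], epochs=2)

# Stage 2: continue training on longitudinal chest X-ray data (again placeholders).
train_stage([(torch.randn(8, 3, 32, 32), torch.randn(8, 128)) for _ in range(4)], epochs=2)
```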
no code implementations • 12 Jan 2024 • Taehee Kim, Yeongjae Cho, Heejun Shin, Yohan Jo, Dongmyung Shin
Visual question answering (VQA) is a task in which, given an image, a series of questions is asked about it.
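For illustration, a tiny VQA-style interface might look like the following; `VQASession` and the stub `answer_fn` are hypothetical names, not from the paper:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VQASession:
    image: bytes                             # the fixed image the questions refer to
    answer_fn: Callable[[bytes, str], str]   # any model mapping (image, question) -> answer

    def ask(self, questions: list[str]) -> list[str]:
        # Every question in the series is answered against the same image.
        return [self.answer_fn(self.image, q) for q in questions]

# Usage with a stub model (a real VQA model would be plugged in here).
session = VQASession(image=b"...", answer_fn=lambda img, q: "yes")
print(session.ask(["Is there a cat?", "Is it sleeping?"]))
```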
no code implementations • 4 Dec 2023 • Heejun Shin, Taehee Kim, Jongho Lee, Se Young Chun, Seungryung Cho, Dongmyung Shin
In the FACT method, we meta-train a neural network and a hash-encoder using a few scans (= 15) and apply a new regularization technique to reconstruct the details of an anatomical structure.
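A hedged sketch of few-shot meta-training with a regularized inner loop is shown below; it uses a generic Reptile-style update and a plain L2 penalty as stand-ins, not the authors' FACT algorithm or their new regularization technique:

```python
import copy
import torch
from torch import nn, optim

# Placeholder network standing in for the paper's hash-encoder plus reconstruction model.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

def inner_adapt(net, coords, intensities, steps=5, lr=1e-3, reg_weight=1e-4):
    opt = optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Data term plus a plain L2 penalty, a stand-in for the paper's regularization.
        loss = ((net(coords) - intensities) ** 2).mean()
        loss = loss + reg_weight * sum((p ** 2).sum() for p in net.parameters())
        loss.backward()
        opt.step()
    return net

# Reptile-style meta-training over a handful of scans (random tensors as placeholders).
scans = [(torch.randn(32, 3), torch.randn(32, 1)) for _ in range(15)]
meta_lr = 0.1
for coords, intensities in scans:
    adapted = inner_adapt(copy.deepcopy(model), coords, intensities)
    with torch.no_grad():
        # Move the meta-parameters toward the scan-adapted parameters.
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (q - p))
```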
no code implementations • 1 May 2023 • Taehee Kim, Hyuk-Yoon Kwon
In this paper, we first propose a method for defining anomalies that considers not only individual energy sources but also the correlations between them.
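One illustrative way to define anomalies through cross-source correlations (not the paper's exact formulation) is sketched below:

```python
import numpy as np

def correlation_anomalies(readings, window=24, threshold=0.5):
    """Flag windows where the usual correlation between two energy sources breaks down.

    readings: array of shape (T, 2) with synchronized measurements from two sources.
    An illustrative definition only, not the paper's exact formulation.
    """
    flags = []
    for start in range(0, len(readings) - window + 1, window):
        chunk = readings[start:start + window]
        r = np.corrcoef(chunk[:, 0], chunk[:, 1])[0, 1]
        # A window can be anomalous even when each source individually looks normal,
        # as long as the cross-source correlation collapses.
        flags.append((start, r < threshold))
    return flags

# Usage with synthetic data: two sources that normally move together.
t = np.arange(240)
source_a = np.sin(t / 10.0)
source_b = source_a + 0.1 * np.random.randn(240)
print(correlation_anomalies(np.column_stack([source_a, source_b])))
```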
no code implementations • 21 Sep 2022 • Jihyeon Lee, Taehee Kim, Yunwon Tae, Cheonbok Park, Jaegul Choo
Incorporating personal preference is crucial in advanced machine translation tasks.
1 code implementation • COLING 2022 • Taehee Kim, ChaeHun Park, Jimin Hong, Radhika Dua, Edward Choi, Jaegul Choo
To analyze this, we first train a classifier that identifies machine-written sentences and observe that the linguistic features of sentences identified as machine-written differ significantly from those of human-written sentences.
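A minimal sketch of training such a classifier, with toy sentences and a TF-IDF plus logistic-regression pipeline as stand-ins for the paper's setup:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled sentences as placeholders for the paper's human/machine corpora.
sentences = [
    "We had dinner by the river and talked until midnight.",
    "It is shown that the proposed method is the proposed method.",
    "The results suggest a modest improvement over prior work.",
    "The the model model achieves achieves high accuracy.",
]
labels = [0, 1, 0, 1]  # 0 = human-written, 1 = machine-written

# Character n-grams give the classifier shallow linguistic cues such as repetition.
clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["The model achieves achieves high accuracy."]))
```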
1 code implementation • EMNLP 2021 • Jimin Hong, Taehee Kim, Hyesu Lim, Jaegul Choo
During the fine-tuning phase of transfer learning, the pretrained vocabulary remains unchanged, while model parameters are updated.
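This convention can be illustrated with a toy example: the token-to-id vocabulary below stays frozen while every model parameter receives gradient updates (all names here are illustrative, not the paper's code):

```python
import torch
from torch import nn, optim

# Frozen pretrained vocabulary: the token-to-id map is never extended at fine-tuning time.
vocab = {"[UNK]": 0, "the": 1, "model": 2, "works": 3}

model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Flatten(), nn.Linear(16 * 3, 2))
opt = optim.AdamW(model.parameters(), lr=5e-5)

def encode(tokens):
    # Unseen downstream-domain words fall back to [UNK] instead of getting new ids.
    return torch.tensor([[vocab.get(t, vocab["[UNK]"]) for t in tokens]])

logits = model(encode(["the", "electrocardiogram", "works"]))
loss = nn.functional.cross_entropy(logits, torch.tensor([1]))
loss.backward()
opt.step()  # all model parameters update; the vocabulary itself does not
```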
no code implementations • ACL 2021 • Cheonbok Park, Yunwon Tae, Taehee Kim, Soyoung Yang, Mohammad Azam Khan, Eunjeong Park, Jaegul Choo
To address this issue, this paper presents a novel meta-learning algorithm for unsupervised neural machine translation (UNMT) that trains the model to adapt to another domain by utilizing only a small amount of training data.
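A hedged, first-order MAML-style sketch of this idea, with the translation model reduced to a toy regressor and random tensors standing in for per-domain data (not the paper's actual algorithm):

```python
import copy
import torch
from torch import nn

# Toy regressor standing in for the UNMT model; domains are random tensor pairs.
meta_model = nn.Linear(8, 8)
meta_opt = torch.optim.Adam(meta_model.parameters(), lr=1e-3)

def loss_fn(net, batch):
    x, y = batch
    return ((net(x) - y) ** 2).mean()

domains = [
    ((torch.randn(16, 8), torch.randn(16, 8)),   # small in-domain support set
     (torch.randn(16, 8), torch.randn(16, 8)))   # query set from the same domain
    for _ in range(4)
]

for support, query in domains:
    learner = copy.deepcopy(meta_model)
    inner_opt = torch.optim.SGD(learner.parameters(), lr=1e-2)
    for _ in range(3):                           # adapt on the small support set
        inner_opt.zero_grad()
        loss_fn(learner, support).backward()
        inner_opt.step()
    # First-order update: reuse the adapted model's query gradient for the meta-model.
    meta_opt.zero_grad()
    loss_fn(learner, query).backward()
    with torch.no_grad():
        for p, q in zip(meta_model.parameters(), learner.parameters()):
            p.grad = q.grad.clone()
    meta_opt.step()
```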