10 Oct 2023 • Sung Moon Ko, Sumin Lee, Dae-Woong Jeong, Woohyung Lim, Sehui Han
Transfer learning is a crucial technique for handling small datasets that are potentially related to other, more abundant data.
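The idea can be sketched in a few lines: reuse a feature extractor whose weights were (notionally) learned on the abundant source data, freeze it, and fit only a small head on the scarce target data. Everything below is illustrative, not the paper's method; the "pretrained" weights are a stand-in random projection.

```python
import numpy as np

# Hypothetical transfer-learning sketch: W_pre stands in for weights
# pretrained on abundant source data; it stays frozen, and only a
# linear head is fit on the small target dataset.
rng = np.random.default_rng(0)

W_pre = rng.normal(size=(16, 8))        # frozen "pretrained" weights

def extract_features(x):
    """Frozen feature extractor: fixed projection followed by ReLU."""
    return np.maximum(x @ W_pre, 0.0)

# Only 20 labeled target examples -- too few to train a model from scratch.
X_small = rng.normal(size=(20, 16))
y_small = X_small.sum(axis=1)           # toy regression target

Phi = extract_features(X_small)
# Fit just the head in closed form (least squares); W_pre is never updated.
head, *_ = np.linalg.lstsq(Phi, y_small, rcond=None)

preds = Phi @ head
print(preds.shape)  # (20,)
```

The design point is that only the 8 head parameters are estimated from the 20 target samples; the high-capacity extractor is inherited from the source task.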
8 Sep 2023 • Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee
Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels.
7 Sep 2022 • Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee
Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.
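To make the limitation concrete, here is a minimal numpy-only k-means sketch (not the paper's method): the cluster count k is a user-supplied hyperparameter, so every input is forced into the same k clusters even when the true number differs.

```python
import numpy as np

# Minimal k-means (numpy only), for illustration: the number of clusters k
# is fixed up front by the user, exactly the convention criticized above.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers (keep the old center if a cluster went empty).
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

rng = np.random.default_rng(0)
# Data with THREE well-separated groups...
X = np.vstack([rng.normal(c, 0.1, size=(30, 2)) for c in (0.0, 5.0, 10.0)])
# ...but the hyperparameter says k = 2, so at most 2 cluster ids come back.
labels = kmeans(X, k=2)
print(sorted(set(labels)))
```

The same k would be applied to every input graph, regardless of how many clusters each one actually contains.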
1 Jan 2021 • Dae-Woong Jeong, Kiyoung Kim, ChangYoung Park, Sehui Han, Woohyung Lim
We assume that enough unlabeled data exist that follow the true distribution, and that this distribution can be roughly estimated from domain knowledge or a few samples.
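A hedged illustration of that assumption, with a 1-D Gaussian standing in for the unknown true distribution (the distributional form is an assumption made here for illustration only): even a handful of samples gives a rough estimate, while the abundant unlabeled pool pins it down tightly.

```python
import numpy as np

# Illustration only: the "true" distribution is a 1-D Gaussian here.
rng = np.random.default_rng(0)
true_mu, true_sigma = 2.0, 1.5

unlabeled = rng.normal(true_mu, true_sigma, size=10_000)  # abundant pool
few = rng.normal(true_mu, true_sigma, size=10)            # a few samples

# Rough estimate from only 10 samples...
mu_few, sigma_few = few.mean(), few.std(ddof=1)
# ...versus the estimate the large unlabeled pool supports.
mu_big, sigma_big = unlabeled.mean(), unlabeled.std(ddof=1)

print(f"few-sample mu: {mu_few:.2f}, pooled mu: {mu_big:.2f}")
```

The few-sample estimate is noisy (standard error ≈ sigma/√10) but already "roughly" locates the distribution, matching the assumption stated above.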
12 Feb 2019 • Dae-Woong Jeong, Jaehun Kim, Young-Seok Kim, Tae-Ho Kim, Myungsu Chae
Existing high-performance deep learning models require intensive computation.