no code implementations • 24 Mar 2025 • Changho Shin, Xinya Yan, Suenggwan Jo, Sungjun Cho, Shourjo Aditya Chaudhuri, Frederic Sala
Our experiments reveal that TARDIS enhances downstream task performance without fine-tuning, can mitigate temporal misalignment even when data from the exact target time period is unavailable, and remains efficient even when the temporal information of the target data points is unknown at inference time.
no code implementations • 19 Aug 2024 • Jaehoon Lee, Hankook Lee, Sungik Choi, Sungjun Cho, Moontae Lee
When solving forecasting problems involving multiple time-series features, existing approaches often fall into one of two extreme categories, depending on whether they utilize inter-feature information: univariate and complete-multivariate models.
1 code implementation • 13 Aug 2024 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Moontae Lee
Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora.
no code implementations • 29 Jul 2024 • Seungyeon Rhyu, Kichang Yang, Sungjun Cho, Jaehyeon Kim, Kyogu Lee, Moontae Lee
Music generation introduces challenging complexities to large language models.
1 code implementation • CVPR 2024 • Minhyuk Seo, Hyunseo Koh, Wonje Jeung, Minjae Lee, San Kim, Hankook Lee, Sungjun Cho, Sungik Choi, Hyunwoo Kim, Jonghyun Choi
Online continual learning suffers from an underfitted solution due to insufficient training for prompt model update (e.g., single-epoch training).
no code implementations • 8 Sep 2023 • Sungjun Cho, Seunghyuk Cho, Sungwoo Park, Hankook Lee, Honglak Lee, Moontae Lee
Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space.
no code implementations • 8 Sep 2023 • Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee
Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels.
no code implementations • 2nd Annual Topology, Algebra, and Geometry in Machine Learning Workshop 2023 • Sungjun Cho, Seunghyuk Cho, Sungwoo Park, Hankook Lee, Honglak Lee, Moontae Lee
Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space.
1 code implementation • 27 Jan 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Honglak Lee, Taesup Moon, Moontae Lee
Since the recent advent of regulations for data protection (e.g., the General Data Protection Regulation), there has been increasing demand for deleting information learned from sensitive data in pre-trained models without retraining from scratch.
1 code implementation • 27 Oct 2022 • Sungjun Cho, Seonwoo Min, Jinwoo Kim, Moontae Lee, Honglak Lee, Seunghoon Hong
The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input.
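To illustrate the cost claim above, here is a minimal sketch of attention restricted to an explicit edge list, so that work scales with the number of edges rather than quadratically with the number of nodes. This is an illustrative NumPy toy, not the paper's implementation; the function name `edge_sparse_attention` and its interface are assumptions for the example.

```python
import numpy as np

def edge_sparse_attention(q, k, v, edges):
    """Scaled dot-product attention evaluated only over a given edge list.

    q, k, v: (n, d) arrays of queries, keys, and values.
    edges:   iterable of (i, j) pairs; node i attends only to node j.
    Total work is proportional to len(edges), not n**2.
    """
    n, d = q.shape
    per_node = {}  # i -> list of (j, unnormalized score)
    for i, j in edges:
        per_node.setdefault(i, []).append((j, q[i] @ k[j] / np.sqrt(d)))
    out = np.zeros_like(v)
    for i, pairs in per_node.items():
        js = np.array([j for j, _ in pairs])
        s = np.array([score for _, score in pairs])
        w = np.exp(s - s.max())        # softmax over this node's edges only
        w /= w.sum()
        out[i] = w @ v[js]             # weighted sum of attended values
    return out
```

A node with a single outgoing edge simply copies that neighbor's value, and a node with no edges contributes zeros, which makes the sparsity pattern itself an inductive bias the model can choose.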
no code implementations • 26 Sep 2022 • Hyunjae Lee, Gihyeon Lee, Junhwan Kim, Sungjun Cho, Dohyun Kim, Donggeun Yoo
However, it often results in selecting a sub-optimal configuration, since training with a high-performing configuration typically converges slowly in the early phase.
no code implementations • 7 Sep 2022 • Sung Moon Ko, Sungjun Cho, Dae-Woong Jeong, Sehui Han, Moontae Lee, Honglak Lee
Conventional methods ask users to specify an appropriate number of clusters as a hyperparameter, then assume that all input graphs share the same number of clusters.
1 code implementation • 22 Aug 2022 • Jinwoo Kim, Saeyoon Oh, Sungjun Cho, Seunghoon Hong
Many problems in computer vision and machine learning can be cast as learning on hypergraphs that represent higher-order relations.
2 code implementations • 6 Jul 2022 • Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice.
Ranked #5 on Graph Regression on PGR
1 code implementation • CVPR 2023 • Sungmin Cha, Sungjun Cho, Dasol Hwang, Sunwon Hong, Moontae Lee, Taesup Moon
The main reason for the ineffectiveness of their method lies in not fully addressing the data imbalance issue, especially in computing the gradients for learning the affine transformation parameters of BN.
no code implementations • 12 Nov 2021 • Moontae Lee, Sungjun Cho, Kun Dong, David Mimno, David Bindel
Across many data domains, co-occurrence statistics about the joint appearance of objects are powerfully informative.
no code implementations • IJCNLP 2019 • Moontae Lee, Sungjun Cho, David Bindel, David Mimno
Despite their great scalability on large data and their ability to capture correlations between topics, spectral topic models have not been widely adopted due to their lack of reliability on real data and the absence of practical implementations.