no code implementations • 3 Apr 2024 • Boje Deforce, Meng-Chieh Lee, Bart Baesens, Estefanía Serral Asensio, Jaemin Yoo, Leman Akoglu
A two-fold challenge for TSAD is a model that is both versatile and unsupervised, able to detect various types of time series anomalies (spikes, discontinuities, trend shifts, etc.) without labeled data.
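As a rough illustration of those anomaly types, the sketch below injects a synthetic spike, discontinuity, or trend shift into a clean series, a common ingredient in self-supervised TSAD pipelines. The magnitudes and positions are arbitrary choices for demonstration, not the paper's settings.

```python
import numpy as np

def inject_anomaly(x, kind="spike", rng=None):
    """Inject one synthetic anomaly into a copy of series x.

    Illustrative only: the anomaly types mirror those named in the
    abstract (spike, discontinuity, trend shift); magnitudes and
    positions are arbitrary, not the paper's settings.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    y = x.copy()
    t = rng.integers(len(x) // 4, 3 * len(x) // 4)  # anomaly location
    if kind == "spike":
        y[t] += 5.0 * x.std()                             # single-point outlier
    elif kind == "discontinuity":
        y[t:] += 3.0 * x.std()                            # sudden level shift
    elif kind == "trend":
        y[t:] += np.linspace(0, 4 * x.std(), len(x) - t)  # gradual drift
    return y

clean = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(1).standard_normal(500)
anomalous = inject_anomaly(clean, kind="trend")
```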
1 code implementation • 31 Mar 2024 • Sunwoo Kim, Shinhwan Kang, Fanchen Bu, Soo Yong Lee, Jaemin Yoo, Kijung Shin
Based on the generative SSL task, we propose a hypergraph SSL method, HypeBoy.
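To make the generative SSL task concrete, here is a toy hyperedge-filling setup: hide one node of a hyperedge and rank candidates for the missing slot. The scoring function is a co-occurrence placeholder invented for illustration, not HypeBoy's learned encoder.

```python
import random

# Toy hyperedge-filling task on a small hypergraph.
hyperedges = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}, {0, 4}]

def make_filling_example(edge, rng):
    edge = set(edge)
    target = rng.choice(sorted(edge))   # node to hide
    return edge - {target}, target      # (context, label)

def score(candidate, context, hyperedges):
    # Placeholder score: how often the candidate co-occurs with the
    # context nodes across hyperedges (not a learned model).
    return sum(len(context & e) for e in hyperedges if candidate in e)

rng = random.Random(0)
context, target = make_filling_example(hyperedges[1], rng)
nodes = set().union(*hyperedges)
ranked = sorted(nodes - context, key=lambda v: -score(v, context, hyperedges))
print(f"hidden node {target}, top prediction {ranked[0]}")
```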
no code implementations • 7 Feb 2024 • Soo Yong Lee, Sunwoo Kim, Fanchen Bu, Jaemin Yoo, Jiliang Tang, Kijung Shin
Second, how does the dependence between graph structure (A) and node features (X) affect GNNs?
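One crude way to probe such A-X dependence (an assumption for illustration, not the paper's measure) is to compare the feature similarity of linked node pairs against that of randomly drawn pairs; a large gap suggests structure and features are dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
edges = [(i, (i + 1) % n) for i in range(n)]   # toy ring graph

# Features correlated with position on the ring, so neighbors are similar.
angles = 2 * np.pi * np.arange(n) / n
X = np.column_stack([np.cos(angles), np.sin(angles)]) + 0.1 * rng.standard_normal((n, 2))

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

linked = np.mean([cos(X[i], X[j]) for i, j in edges])
random_pairs = np.mean([cos(X[rng.integers(n)], X[rng.integers(n)]) for _ in range(len(edges))])
print(f"linked pairs: {linked:.3f} vs random pairs: {random_pairs:.3f}")
```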
no code implementations • 28 Aug 2023 • Leman Akoglu, Jaemin Yoo
Self-supervised learning (SSL) is a rapidly growing paradigm that has recently transformed machine learning and many of its real-world applications by learning from massive amounts of unlabeled data via self-generated supervisory signals.
1 code implementation • 13 Jul 2023 • Jaemin Yoo, Yue Zhao, Lingxiao Zhao, Leman Akoglu
DSV captures the alignment between an augmentation function and the anomaly-generating mechanism with two surrogate losses, which approximate the discordance and the separability of the test data, respectively.
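A minimal sketch of this selection scheme appears below; both surrogate functions are placeholders invented for illustration, as the paper defines its own discordance and separability losses.

```python
import numpy as np

def separability(scores):
    # Placeholder surrogate: split test anomaly scores at the median;
    # a well-aligned augmentation should yield clearly separated groups.
    s = np.sort(scores)
    lo, hi = s[: len(s) // 2], s[len(s) // 2:]
    return (hi.mean() - lo.mean()) / (s.std() + 1e-8)

def discordance(aug_emb, test_emb):
    # Placeholder surrogate: distance between mean embeddings of the
    # augmented data and the test data.
    return float(np.linalg.norm(aug_emb.mean(0) - test_emb.mean(0)))

def dsv_like_score(scores, aug_emb, test_emb):
    # Prefer augmentations with separable scores and low discordance.
    return separability(scores) - discordance(aug_emb, test_emb)

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 90), rng.normal(5, 1, 10)])
print(f"separability = {separability(scores):.3f}")
```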
no code implementations • 21 Jun 2023 • Jaemin Yoo, Lingxiao Zhao, Leman Akoglu
The first is a new unsupervised validation loss that quantifies the alignment between the augmented training data and the (unlabeled) test data.
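A generic instance of such an alignment measure (illustrative only, not the paper's exact loss) is the squared maximum mean discrepancy between the augmented training sample and the unlabeled test sample:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared MMD with an RBF kernel between two samples.

    A generic alignment measure between augmented training data X and
    unlabeled test data Y; smaller means better alignment.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
train_aug = rng.standard_normal((128, 8))
test = rng.standard_normal((128, 8)) + 0.5   # shifted: poorer alignment
print(f"MMD^2 = {rbf_mmd2(train_aug, test):.4f}")
```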
1 code implementation • 5 Jun 2023 • Minyoung Choe, Sunwoo Kim, Jaemin Yoo, Kijung Shin
Interestingly, many real-world systems modeled as hypergraphs contain edge-dependent node labels, i.e., node labels that vary depending on hyperedges.
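Concretely, edge-dependent node labels can be stored as a mapping keyed by (hyperedge, node) pairs. The co-authorship toy example below (author roles per paper, with invented names) shows how the same node carries different labels in different hyperedges.

```python
# Each paper is a hyperedge over its authors; a node's label (role)
# depends on which hyperedge it appears in.
hyperedges = {
    "paper1": ["alice", "bob", "carol"],
    "paper2": ["bob", "carol"],
}
edge_dependent_labels = {
    ("paper1", "alice"): "first_author",
    ("paper1", "bob"): "corresponding",
    ("paper1", "carol"): "coauthor",
    ("paper2", "bob"): "first_author",    # bob's label differs per paper
    ("paper2", "carol"): "corresponding",
}
```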
1 code implementation • 4 Jun 2023 • Soo Yong Lee, Fanchen Bu, Jaemin Yoo, Kijung Shin
AERO-GNN provably mitigates the identified problems of deep graph attention, which is further demonstrated empirically by (a) its adaptive and less smooth attention functions and (b) its higher performance at deep layers (up to 64 layers).
1 code implementation • 31 Dec 2022 • Meng-Chieh Lee, Shubhranshu Shekhar, Jaemin Yoo, Christos Faloutsos
Given a large graph with few node labels, how can we (a) identify whether generalized network-effects (GNE) exist, (b) estimate GNE to explain the interrelations among node classes, and (c) exploit GNE efficiently to improve performance on downstream tasks?
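One simple way to picture GNE estimation (illustrative, not the paper's exact estimator) is a class-compatibility matrix: count how often classes co-occur across edges among labeled nodes, then row-normalize. Homophily shows up as a diagonal-heavy matrix, heterophily as off-diagonal mass.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
labels = {0: 0, 1: 0, 2: 1, 3: 1}   # node -> class (toy data)
c = 2

H = np.zeros((c, c))
for u, v in edges:
    if u in labels and v in labels:
        H[labels[u], labels[v]] += 1
        H[labels[v], labels[u]] += 1   # undirected: count both directions
H = H / H.sum(1, keepdims=True)        # row-normalize into compatibilities
print(H)
```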
1 code implementation • 8 Oct 2022 • Jaemin Yoo, Meng-Chieh Lee, Shubhranshu Shekhar, Christos Faloutsos
Graph neural networks (GNNs) have succeeded in many graph mining tasks, but their generalizability across graph scenarios is limited by the difficulty of training, hyperparameter tuning, and model selection itself.
1 code implementation • 16 Aug 2022 • Jaemin Yoo, Tiancheng Zhao, Leman Akoglu
Self-supervised learning (SSL) has emerged as a promising alternative for creating supervisory signals for real-world problems, avoiding the extensive cost of manual labeling.
1 code implementation • 9 Jun 2022 • Jaemin Yoo, Hyunsik Jeon, Jinhong Jung, U Kang
Given a graph with partial observations of node features, how can we estimate the missing features accurately?
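A generic feature-propagation baseline (not the paper's method) illustrates the task: repeatedly average neighbor features while clamping the observed rows to their known values.

```python
import numpy as np

def propagate_features(A, X, observed_mask, iters=50):
    """Estimate missing node features by iterative neighbor averaging.

    A: (n, n) adjacency matrix; X: (n, d) features with unknown rows;
    observed_mask: (n,) boolean array marking rows whose features are known.
    Generic baseline for illustration, not the paper's estimator.
    """
    deg = A.sum(1, keepdims=True).clip(min=1)
    X_hat = np.where(observed_mask[:, None], X, 0.0)  # zero-init missing rows
    for _ in range(iters):
        X_hat = (A @ X_hat) / deg                     # neighbor average
        X_hat[observed_mask] = X[observed_mask]       # clamp known rows
    return X_hat
```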
1 code implementation • 22 Feb 2022 • Jaemin Yoo, Lee Sael
How can we effectively find the best structures in tree models?
1 code implementation • 21 Feb 2022 • Jaemin Yoo, Sooyeon Shim, U Kang
Then, we propose NodeSam (Node Split and Merge) and SubMix (Subgraph Mix), two model-agnostic approaches for graph augmentation that satisfy all desired properties with different motivations.
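To give a flavor of the node-split idea, the toy augmentation below replaces one node with two connected copies and randomly partitions its neighbors between them, roughly preserving local structure. This is an illustration only, not the paper's NodeSam.

```python
import random
import networkx as nx

def node_split(G, node, rng):
    """Toy node-split augmentation (illustrative, not NodeSam itself)."""
    H = G.copy()
    nbrs = list(H.neighbors(node))
    a, b = f"{node}_a", f"{node}_b"
    H.add_edge(a, b)                       # keep the two copies connected
    for v in nbrs:
        H.add_edge(rng.choice([a, b]), v)  # randomly assign each neighbor
    H.remove_node(node)
    return H

G = nx.karate_club_graph()
H = node_split(G, 0, random.Random(0))
print(G.number_of_nodes(), "->", H.number_of_nodes())
```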
no code implementations • 1 Jan 2021 • Jaemin Yoo, Lee Sael
In this work, we propose the Decision Transformer Network (DTN), a highly accurate and interpretable tree model built on our generalized framework of tree models, called decision transformers.
no code implementations • 28 Dec 2020 • Jinhong Jung, Jaemin Yoo, U Kang
In this paper, we propose Signed Graph Diffusion Network (SGDNet), a novel graph neural network that achieves end-to-end node representation learning for link sign prediction in signed social graphs.
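A minimal sketch of sign-aware propagation under balance theory (positive edges keep a score's sign, negative edges flip it) conveys the underlying idea; it is illustrative only, not SGDNet's architecture.

```python
import numpy as np

def signed_diffusion(A_signed, seeds, alpha=0.85, iters=30):
    """Diffuse seed scores over a signed adjacency matrix.

    Positive entries propagate scores as-is, negative entries flip
    them, following balance theory. Toy sketch, not SGDNet.
    """
    deg = np.abs(A_signed).sum(1).clip(min=1)
    P = A_signed / deg[:, None]              # signed transition matrix
    x = seeds.astype(float)
    for _ in range(iters):
        x = alpha * (P.T @ x) + (1 - alpha) * seeds
    return x

A = np.array([[0, 1, -1],
              [1, 0, 1],
              [-1, 1, 0]])
print(signed_diffusion(A, seeds=np.array([1.0, 0.0, 0.0])))
```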
1 code implementation • NeurIPS 2019 • Jaemin Yoo, Minyong Cho, Taebum Kim, U Kang
Knowledge distillation transfers the knowledge of a large neural network into a smaller one, and has been shown to be effective especially when the amount of training data is limited or the student model is very small.
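For reference, the standard distillation loss (Hinton et al.) combines a temperature-softened KL term against the teacher's predictions with cross-entropy on the hard labels; this is the generic recipe, not this paper's limited-data variant.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Standard knowledge-distillation objective (generic recipe)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),   # student's softened log-probs
        F.softmax(teacher_logits / T, dim=1),       # teacher's softened targets
        reduction="batchmean",
    ) * (T * T)                                     # rescale gradient magnitude
    hard = F.cross_entropy(student_logits, labels)  # supervised term
    return alpha * soft + (1 - alpha) * hard
```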
no code implementations • 7 Feb 2018 • Mauro Scanagatta, Giorgio Corani, Marco Zaffalon, Jaemin Yoo, U Kang
We present k-MAX, a novel anytime algorithm for this task that scales to thousands of variables.
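For context, the underlying task is choosing a parent set for each variable under a cardinality bound. The brute-force sketch below (with a placeholder score) only shows the task itself; k-MAX's contribution is an anytime search strategy that avoids this exhaustive enumeration.

```python
from itertools import combinations

def best_parent_set(var, candidates, score, k=2):
    """Exhaustively pick the best parent set of size <= k for `var`.

    `score(var, parent_set)` is a placeholder for a decomposable
    structure score (e.g., BIC); this brute force is illustration only.
    """
    best, best_s = frozenset(), score(var, frozenset())
    for r in range(1, k + 1):
        for ps in combinations(candidates, r):
            s = score(var, frozenset(ps))
            if s > best_s:
                best, best_s = frozenset(ps), s
    return best, best_s

toy_score = lambda var, parents: -abs(len(parents) - 1)  # placeholder score
print(best_parent_set("X3", ["X1", "X2", "X4"], toy_score, k=2))
```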