no code implementations • 22 Apr 2024 • Jin-Duk Park, Yong-Min Shin, Won-Yong Shin
In this paper, we propose Turbo-CF, a graph filtering (GF)-based collaborative filtering (CF) method that is both training-free and matrix decomposition-free.
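As a rough illustration of what a training-free, decomposition-free GF-based CF pipeline can look like (a minimal sketch under my own assumptions, not Turbo-CF's actual filter design), one can normalize the user-item interaction matrix, build an item-item similarity graph, and apply a simple polynomial graph filter to score items:

```python
import numpy as np

# Hypothetical toy user-item interaction matrix (3 users x 4 items).
R = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)

# Symmetric degree normalization, a common step in graph-filtering CF.
d_u = np.maximum(R.sum(axis=1, keepdims=True), 1.0)
d_i = np.maximum(R.sum(axis=0, keepdims=True), 1.0)
R_norm = R / np.sqrt(d_u) / np.sqrt(d_i)

# Item-item graph filter built directly from the data:
# no model training and no matrix decomposition (e.g., no SVD) required.
P = R_norm.T @ R_norm

# Apply an illustrative 1st- plus 2nd-order polynomial low-pass filter
# and produce a recommendation score for every user-item pair.
scores = R_norm @ (P + P @ P)
print(scores.shape)  # (3, 4)
```

The polynomial degree and normalization here are arbitrary choices for the sketch; the point is that everything is computed in closed form from the interactions.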
no code implementations • 29 Nov 2023 • Yong-Min Shin, Won-Yong Shin
Although this can be achieved by applying the inverse propagation $\Pi^{-1}$ before distillation from the teacher, it still comes with a high computational cost from large matrix multiplications during training.
no code implementations • 20 Nov 2023 • Yong-Min Shin, Won-Yong Shin
Although this can be achieved by applying the inverse propagation $\Pi^{-1}$ before distillation from the teacher GNN, it still comes with a high computational cost from large matrix multiplications during training.
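To make the cost concern in the two entries above concrete: "inverse propagation" means undoing a propagation matrix $\Pi$ applied to node representations, which amounts to solving a dense linear system in the number of nodes. A minimal sketch (with a random stand-in for $\Pi$, not the papers' actual operator):

```python
import numpy as np

n, d = 500, 16  # hypothetical node count and representation size

# Hypothetical, well-conditioned stand-in for the propagation matrix Pi
# (in practice Pi would come from, e.g., a normalized adjacency matrix).
rng = np.random.default_rng(0)
Pi = np.eye(n) + 0.01 * rng.standard_normal((n, n))

H_teacher = rng.standard_normal((n, d))  # teacher GNN representations

# "Inverse propagation" before distillation: solve Pi @ X = H_teacher
# rather than forming Pi^{-1} explicitly. Either way the cost is cubic
# in the node count -- the large matrix multiplications the abstracts
# identify as the computational bottleneck during training.
X = np.linalg.solve(Pi, H_teacher)
print(np.allclose(Pi @ X, H_teacher))  # True
```

For graphs with millions of nodes, this per-epoch cost is what motivates avoiding explicit inverse propagation.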
1 code implementation • 31 Oct 2022 • Yong-Min Shin, Sun-Woo Kim, Won-Yong Shin
While graph neural networks (GNNs) have attracted significant attention as a powerful framework for graph representation learning, there has been an increasing demand for explaining GNN models.
1 code implementation • 28 Jan 2022 • Kyeong-Joong Jeong, Yong-Min Shin
Detecting anomalies in multivariate time-series data is essential in many real-world applications.
1 code implementation • 12 Apr 2021 • Yong-Min Shin, Cong Tran, Won-Yong Shin, Xin Cao
We study the problem of embedding edgeless nodes, such as users who newly enter the underlying network, using graph neural networks (GNNs), which have been widely studied for effective graph representation learning.
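The difficulty is that standard GNNs aggregate over a node's edges, which an edgeless newcomer does not have. One hedged workaround (a sketch under my own assumptions, not necessarily this paper's method) is to link the new node to its nearest neighbors in attribute space and aggregate their existing embeddings:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 8))   # attributes of existing, connected nodes
Z = rng.standard_normal((20, 4))   # their learned embeddings (hypothetical)
x_new = rng.standard_normal(8)     # a newly joined node with no edges

# Build a proxy neighborhood from attribute similarity: connect the
# edgeless node to its k nearest neighbors in feature space, then
# average their embeddings as its inductive representation.
k = 3
dists = np.linalg.norm(X - x_new, axis=1)
nbrs = np.argsort(dists)[:k]
z_new = Z[nbrs].mean(axis=0)
print(z_new.shape)  # (4,)
```

Mean aggregation over a k-NN proxy neighborhood is only one of several plausible choices; any permutation-invariant aggregator would fit the same scheme.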