no code implementations • 26 May 2024 • Fanchen Bu, Ruochen Yang, Paul Bogdan, Kijung Shin
Desirable random graph models (RGMs) should (i) be tractable so that we can compute and control graph statistics, and (ii) generate realistic structures such as high clustering (i.e., high subgraph densities).
2 code implementations • 14 May 2024 • Fanchen Bu, Hyeonsoo Jo, Soo Yong Lee, Sungsoo Ahn, Kijung Shin
Then, for various conditions commonly involved in different CO problems, we derive nontrivial objectives and derandomization schemes to meet the targets.
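The abstract mentions derandomization for combinatorial optimization (CO). As a generic illustration of the idea, not the paper's actual method, here is classic conditional-expectation derandomization for MAX-CUT: start from the uniform random cut (each edge is cut with probability 1/2) and fix nodes one at a time, never letting the expected cut value decrease.

```python
import numpy as np

def derandomized_max_cut(adj):
    """Greedy conditional-expectation derandomization for MAX-CUT:
    fix each node's side so the expected cut never decreases."""
    n = adj.shape[0]
    side = np.full(n, -1)  # -1 = undecided, 0/1 = fixed side
    for v in range(n):
        gain = {0: 0.0, 1: 0.0}
        for s in (0, 1):
            for u in range(n):
                if adj[v, u] and u != v:
                    if side[u] == -1:
                        gain[s] += 0.5   # undecided neighbor: cut w.p. 1/2
                    elif side[u] != s:
                        gain[s] += 1.0   # fixed on the other side: cut
        side[v] = 0 if gain[0] >= gain[1] else 1
    cut = sum(adj[u, v] for u in range(n) for v in range(u + 1, n)
              if side[u] != side[v])
    return side, cut

# triangle: the optimum separates one node from the other two
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
side, cut = derandomized_max_cut(tri)
print(cut)  # 2, the optimal cut for a triangle
```

The guarantee is that the final cut is at least the initial expectation, i.e., at least half the edges.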
1 code implementation • 31 Mar 2024 • Sunwoo Kim, Shinhwan Kang, Fanchen Bu, Soo Yong Lee, Jaemin Yoo, Kijung Shin
Based on the generative SSL task, we propose a hypergraph SSL method, HypeBoy.
no code implementations • 7 Feb 2024 • Soo Yong Lee, Sunwoo Kim, Fanchen Bu, Jaemin Yoo, Jiliang Tang, Kijung Shin
Second, how does A-X dependence affect GNNs?
1 code implementation • 1 Nov 2023 • Hyeonsoo Jo, Fanchen Bu, Kijung Shin
We add a learnable weight to each node pair, and MetaGC adaptively adjusts these weights via meta-weighting, increasing the weights of meaningful node pairs and decreasing those of less-meaningful ones (e.g., noise edges).
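As a toy sketch of the effect described (not MetaGC's actual meta-weighting algorithm), the following downweights edges whose endpoints disagree with a tentative clustering, so a "noise" edge between clusters loses its weight while intra-cluster edges keep theirs:

```python
import numpy as np

def reweight_edges(adj, labels, weights, lr=0.5):
    """One illustrative weight-update step: intra-cluster edges gain
    weight, cross-cluster (likely-noise) edges lose it."""
    agree = (labels[:, None] == labels[None, :]).astype(float)
    grad = 2 * agree - 1                       # +1 intra-cluster, -1 cross
    return np.clip(weights + lr * grad * adj, 0.0, 1.0)

# two cliques {0,1} and {2,3} joined by one noise edge (1,2)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], float)
labels = np.array([0, 0, 1, 1])  # tentative cluster assignment
w = adj.copy()                   # initialize weights to 1 on existing edges
for _ in range(2):
    w = reweight_edges(adj, labels, w)
print(w[1, 2], w[0, 1])  # 0.0 1.0 — noise edge suppressed, clean edge kept
```

In the actual method the weight update is driven by a meta-learning objective rather than fixed cluster labels; this sketch only shows the intended outcome.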
3 code implementations • 29 Jun 2023 • Federico Berto, Chuanbo Hua, Junyoung Park, Laurin Luttmann, Yining Ma, Fanchen Bu, Jiarui Wang, Haoran Ye, Minsu Kim, Sanghyeok Choi, Nayeli Gast Zepeda, André Hottung, Jianan Zhou, Jieyi Bi, Yu Hu, Fei Liu, Hyeonah Kim, Jiwoo Son, Haeyeon Kim, Davide Angioni, Wouter Kool, Zhiguang Cao, Qingfu Zhang, Joungho Kim, Jie Zhang, Kijung Shin, Cathy Wu, Sungsoo Ahn, Guojie Song, Changhyun Kwon, Kevin Tierney, Lin Xie, Jinkyoo Park
To fill this gap, we introduce RL4CO, a unified and extensive benchmark with in-depth library coverage of 23 state-of-the-art methods and more than 20 CO problems.
1 code implementation • 4 Jun 2023 • Soo Yong Lee, Fanchen Bu, Jaemin Yoo, Kijung Shin
AERO-GNN provably mitigates the identified problems of deep graph attention, which is further demonstrated empirically by (a) its adaptive and less smooth attention functions and (b) higher performance at deep layers (up to 64).
1 code implementation • 12 May 2022 • Fanchen Bu, Dong Eui Chang
In particular, inspired by a numerical integration method on manifolds called Feedback Integrators, we propose to instantiate it on the tangent bundle of the Stiefel manifold for the first time.
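The general idea behind feedback integrators is to integrate the dynamics in the ambient Euclidean space while adding a feedback term that attracts trajectories back to the constraint manifold. A minimal sketch of that idea for the Stiefel manifold St(n, p) = {X : XᵀX = I}, using the penalty V(X) = ¼‖XᵀX − I‖²_F (this is an illustration of the general technique, not the paper's specific construction on the tangent bundle):

```python
import numpy as np

def feedback_euler_step(X, f, k=20.0, h=0.005):
    """One Euler step of dX/dt = f(X) - k * grad V(X), where
    V(X) = 0.25 * ||X^T X - I||_F^2; the -k*grad V term pulls
    trajectories back toward the Stiefel manifold."""
    p = X.shape[1]
    grad_V = X @ (X.T @ X - np.eye(p))  # gradient of the penalty
    return X + h * (f(X) - k * grad_V)

rng = np.random.default_rng(0)
n, p = 5, 2
X, _ = np.linalg.qr(rng.standard_normal((n, p)))  # start on St(5, 2)
A = rng.standard_normal((n, n))
A = A - A.T                   # skew-symmetric, so f(X) = AX is tangent
f = lambda X: A @ X
for _ in range(1000):
    X = feedback_euler_step(X, f)
err = np.linalg.norm(X.T @ X - np.eye(p))
print(err)  # remains small: the feedback term cancels integration drift
```

Without the feedback term, plain Euler integration steadily drifts off the manifold; with it, the orthogonality error stays bounded near zero.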
no code implementations • 8 Jul 2020 • Fanchen Bu, Dong Eui Chang
Experience replay enables online reinforcement learning agents to store and reuse the previous experiences of interacting with the environment.
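A standard experience replay buffer (the textbook construction, not this paper's specific contribution) stores transitions and samples uniform random minibatches, breaking the temporal correlation of consecutive experiences:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay: old transitions are evicted
    when the deque is full; sampling is uniform without replacement."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        return list(zip(*batch))  # columns: states, actions, rewards, ...

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for t in range(50):  # dummy transitions
    buf.push(t, t % 4, 1.0, t + 1, False)
states, actions, rewards, next_states, dones = buf.sample(8)
print(len(states))  # 8
```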