no code implementations • 24 Mar 2024 • Dongqi Fu, Zhigang Hua, Yan Xie, Jin Fang, Si Zhang, Kaan Sancak, Hao Wu, Andrey Malevich, Jingrui He, Bo Long
Therefore, mini-batch training for graph transformers is a promising direction, but the limited number of nodes sampled into each mini-batch cannot support effective dense attention for encoding informative representations.
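The limitation described above can be seen in a minimal sketch of dense (all-pairs) self-attention restricted to a mini-batch of node features: each node can only attend to the other nodes that happen to be sampled into the same batch, and the cost is quadratic in the batch size. This is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

def dense_attention(H):
    """Dense scaled dot-product self-attention among the b nodes of a
    mini-batch (features H of shape (b, d)): every node attends to every
    other node in the batch, at O(b^2) cost. With a small b, each node
    only ever sees the few neighbors sampled alongside it."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ H

# Example: with identical node features, attention weights are uniform
# and the output equals the input.
out = dense_attention(np.ones((4, 3)))
```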
1 code implementation • 17 Oct 2021 • Muhammed Fatih Balin, Kaan Sancak, Ümit V. Çatalyürek
Full batch training of Graph Convolutional Network (GCN) models is not feasible on a single GPU for large graphs containing tens of millions of vertices or more.
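A common way around the full-batch memory wall is mini-batch training with neighbor sampling, where each GCN layer only touches a small sampled subgraph rather than all tens of millions of vertices. The sketch below is a generic GraphSAGE-style illustration of that idea under simplifying assumptions (mean aggregation, adjacency lists in a dict); it is not the sampling scheme proposed in the paper.

```python
import numpy as np

def sample_subgraph(adj_lists, seeds, fanout, rng):
    """Sample up to `fanout` neighbors per seed node, so one graph
    convolution layer touches only a small subgraph instead of the
    full (GPU-memory-exceeding) graph."""
    sampled = {}
    for s in seeds:
        nbrs = adj_lists[s]
        if len(nbrs) > fanout:
            nbrs = rng.choice(nbrs, size=fanout, replace=False)
        sampled[s] = list(nbrs)
    return sampled

def mean_aggregate(X, sampled):
    """Mean-of-sampled-neighbors aggregation for the seed nodes only,
    using the node feature matrix X of shape (n, d)."""
    return np.stack([
        X[nbrs].mean(axis=0) if nbrs else np.zeros(X.shape[1])
        for nbrs in sampled.values()
    ])

# Example on a 3-node toy graph with one-hot features.
adj = {0: [1, 2], 1: [0], 2: [0]}
X = np.eye(3)
sub = sample_subgraph(adj, seeds=[0], fanout=2, rng=np.random.default_rng(0))
agg = mean_aggregate(X, sub)  # node 0 gets the mean of its neighbors' features
```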
1 code implementation • 16 Sep 2020 • Abdurrahman Yaşar, Muhammed Fatih Balin, Xiaojing An, Kaan Sancak, Ümit V. Çatalyürek
More specifically, in this work, we address the problem of symmetric rectilinear partitioning of a square matrix.
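In symmetric rectilinear partitioning, the same set of cut points is applied to both the rows and the columns of a square matrix, and one typically wants to minimize the maximum nonzero load over the resulting blocks. A brute-force sketch of that objective, under assumed helper names (`max_block_load`, `best_symmetric_cuts`) that are not from the paper:

```python
import itertools
import numpy as np

def max_block_load(A, cuts):
    """Maximum number of nonzeros in any block induced by applying the
    same cut points to both rows and columns of square matrix A
    (the symmetric rectilinear constraint)."""
    bounds = [0] + list(cuts) + [A.shape[0]]
    return max(
        np.count_nonzero(A[bounds[i]:bounds[i + 1], bounds[j]:bounds[j + 1]])
        for i in range(len(bounds) - 1)
        for j in range(len(bounds) - 1)
    )

def best_symmetric_cuts(A, k):
    """Brute-force the k-1 shared cut points that minimize the max block
    load; exponential in k, so only viable for tiny illustrative inputs."""
    n = A.shape[0]
    return min(itertools.combinations(range(1, n), k - 1),
               key=lambda cuts: max_block_load(A, cuts))

# Example: a 4x4 block-diagonal-ish matrix partitioned into 2x2 blocks.
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
cuts = best_symmetric_cuts(A, 2)  # one shared row/column cut
```

Real partitioners replace the brute-force search with heuristics or dynamic programming, since enumerating all cut combinations is intractable at scale.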
Data Structures and Algorithms