no code implementations • 17 Feb 2024 • Fedor Borisyuk, Shihai He, Yunbo Ouyang, Morteza Ramezani, Peng Du, Xiaochen Hou, Chengming Jiang, Nitin Pasumarthy, Priya Bannur, Birjodh Tiwana, Ping Liu, Siddharth Dangi, Daqi Sun, Zhoutao Pei, Xiao Shi, Sirou Zhu, Qianqi Shen, Kuang-Hsuan Lee, David Stein, Baolei Li, Haichao Wei, Amol Ghoting, Souvik Ghosh
In this paper, we present LiGNN, a deployed, large-scale Graph Neural Network (GNN) framework.
no code implementations • ICLR 2022 • Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Mahmut T. Kandemir, Anand Sivasubramaniam
To mitigate this performance degradation, we propose applying Global Server Corrections on the server to refine the locally learned models.
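The excerpt only names the mechanism; as a rough, hypothetical sketch (not the paper's algorithm), a server-side correction can be pictured as FedAvg-style parameter averaging of locally trained models followed by a few refinement steps on data held at the server. All names below (`average_state_dicts`, `server_correction`, `server_data`) are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch: average locally trained models, then refine on the
# server. Assumes all state-dict entries are float tensors.
import copy

import torch
import torch.nn.functional as F


def average_state_dicts(state_dicts):
    """Parameter-wise average of locally trained models (FedAvg-style)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg


def server_correction(model, server_data, steps=5, lr=1e-3):
    """Refine the aggregated model with a few gradient steps on server data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in server_data:  # server_data: iterable of (input, label) batches
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()
    return model
```

The design intuition is that local training alone drifts toward each client's subgraph, so a small amount of centralized refinement can correct the aggregate without re-running full distributed training.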
1 code implementation • NeurIPS 2021 • Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
Graph Convolutional Networks (GCNs) are known to suffer from performance degradation as the number of layers increases, which is usually attributed to over-smoothing.
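This degradation is easy to reproduce in a toy setting: repeatedly applying the symmetrically normalized propagation matrix $D^{-1/2}(A+I)D^{-1/2}$, as a linear GCN layer does (weights and nonlinearities omitted here), pulls all node representations toward one another. The random graph below is purely illustrative.

```python
# Toy illustration of over-smoothing: deeper propagation makes node
# representations increasingly similar to each other.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)                       # undirected toy graph
np.fill_diagonal(A, 0.0)
A_hat = A + np.eye(n)                        # add self-loops
d = A_hat.sum(axis=1)
P = A_hat / np.sqrt(np.outer(d, d))          # D^{-1/2} (A + I) D^{-1/2}

X = rng.standard_normal((n, 16))             # random initial features
for layer in (1, 2, 4, 8, 16, 32):
    H = np.linalg.matrix_power(P, layer) @ X
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    sim = (Hn @ Hn.T)[np.triu_indices(n, k=1)].mean()
    print(f"{layer:2d} layers: mean pairwise cosine similarity = {sim:.3f}")
```

Running this shows the mean pairwise similarity climbing toward 1 with depth, the signature usually labeled over-smoothing.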
1 code implementation • 3 Mar 2021 • Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
In this paper, we describe and analyze a general doubly variance reduction scheme that can accelerate any sampling method under a given memory budget.
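As a hedged illustration of the control-variate idea behind variance-reduced neighbor aggregation (a sketch in the spirit of historical-embedding methods, not the paper's exact doubly variance reduction scheme), one can combine an exact aggregate of stale embeddings with a sampled correction term. The function `vr_aggregate` and its arguments are hypothetical names.

```python
# Control-variate sketch: exact-but-stale aggregate + sampled correction.
import numpy as np

rng = np.random.default_rng(0)


def vr_aggregate(neighbors, H, H_hist, sample_size):
    """Estimate mean_{u in N(v)} H[u] with a control variate.

    neighbors: index array of v's neighbors
    H:         current embeddings (expensive to fetch for all neighbors)
    H_hist:    historical embeddings stored from earlier steps (cheap)
    """
    full_hist = H_hist[neighbors].mean(axis=0)   # exact, but stale
    k = min(sample_size, len(neighbors))
    sampled = rng.choice(neighbors, size=k, replace=False)
    correction = (H[sampled] - H_hist[sampled]).mean(axis=0)
    return full_hist + correction                # unbiased estimate
```

The estimator stays unbiased, and its variance depends only on how far the current embeddings have drifted from the stored ones rather than on the raw spread of neighbor features, which is what lets small samples suffice under a memory budget.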
no code implementations • NeurIPS 2020 • Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, Mahmut Kandemir
Sampling-based methods promise scalability improvements when paired with stochastic gradient descent in training Graph Convolutional Networks (GCNs).
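Concretely, sampling-based training replaces full-neighborhood aggregation with a fixed-fanout sample per mini-batch before each SGD step. The following minimal sketch uses illustrative names (`sample_neighbors`, `train_step`) and a one-hop mean aggregator; it is an assumption-laden sketch of the general technique, not the paper's method.

```python
# Minimal sketch of sampling-based mini-batch GCN-style training.
import numpy as np
import torch
import torch.nn.functional as F


def sample_neighbors(adj_list, batch, fanout, rng):
    """Pick up to `fanout` neighbors per target node."""
    return {v: rng.choice(adj_list[v],
                          size=min(fanout, len(adj_list[v])),
                          replace=False)
            for v in batch}


def train_step(model, opt, X, y, adj_list, batch, fanout=10, rng=None):
    rng = rng or np.random.default_rng()
    nbrs = sample_neighbors(adj_list, batch, fanout, rng)
    # One-hop sampled mean aggregation, then the model's classifier.
    agg = torch.stack([X[torch.as_tensor(nbrs[v])].mean(dim=0) for v in batch])
    opt.zero_grad()
    loss = F.cross_entropy(model(agg), y[batch])
    loss.backward()
    opt.step()
    return loss.item()
```

Capping the fanout bounds the per-batch compute and memory regardless of a node's true degree, which is the source of the promised scalability; the cost is sampling variance in the aggregated features, which motivates the variance-reduction work above.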