no code implementations • NeurIPS 2020 • Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, Mahmut Kandemir
Sampling-based methods promise scalability improvements when paired with stochastic gradient descent in training Graph Convolutional Networks (GCNs).
no code implementations • 24 Jun 2020 • Weilin Cong, Rana Forsati, Mahmut Kandemir, Mehrdad Mahdavi
In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method decomposes into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage; mitigating both types of variance is necessary to obtain a faster convergence rate.
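A minimal NumPy sketch of the two variance sources named above, on a toy one-layer linear GCN (this toy model, the neighbor-sampling estimator, and all names here are illustrative assumptions, not the paper's actual setup): the forward stage incurs embedding approximation variance from sampled neighbor aggregation, and the backward stage incurs stochastic gradient variance from mini-batching.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer linear GCN: H = A_hat @ X @ W (illustrative, not the paper's model)
n, d = 100, 8
X = rng.normal(size=(n, d))                      # node features
A = (rng.random((n, n)) < 0.1).astype(float)     # random adjacency
A_hat = A / np.maximum(A.sum(1, keepdims=True), 1)  # row-normalized aggregation
W = rng.normal(size=(d, d))

full_emb = A_hat @ X @ W  # exact forward pass

def sampled_emb(k=5):
    """Neighbor-sampled aggregation: average k uniformly sampled neighbors
    per node, an unbiased estimate of each row of A_hat @ X."""
    H = np.zeros((n, d))
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        if len(nbrs):
            H[i] = X[rng.choice(nbrs, size=k, replace=True)].mean(0)
    return H @ W

# Forward stage: embedding approximation variance = spread of the
# sampled embedding around the exact one, averaged over repeats.
emb_var = np.mean([(sampled_emb() - full_emb) ** 2 for _ in range(20)])

# Backward stage: stochastic gradient variance = spread of mini-batch
# gradients of a squared loss around the full-batch gradient w.r.t. W.
y = rng.normal(size=(n, d))
full_grad = (A_hat @ X).T @ (full_emb - y) / n

def minibatch_grad(b=10):
    idx = rng.choice(n, size=b, replace=False)
    err = (A_hat @ X @ W - y)[idx]
    return (A_hat @ X)[idx].T @ err / b

grad_var = np.mean([(minibatch_grad() - full_grad) ** 2 for _ in range(20)])

print(emb_var > 0 and grad_var > 0)
```

Both quantities are strictly positive here, illustrating why a method that controls only one of the two (e.g. gradient variance reduction alone) leaves the other as a bottleneck for the convergence rate.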
1 code implementation • 18 May 2019 • George Kesidis, Nader Alfares, Xi Li, Bhuvan Urgaonkar, Mahmut Kandemir, Takis Konstantopoulos
We consider a content-caching system that is shared by a number of proxies.
Performance · Networking and Internet Architecture