Search Results for author: Mahmut Kandemir

Found 3 papers, 1 paper with code

GCN meets GPU: Decoupling “When to Sample” from “How to Sample”

no code implementations NeurIPS 2020 Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, Mahmut Kandemir

Sampling-based methods promise scalability improvements when paired with stochastic gradient descent in training Graph Convolutional Networks (GCNs).
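
As a point of reference, here is a minimal, hypothetical sketch of this setup: node-wise neighbor sampling feeding an SGD update for a single GCN-style layer. The toy graph, the fanout and batch sizes, and the logistic loss are all illustrative assumptions, not the paper's method; per the title, this only exercises the "how to sample" side, not the paper's question of when to re-sample across iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: fixed-degree adjacency lists, random features, binary labels.
# All sizes and names are illustrative, not taken from the paper.
num_nodes, feat_dim, fanout, batch_size, lr = 100, 16, 5, 10, 0.1
adj = [rng.choice(num_nodes, size=8, replace=False) for _ in range(num_nodes)]
X = rng.normal(size=(num_nodes, feat_dim))
y = rng.integers(0, 2, size=num_nodes)
W = rng.normal(scale=0.1, size=(feat_dim, 1))  # one GCN-style layer

for step in range(3):
    # "How to sample": draw a mini-batch of target nodes and, for each,
    # a fixed-size neighbor set (node-wise sampling).
    batch = rng.choice(num_nodes, size=batch_size, replace=False)
    h = np.stack([X[rng.choice(adj[v], size=fanout)].mean(axis=0) for v in batch])

    # Forward pass on the sampled sub-computation, then an SGD update
    # of the layer weights under a logistic loss.
    logits = h @ W
    p = 1.0 / (1.0 + np.exp(-logits[:, 0]))
    grad = h.T @ (p - y[batch])[:, None] / batch_size
    W -= lr * grad
```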

Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks

no code implementations 24 Jun 2020 Weilin Cong, Rana Forsati, Mahmut Kandemir, Mehrdad Mahdavi

In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method decomposes into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage; obtaining a faster convergence rate requires mitigating both types of variance.
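
Schematically, and with notation assumed here rather than taken from the paper, the decomposition can be written as:

```latex
% Schematic only: \tilde{g} is a sampled stochastic gradient,
% \nabla\mathcal{L} the full gradient; \Delta_{\mathrm{emb}} and
% \Delta_{\mathrm{sgd}} are placeholder symbols for the two contributions.
\mathbb{E}\big[\|\tilde{g} - \nabla\mathcal{L}\|^{2}\big]
  \;\lesssim\;
  \underbrace{\Delta_{\mathrm{emb}}}_{\text{embedding approximation variance (forward)}}
  \;+\;
  \underbrace{\Delta_{\mathrm{sgd}}}_{\text{stochastic gradient variance (backward)}}
```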

On a caching system with object sharing

1 code implementation 18 May 2019 George Kesidis, Nader Alfares, Xi Li, Bhuvan Urgaonkar, Mahmut Kandemir, Takis Konstantopoulos

We consider a content-caching system that is shared by a number of proxies.

Performance • Networking and Internet Architecture
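
As a rough illustration of the setting rather than the paper's model, the sketch below shows several proxies reading through one shared LRU cache, so an object fetched on behalf of one proxy can later be a hit for another. The class name, capacity, and fetch callback are all assumptions made for this example.

```python
from collections import OrderedDict

class SharedLRUCache:
    """One cache shared by many proxies; the API here is an assumption."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()  # object_id -> payload, in LRU order

    def get(self, object_id, fetch):
        if object_id in self.store:          # shared hit: any proxy benefits
            self.store.move_to_end(object_id)
            return self.store[object_id]
        payload = fetch(object_id)           # miss: fetch from origin
        self.store[object_id] = payload
        if len(self.store) > self.capacity:  # evict least-recently-used
            self.store.popitem(last=False)
        return payload

cache = SharedLRUCache(capacity=2)
for proxy, obj in [("p1", "a"), ("p2", "a"), ("p1", "b"), ("p3", "c")]:
    cache.get(obj, fetch=lambda oid: f"payload:{oid}")  # "a" hits for p2
```

Sharing raises the effective hit rate because a popular object is cached once for all proxies instead of once per proxy.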
