no code implementations • 19 Aug 2024 • Kun Wu, Jeongmin Brian Park, Xiaofan Zhang, Mert Hidayetoğlu, Vikram Sharma Mailthody, Sitao Huang, Steven Sam Lumetta, Wen-mei Hwu
Results demonstrate that TBA effectively reduces peak activation memory usage by 47%.
no code implementations • 21 Jul 2024 • Jeongmin Brian Park, Kun Wu, Vikram Sharma Mailthody, Zaid Qureshi, Scott Mahlke, Wen-mei Hwu
Graph Neural Networks (GNNs) are widely used today in recommendation systems, fraud detection, and node/link classification tasks.
1 code implementation • 28 Jun 2023 • Jeongmin Brian Park, Vikram Sharma Mailthody, Zaid Qureshi, Wen-mei Hwu
To address these issues, we propose the GPU Initiated Direct Storage Access (GIDS) dataloader, which enables GPU-oriented GNN training for large-scale graphs while efficiently utilizing all hardware resources (CPU memory, storage, and GPU memory) through a hybrid data placement strategy.
1 code implementation • 27 Feb 2023 • Arpandeep Khatua, Vikram Sharma Mailthody, Bhagyashree Taleka, Tengfei Ma, Xiang Song, Wen-mei Hwu
Most existing public datasets for GNNs are relatively small, which limits the ability of GNNs to generalize to unseen data.
1 code implementation • 28 Jul 2020 • Mert Hidayetoğlu, Carl Pearson, Vikram Sharma Mailthody, Eiman Ebrahimi, JinJun Xiong, Rakesh Nagi, Wen-mei Hwu
This paper presents GPU performance optimization and scaling results for inference models of the Sparse Deep Neural Network Challenge 2020.
1 code implementation • 18 Jun 2020 • Hyoungwook Nam, Seung Byum Seo, Vikram Sharma Mailthody, Noor Michael, Lan Li
The model inductively generalizes on a variety of algorithmic tasks where state-of-the-art Transformer models fail to do so.