1 code implementation • 29 Jun 2023 • Hanqiu Chen, Hang Yang, Stephen Fitzmeyer, Cong Hao
Our methodology involves storing the whole dataset directly in INR format on a GPU, mitigating the significant data communication overhead between the CPU and GPU during training.
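The idea above (keeping the dataset as a compact implicit neural representation, or INR, resident in device memory and decoding training samples on demand) can be sketched as follows. This is a minimal CPU-side NumPy illustration, not the paper's implementation: the layer sizes, the sinusoidal activation, and the `inr_query` helper are all hypothetical, and in the actual pipeline the small weight tensors would live on the GPU so no raw data crosses the CPU-GPU boundary during training.

```python
import numpy as np

# Minimal implicit neural representation (INR): a tiny MLP mapping
# normalized (x, y) coordinates to a pixel value. Only these small
# weight matrices need to stay resident in device memory; training
# batches are reconstructed on-device by querying coordinates,
# instead of streaming raw samples from the CPU each step.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 64)), np.zeros(64)   # hidden layer (hypothetical size)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)    # output layer

def inr_query(coords):
    """Decode values at the given (N, 2) coordinate array."""
    h = np.sin(coords @ W1 + b1)   # sinusoidal activation, common in INRs
    return h @ W2 + b2             # (N, 1) reconstructed values

# Reconstruct a 4x4 patch entirely from the compact INR weights.
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (16, 2)
patch = inr_query(coords).reshape(4, 4)
print(patch.shape)  # (4, 4)
```

Because the INR is far smaller than the raw data it encodes, the whole dataset can fit in GPU memory in this form, which is what removes the per-iteration CPU-to-GPU transfer.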
1 code implementation • 13 Apr 2023 • Hanqiu Chen, Cong Hao
The experimental results demonstrate that DGNN-Booster can achieve a speedup of up to 5.6x over the CPU baseline (6226R), 8.4x over the GPU baseline (A6000), and 2.1x over the FPGA baseline without the optimizations proposed in this paper.
no code implementations • 8 Oct 2022 • Hanqiu Chen, Yahya Alhinai, Yihan Jiang, Eunjee Na, Cong Hao
A variety of dynamic graph neural networks designed from algorithmic perspectives have succeeded in incorporating temporal information into graph processing.