Search Results for author: Zhenkun Cai

Found 7 papers, 3 papers with code

4DBInfer: A 4D Benchmarking Toolbox for Graph-Centric Predictive Modeling on Relational DBs

1 code implementation • 28 Apr 2024 • Minjie Wang, Quan Gan, David Wipf, Zhenkun Cai, Ning Li, Jianheng Tang, Yanlin Zhang, Zizhao Zhang, Zunyao Mao, Yakun Song, Yanbo Wang, Jiahang Li, Han Zhang, Guang Yang, Xiao Qin, Chuan Lei, Muhan Zhang, Weinan Zhang, Christos Faloutsos, Zheng Zhang

Although RDBs store vast amounts of rich, informative data spread across interconnected tables, the progress of predictive machine learning models as applied to such tasks arguably falls well behind advances in other domains such as computer vision or natural language processing.

Benchmarking
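
The core idea the abstract gestures at is turning a relational database into a graph so that GNN-style models can exploit cross-table structure. Below is a minimal sketch of that general recipe, not the 4DBInfer API: rows become nodes and foreign-key references become edges of a heterogeneous graph. The table names, columns, and data are hypothetical.

```python
# Toy relational data; table and column names here are made up.
users = [
    {"user_id": 0, "age": 31},
    {"user_id": 1, "age": 24},
]
purchases = [
    {"purchase_id": 0, "user_id": 0, "amount": 9.99},   # user_id is a foreign key
    {"purchase_id": 1, "user_id": 1, "amount": 4.50},
    {"purchase_id": 2, "user_id": 0, "amount": 15.00},
]

# Heterogeneous graph: one node type per table, one edge type per FK column.
nodes = {
    "user": [row["user_id"] for row in users],
    "purchase": [row["purchase_id"] for row in purchases],
}
edges = {
    ("purchase", "made_by", "user"):
        [(row["purchase_id"], row["user_id"]) for row in purchases],
}

print(nodes)
print(edges)
```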

MuseGNN: Interpretable and Convergent Graph Neural Network Layers at Scale

no code implementations • 19 Oct 2023 • Haitian Jiang, Renjie Liu, Xiao Yan, Zhenkun Cai, Minjie Wang, David Wipf

Among the many variants of graph neural network (GNN) architectures capable of modeling data with cross-instance relations, an important subclass involves layers designed such that the forward pass iteratively reduces a graph-regularized energy function of interest.

Node Classification
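
To make the "forward pass as energy descent" idea concrete, here is a minimal NumPy sketch, not MuseGNN itself: each layer takes one gradient step on a graph-regularized energy E(Y) = ||Y - X||^2 + lam * tr(Y^T L Y), so stacking layers monotonically reduces the energy. All sizes and constants are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, step = 5, 3, 0.5, 0.05

A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric adjacency
L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
X = rng.standard_normal((n, d))                # input node features

def energy(Y):
    # Fidelity to the input features plus smoothness over graph edges.
    return np.sum((Y - X) ** 2) + lam * np.trace(Y.T @ L @ Y)

def layer(Y):
    grad = 2 * (Y - X) + 2 * lam * (L @ Y)     # dE/dY
    return Y - step * grad                     # one descent step = one GNN layer

Y = X.copy()
for k in range(10):
    Y = layer(Y)
    print(f"layer {k}: E = {energy(Y):.4f}")   # value decreases layer by layer
```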

DGI: Easy and Efficient Inference for GNNs

no code implementations • 28 Nov 2022 • Peiqi Yin, Xiao Yan, Jinjing Zhou, Qiang Fu, Zhenkun Cai, James Cheng, Bo Tang, Minjie Wang

In this paper, we develop Deep Graph Inference (DGI) -- a system for easy and efficient GNN model inference, which automatically translates the training code of a GNN model for layer-wise execution.
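
The layer-wise execution pattern that DGI targets can be sketched in a few lines of NumPy (this is not DGI's code): rather than expanding a multi-hop neighborhood for every target node, each layer's embeddings are computed once for all nodes and reused by the next layer. Shapes and weights below are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, num_layers = 6, 4, 2

A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 1.0)                       # add self-loops
A_hat = A / A.sum(axis=1, keepdims=True)       # row-normalized (mean) aggregation
Ws = [0.1 * rng.standard_normal((d, d)) for _ in range(num_layers)]

H = rng.standard_normal((n, d))                # layer-0 features for ALL nodes
for W in Ws:
    # One full-graph pass per layer; layer l+1 reuses every layer-l embedding
    # instead of recomputing it inside per-node neighborhood expansions.
    H = np.maximum(A_hat @ H @ W, 0.0)         # mean-aggregate, transform, ReLU
print(H.shape)                                 # (6, 4): final node embeddings
```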

DGCL: an efficient communication library for distributed GNN training

1 code implementation • Proceedings of the Sixteenth European Conference on Computer Systems, 2021 • Zhenkun Cai, Xiao Yan, Yidi Wu, Kaihao Ma, James Cheng, Fan Yu

Graph neural networks (GNNs) have gained increasing popularity in many areas such as e-commerce, social networks and bioinformatics.

Elastic Deep Learning in Multi-Tenant GPU Clusters

no code implementations • IEEE Transactions on Parallel and Distributed Systems, 2021 • Yidi Wu, Kaihao Ma, Xiao Yan, Zhi Liu, Zhenkun Cai, Yuzhen Huang, James Cheng, Han Yuan, Fan Yu

We study how to support elasticity, that is, the ability to dynamically adjust the parallelism (i.e., the number of GPUs), for deep neural network (DNN) training in a GPU cluster.

Management, Scheduling
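
A minimal sketch of the elasticity idea, not the paper's system: when the scheduler grants or reclaims GPUs, the job keeps its global batch size fixed and rebalances the per-GPU share. The function and the numbers below are hypothetical.

```python
def rebalance(global_batch: int, num_gpus: int) -> list[int]:
    # Split a fixed global batch as evenly as possible across the GPUs
    # currently allocated to the job.
    base, extra = divmod(global_batch, num_gpus)
    return [base + (1 if i < extra else 0) for i in range(num_gpus)]

for num_gpus in (4, 6, 3):   # scheduler grants, then reclaims, GPUs
    print(f"{num_gpus} GPUs -> per-GPU batches {rebalance(1024, num_gpus)}")
```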

TensorOpt: Exploring the Tradeoffs in Distributed DNN Training with Auto-Parallelism

1 code implementation • 16 Apr 2020 • Zhenkun Cai, Kaihao Ma, Xiao Yan, Yidi Wu, Yuzhen Huang, James Cheng, Teng Su, Fan Yu

A good parallelization strategy can significantly improve the efficiency or reduce the cost for the distributed training of deep neural networks (DNNs).
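
The tradeoff being explored can be sketched as a search over candidate parallelization strategies under a per-device memory budget. This is not TensorOpt's actual algorithm or cost model; the strategy names and cost numbers are invented for illustration.

```python
# Hypothetical strategies with made-up (step time, per-device memory) estimates.
strategies = {
    "data-parallel x8":    {"time_ms": 120, "mem_gb": 30},
    "tensor-parallel x8":  {"time_ms": 150, "mem_gb": 12},
    "data x4 + tensor x2": {"time_ms": 130, "mem_gb": 18},
}

def pick(budget_gb: float) -> str:
    # Fastest strategy whose per-device memory fits the budget.
    feasible = {k: v for k, v in strategies.items() if v["mem_gb"] <= budget_gb}
    return min(feasible, key=lambda k: feasible[k]["time_ms"])

print(pick(budget_gb=32))   # memory is plentiful -> fastest strategy wins
print(pick(budget_gb=16))   # tight memory -> trade speed for a smaller footprint
```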
