Search Results for author: Youshan Miao

Found 10 papers, 2 papers with code

RPC Considered Harmful: Fast Distributed Deep Learning on RDMA

no code implementations · 22 May 2018 · Jilong Xue, Youshan Miao, Cheng Chen, Ming Wu, Lintao Zhang, Lidong Zhou

Its computation is typically characterized by a simple tensor data abstraction to model multi-dimensional matrices, a data-flow graph to model computation, and iterative executions with relatively frequent synchronizations, making it substantially different from Map/Reduce-style distributed big-data computation.
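The characterization above can be illustrated with a minimal sketch (plain Python, purely illustrative and not the paper's system): tensors as multi-dimensional arrays, a data-flow graph of operator nodes, and an iterative loop with a synchronization point each step.

```python
# Illustrative sketch only: a tiny data-flow graph over "tensors"
# (plain lists), executed iteratively, with a per-step point where
# workers would synchronize in a distributed setting.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # callable producing this node's tensor
        self.inputs = inputs  # upstream nodes in the data-flow graph

    def evaluate(self):
        return self.op(*[n.evaluate() for n in self.inputs])

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: [xi + yi for xi, yi in zip(x, y)], a, b)

def scale(a, s):
    return Node(lambda x: [xi * s for xi in x], a)

# Build and repeatedly evaluate a small graph: w <- (w + g) * 0.5.
w = [1.0, 2.0]
for step in range(3):
    graph = scale(add(constant(w), constant([0.1, 0.1])), 0.5)
    w = graph.evaluate()
    # ...gradient synchronization with other workers would happen here,
    # once per iteration, which is the "relatively frequent" pattern noted.
```

The point of the sketch is the shape of the workload, not the arithmetic: a small graph evaluated many times, with a synchronization on every iteration.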

Towards Efficient Large-Scale Graph Neural Network Computing

no code implementations · 19 Oct 2018 · Lingxiao Ma, Zhi Yang, Youshan Miao, Jilong Xue, Ming Wu, Lidong Zhou, Yafei Dai

This evolution has led to large graph-based irregular and sparse models that go beyond what existing deep learning frameworks are designed for.

graph partitioning · Knowledge Graphs

Architectural Implications of Graph Neural Networks

no code implementations · 2 Sep 2020 · Zhihui Zhang, Jingwen Leng, Lingxiao Ma, Youshan Miao, Chao Li, Minyi Guo

Graph neural networks (GNN) represent an emerging line of deep learning models that operate on graph structures.

CrossoverScheduler: Overlapping Multiple Distributed Training Applications in a Crossover Manner

no code implementations · 14 Mar 2021 · Cheng Luo, Lei Qu, Youshan Miao, Peng Cheng, Yongqiang Xiong

Distributed deep learning workloads include throughput-intensive training tasks on GPU clusters, where distributed Stochastic Gradient Descent (SGD) incurs significant communication delays after backward propagation, forcing workers to wait for gradient synchronization via a centralized parameter server or directly among decentralized workers.
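The synchronization bottleneck the abstract describes can be sketched in a few lines (a toy simulation under stated assumptions, not the paper's implementation): each worker computes a local gradient, and every step blocks on a centralized averaging point before the update proceeds.

```python
# Illustrative sketch: synchronous data-parallel SGD with a centralized
# "parameter server" that averages per-worker gradients. The averaging
# line is the synchronization point every worker must wait for.

def local_gradient(worker_data, w):
    # toy gradient of mean squared error for the model y = w * x
    return sum(2 * (w * x - y) * x for x, y in worker_data) / len(worker_data)

def parameter_server_step(w, per_worker_data, lr=0.1):
    grads = [local_gradient(data, w) for data in per_worker_data]
    avg = sum(grads) / len(grads)   # sync barrier: all workers must finish
    return w - lr * avg             # updated weight broadcast back to workers

# Two "workers", both holding data consistent with the optimum w* = 2.
workers = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = parameter_server_step(w, workers)
```

In a real GPU cluster the `avg` line is the network-bound step after backward propagation; schedulers like CrossoverScheduler aim to overlap this idle time across training applications.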

Image Classification

PaGraph: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning

no code implementations · Proceedings of the 11th ACM Symposium on Cloud Computing 2020 · Zhiqi Lin, Cheng Li, Youshan Miao, Yunxin Liu, Yinlong Xu

Emerging graph neural networks (GNNs) have extended the successes of deep learning techniques against datasets like images and texts to more complex graph-structured data.

Adam Accumulation to Reduce Memory Footprints of both Activations and Gradients for Large-scale DNN Training

no code implementations · 31 May 2023 · Yijia Zhang, Yibo Han, Shijie Cao, Guohao Dai, Youshan Miao, Ting Cao, Fan Yang, Ningyi Xu

We find that previous gradient accumulation reduces activation memory but is incompatible with gradient memory reduction, owing to a contradiction between preserving gradients (for accumulation across micro-batches) and releasing gradients (to free memory).
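The contradiction can be made concrete with a sketch of plain gradient accumulation (a toy illustration, not AdamA itself): splitting a batch into micro-batches shrinks peak activation memory, yet the accumulated gradient buffer must stay alive until the optimizer step, so gradient memory cannot be released early.

```python
# Illustrative sketch of vanilla gradient accumulation. Note that
# `grad_acc` must be preserved across every micro-batch and is only
# "released" at the optimizer update, while activations (here, each
# micro-batch's intermediate values) can be freed per micro-batch.

def grad_fn(x, w):
    # toy per-example gradient for the loss (w - x)^2
    return 2 * (w - x)

def accumulated_step(w, batch, micro_size, lr=0.1):
    grad_acc = 0.0                          # lives for the whole step
    for i in range(0, len(batch), micro_size):
        micro = batch[i:i + micro_size]     # "activations": per micro-batch only
        grad_acc += sum(grad_fn(x, w) for x in micro)
    return w - lr * grad_acc / len(batch)   # gradient released only here

batch = [1.0, 2.0, 3.0, 4.0]                # optimum is the mean, 2.5
w = 0.0
for _ in range(200):
    w = accumulated_step(w, batch, micro_size=2)
```

Techniques that free gradients eagerly (e.g. fusing the optimizer update into the backward pass) clash with this pattern, since accumulation needs the full buffer intact until the step completes.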

Tessel: Boosting Distributed Execution of Large DNN Models via Flexible Schedule Search

no code implementations · 26 Nov 2023 · Zhiqi Lin, Youshan Miao, Guanbin Xu, Cheng Li, Olli Saarikivi, Saeed Maleki, Fan Yang

This paper presents Tessel, an automated system that searches for efficient schedules for distributed DNN training and inference for diverse operator placement strategies.
