Search Results for author: Mingchao Yu

Found 4 papers, 0 papers with code

Train Where the Data is: A Case for Bandwidth Efficient Coded Training

no code implementations • 22 Oct 2019 • Zhifeng Lin, Krishna Giri Narra, Mingchao Yu, Salman Avestimehr, Murali Annavaram

Most model training is performed on high-performance compute nodes, and the training data is stored near these nodes for faster training.
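
The title points at coded training for bandwidth efficiency. As a rough illustration of the coding idea only (a minimal sketch, not the authors' scheme; the parameters and setup below are hypothetical), the snippet shows Reed-Solomon-style coded gradient aggregation, where a parameter server can recover the sum of all partition gradients from any k of n worker messages and so tolerate n - k stragglers:

    # Minimal sketch: coded gradient aggregation (illustrative, not the paper's scheme).
    import numpy as np

    n, k, d = 5, 3, 4                # workers, recovery threshold, gradient dim
    rng = np.random.default_rng(0)
    grads = rng.normal(size=(k, d))  # k data-partition gradients (toy values)

    # Encode: worker i sends the "gradient polynomial" evaluated at point x_i,
    # i.e. sum_j (x_i ** j) * grads[j].
    xs = np.arange(1, n + 1, dtype=float)
    V = np.vander(xs, k, increasing=True)  # n x k Vandermonde matrix
    coded = V @ grads                      # row i = worker i's coded message

    # Decode from any k responders by inverting the k x k Vandermonde block.
    responders = [0, 2, 4]                 # pretend workers 1 and 3 straggle
    decoded = np.linalg.solve(V[responders], coded[responders])
    assert np.allclose(decoded, grads)
    full_gradient = decoded.sum(axis=0)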

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

no code implementations • NeurIPS 2018 • Youjie Li, Mingchao Yu, Songze Li, Salman Avestimehr, Nam Sung Kim, Alexander Schwing

Distributed training of deep nets is an important technique for addressing present-day computing challenges such as memory consumption and computational demands.
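
The framework's name suggests overlapping gradient communication with computation. As a toy sketch of that general idea only (one-step-stale SGD under assumptions of my own, not Pipe-SGD itself), the update at each step applies the previous step's gradient, so the current gradient can be "in flight" over the network while the next iteration computes:

    # Minimal sketch: one-step-stale SGD to model compute/communication overlap.
    import numpy as np

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(256, 10)), rng.normal(size=256)
    w = np.zeros(10)
    lr, in_flight = 0.05, None      # in_flight models a gradient still being sent

    for step in range(100):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # compute on current weights
        if in_flight is not None:
            w -= lr * in_flight                # apply last step's (stale) gradient
        in_flight = grad                       # "send" this one; overlaps next step

    w -= lr * in_flight                        # drain the pipeline
    print("final loss:", np.mean((X @ w - y) ** 2))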

PolyShard: Coded Sharding Achieves Linearly Scaling Efficiency and Security Simultaneously

no code implementations • 27 Sep 2018 • Songze Li, Mingchao Yu, Chien-Sheng Yang, A. Salman Avestimehr, Sreeram Kannan, Pramod Viswanath

In particular, we propose PolyShard, a "polynomially coded sharding" scheme that achieves the information-theoretic upper bounds on storage efficiency, system throughput, and trust, thus enabling a truly scalable system.
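
As a loose illustration of the coded-sharding idea (a toy sketch over the reals; PolyShard's actual construction operates over finite fields and differs in detail), the snippet below stores Lagrange-coded evaluations of K shards at N nodes, has each node apply a degree-2 function to its coded symbol, and recovers the function of every shard by interpolating enough node results:

    # Minimal sketch: Lagrange-coded shards (illustrative, not the paper's scheme).
    import numpy as np

    K, N = 3, 8                                 # shards; storage nodes
    rng = np.random.default_rng(0)
    shards = rng.integers(0, 10, size=K).astype(float)

    alphas = np.arange(K, dtype=float)          # shard k "lives at" point alpha_k
    betas = np.arange(K, K + N, dtype=float)    # node i stores u(beta_i)

    def lagrange_eval(xs, ys, x):
        """Evaluate the interpolating polynomial through (xs, ys) at x."""
        total = 0.0
        for j in range(len(xs)):
            term = ys[j]
            for m in range(len(xs)):
                if m != j:
                    term *= (x - xs[m]) / (xs[j] - xs[m])
            total += term
        return total

    coded = np.array([lagrange_eval(alphas, shards, b) for b in betas])

    # Each node applies f(x) = x**2 to its coded symbol. f(u(.)) has degree
    # 2*(K-1) = 4, so any 5 node results suffice to recover f at every shard.
    f_coded = coded ** 2
    need = 2 * (K - 1) + 1
    recovered = [lagrange_eval(betas[:need], f_coded[:need], a) for a in alphas]
    assert np.allclose(recovered, shards ** 2)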

Cryptography and Security · Distributed, Parallel, and Cluster Computing · Information Theory
