Straggler-aware Distributed Learning: Communication Computation Latency Trade-off

10 Apr 2020 · Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning problems, its per-iteration computation time is limited by the straggling workers. Straggling workers can be tolerated by assigning redundant computations and by coding across data and computations, but in most existing schemes each non-straggling worker transmits one message per iteration to the parameter server (PS), only after completing all its assigned computations. Imposing such a limitation results in two main drawbacks: over-computation due to inaccurate prediction of the straggling behaviour, and under-utilization due to treating workers as either stragglers or non-stragglers and discarding the partial computations carried out by stragglers. In this paper, to overcome these drawbacks, we consider multi-message communication (MMC), which allows each worker to convey multiple computations to the PS per iteration, and we design straggler avoidance techniques accordingly. We then analyze how the proposed designs can be employed efficiently to strike a balance between the computation and communication latencies so as to minimize the overall latency. Furthermore, through extensive experiments, both model-based simulations and a real implementation on Amazon EC2 servers, we identify the advantages and disadvantages of these designs in different settings, and demonstrate that MMC can help improve upon existing straggler avoidance schemes.
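To make the core idea concrete, below is a minimal Python sketch (not the authors' implementation) that simulates per-iteration latency under the conventional one-message-per-worker scheme versus MMC, where each worker reports every partial computation as soon as it finishes. The number of workers, the per-worker assignment size, the number of partial results the PS needs, the fixed per-message communication delay, and the shifted-exponential computation-time model are all illustrative assumptions; the sketch also assumes every partial result is distinct and useful to the PS.

```python
import math
import random

# Illustrative parameters (assumptions, not values from the paper).
N_WORKERS = 10        # number of parallel workers
COMPS_PER_WORKER = 4  # partial gradient computations assigned per worker
K_NEEDED = 20         # distinct partial results the PS needs per iteration
COMM_DELAY = 0.05     # fixed per-message communication latency (assumed)


def sample_comp_time():
    """Shifted-exponential time for one partial computation (a common straggler model)."""
    return 0.1 + random.expovariate(2.0)


def single_message_latency():
    """Conventional scheme: each worker sends ONE message after finishing all its
    assigned computations, so the PS waits for the fastest
    ceil(K_NEEDED / COMPS_PER_WORKER) workers; stragglers' partial work is discarded."""
    finish_times = sorted(
        sum(sample_comp_time() for _ in range(COMPS_PER_WORKER))
        for _ in range(N_WORKERS)
    )
    workers_needed = math.ceil(K_NEEDED / COMPS_PER_WORKER)
    return finish_times[workers_needed - 1] + COMM_DELAY


def multi_message_latency():
    """MMC: each worker sends a message after EVERY partial computation; the PS
    stops as soon as K_NEEDED partial results have arrived, so partial work
    carried out by slow workers still contributes."""
    arrivals = []
    for _ in range(N_WORKERS):
        t = 0.0
        for _ in range(COMPS_PER_WORKER):
            t += sample_comp_time()
            arrivals.append(t + COMM_DELAY)  # one message per partial result
    arrivals.sort()
    return arrivals[K_NEEDED - 1]


if __name__ == "__main__":
    random.seed(0)
    trials = 10_000
    avg_single = sum(single_message_latency() for _ in range(trials)) / trials
    avg_mmc = sum(multi_message_latency() for _ in range(trials)) / trials
    print(f"avg per-iteration latency, single-message: {avg_single:.3f}")
    print(f"avg per-iteration latency, MMC:            {avg_mmc:.3f}")
```

Under these assumptions, MMC tends to finish an iteration earlier because the PS can aggregate the earliest K_NEEDED partial results regardless of which workers produced them; in practice, the extra messages add communication overhead, which is precisely the computation-communication latency trade-off the paper analyzes.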
