Leapfrogging for parallelism in deep neural networks

15 Jan 2018 · Yatin Saraiya

We present a technique, which we term leapfrogging, to parallelize backpropagation in deep neural networks. We show that this technique yields a savings of $1 - 1/k$ of a dominant term in backpropagation, where $k$ is the number of threads (or GPUs).
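
To make the stated saving concrete (this is a reading of the abstract's formula, not an additional result from the paper): if the dominant term costs $T$ on a single worker and the work behind it is spread evenly over $k$ threads or GPUs, the per-worker cost falls to $T/k$, for a saving of

$$T - \frac{T}{k} = \left(1 - \frac{1}{k}\right) T,$$

i.e. $75\%$ of that term with $k = 4$ workers.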
