Search Results for author: Anand Jayarajan

Found 2 papers, 1 paper with code

FPRaker: A Processing Element For Accelerating Neural Network Training

no code implementations • 15 Oct 2020 • Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos

We demonstrate that FPRaker can be used to compose an accelerator for training, and that it improves performance and energy efficiency compared to conventional floating-point units under iso-compute area constraints.

Task: Quantization
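
For intuition, here is a minimal software sketch of term-serial multiply-accumulate, the style of computation a processing element like FPRaker targets: a mantissa is processed one power-of-two term at a time, so zero terms cost nothing. This is only an illustration under simplified assumptions (unsigned mantissas, no Booth encoding, no accumulator alignment); all function and variable names are hypothetical, not from the paper.

```python
def to_terms(mantissa: int) -> list[int]:
    """Return the bit positions set in the mantissa, so that
    mantissa == sum(2**t for t in to_terms(mantissa)).
    Zero bits yield no terms -- the work a term-serial PE skips."""
    return [i for i in range(mantissa.bit_length()) if (mantissa >> i) & 1]


def term_serial_mac(acc: float, a_mant: int, a_exp: int,
                    b_mant: int, b_exp: int) -> float:
    """Accumulate a*b into acc one term of a's mantissa at a time:
    each step adds b's mantissa shifted by the term, scaled by the
    combined exponents."""
    for t in to_terms(a_mant):  # only effectual (non-zero) terms take a step
        acc += (b_mant << t) * 2.0 ** (a_exp + b_exp)
    return acc


# 1.011b * 1.101b with 4-bit mantissas and exponents of -3:
print(term_serial_mac(0.0, 0b1011, -3, 0b1101, -3))  # 143 * 2**-6 = 2.234375
```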

Priority-based Parameter Propagation for Distributed DNN Training

1 code implementation • 10 May 2019 • Anand Jayarajan, Jinliang Wei, Garth Gibson, Alexandra Fedorova, Gennady Pekhimenko

Data parallel training is widely used for scaling distributed deep neural network (DNN) training.
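
As a toy model (not the authors' code): the core idea of priority-based parameter propagation (P3) is to slice each layer's gradients and synchronize slices in the order the next forward pass will need the parameters (layer 0 first), rather than the reverse order in which backpropagation produces them. The sketch below makes the hypothetical assumption that one slice is transmitted per backprop step; a priority queue lets early-layer slices overtake queued later-layer ones.

```python
import heapq

def p3_transmission_order(backprop_order, slices_per_layer=2):
    """backprop_order: layer indices in the order backprop finishes them
    (typically last layer first). Returns (layer, slice) pairs in the
    order a priority-scheduled network would send them."""
    pq, sent = [], []
    for layer in backprop_order:
        for s in range(slices_per_layer):
            heapq.heappush(pq, (layer, s))  # priority = forward-pass position
        sent.append(heapq.heappop(pq))      # toy model: one send per step
    while pq:                               # drain the rest, highest priority first
        sent.append(heapq.heappop(pq))
    return sent

# With 4 layers finishing in reverse order, layer 0's first slice jumps
# the queue as soon as it becomes available:
print(p3_transmission_order([3, 2, 1, 0]))
# [(3, 0), (2, 0), (1, 0), (0, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```

Slicing matters here: without it, a large low-priority layer could block the link long after a high-priority layer's gradients become ready.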
