Search Results for author: Farabi Mahmud

Found 2 papers, 0 papers with code

ADA-GP: Accelerating DNN Training By Adaptive Gradient Prediction

no code implementations · 22 May 2023 · Vahid Janfaza, Shantanu Mandal, Farabi Mahmud, Abdullah Muzahid

Neural network training is inherently sequential: each layer finishes its forward propagation in succession, and gradients (based on a loss function) are then calculated and back-propagated starting from the last layer.
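To make that sequential dependency concrete, here is a minimal NumPy sketch (not the paper's ADA-GP implementation; dimensions and hyperparameters are illustrative) of a two-layer network where each layer's forward pass must finish before the next begins, and gradient computation starts only after the loss at the last layer is known:

```python
# Illustrative sketch, NOT the ADA-GP method: a tiny two-layer network
# trained with plain NumPy to show the sequential forward/backward order.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, chosen only for the example.
x = rng.normal(size=(4, 3))           # batch of 4 inputs
y = rng.normal(size=(4, 2))           # regression targets
W1 = rng.normal(size=(3, 5)) * 0.1
W2 = rng.normal(size=(5, 2)) * 0.1
lr = 0.1

for step in range(100):
    # Forward propagation: layers run strictly in succession.
    h = np.maximum(x @ W1, 0.0)       # layer 1 (ReLU)
    out = h @ W2                      # layer 2 depends on layer 1's output

    # Loss is computed only after the last layer finishes.
    loss = np.mean((out - y) ** 2)
    if step == 0:
        first_loss = loss

    # Back-propagation: gradients flow from the last layer backward.
    d_out = 2.0 * (out - y) / y.size
    dW2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (h > 0)    # layer 1's gradient needs layer 2's
    dW1 = x.T @ d_h

    W2 -= lr * dW2
    W1 -= lr * dW1
```

The point of the sketch is the data dependence: `dW1` cannot be computed until `d_out` and `W2` are available, which is the serialization that gradient-prediction schemes aim to relax.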

MERCURY: Accelerating DNN Training By Exploiting Input Similarity

no code implementations · 28 Oct 2021 · Vahid Janfaza, Kevin Weston, Moein Razavi, Shantanu Mandal, Farabi Mahmud, Alex Hilty, Abdullah Muzahid

If the signature of a new input vector matches that of a vector already stored in the MCACHE, the two vectors are deemed similar.
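A minimal sketch of this lookup idea follows. The signature scheme used here (sign bits of random projections, SimHash-style) and the dictionary-backed cache are assumptions for illustration only, not the paper's exact MCACHE design:

```python
# Illustrative sketch of a signature-based similarity cache; the
# random-projection signature is an assumption, not MERCURY's scheme.
import numpy as np

rng = np.random.default_rng(1)
planes = rng.normal(size=(8, 16))   # 8 random hyperplanes for 16-dim inputs

def signature(v):
    # Bit pattern of which side of each hyperplane v falls on.
    return tuple(((planes @ v) > 0).tolist())

mcache = {}  # signature -> previously seen vector

def lookup_or_insert(v):
    """Return a cached vector with a matching signature, else cache v."""
    sig = signature(v)
    if sig in mcache:
        return mcache[sig]          # signatures match: vectors deemed similar
    mcache[sig] = v
    return None

v1 = rng.normal(size=16)
v2 = 2.0 * v1                        # same direction, so identical sign bits

miss = lookup_or_insert(v1)          # first occurrence: cached, no hit
hit = lookup_or_insert(v2)           # matches v1's signature: cache hit
```

Because `v2` is a positive scaling of `v1`, it falls on the same side of every hyperplane, so the second lookup hits the entry cached for `v1`.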

