Search Results for author: Abhinav Venigalla

Found 5 papers, 2 papers with code

Representation range needs for 16-bit neural network training

no code implementations • 29 Mar 2021 • Valentina Popescu, Abhinav Venigalla, Di Wu, Robert Schreiber

While neural networks have traditionally been trained using IEEE-754 binary32 arithmetic, the rapid growth of computational demands in deep learning has boosted interest in faster, low-precision training.
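For context, one way to see why representation range matters for 16-bit training is to compare the numeric limits of binary32 with a 16-bit float. The short NumPy sketch below is illustrative only and is not taken from the paper.

```python
# Minimal sketch (not from the paper): compare the representation range
# of IEEE-754 binary32 (float32) with the 16-bit float16 format.
import numpy as np

for dtype in (np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{info.dtype}: max={info.max:.3e}, "
          f"smallest normal={info.tiny:.3e}, eps={info.eps:.3e}")

# Values smaller than float16's smallest normal (~6.1e-5) underflow to
# subnormals or zero, which is one reason the representation range of
# 16-bit formats is a concern for gradients and activations in training.
```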

Adaptive Braking for Mitigating Gradient Delay

no code implementations • 2 Jul 2020 • Abhinav Venigalla, Atli Kosson, Vitaliy Chiley, Urs Köster

Neural network training is commonly accelerated by using multiple synchronized workers to compute gradient updates in parallel.
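As a point of reference, the synchronous data-parallel pattern this sentence describes can be sketched in a few lines of NumPy. The gradient function, model, and hyperparameters below are illustrative assumptions; this is not the paper's Adaptive Braking algorithm.

```python
# Illustrative sketch of synchronous data-parallel SGD: each worker
# computes a gradient on its own minibatch, and the averaged gradient
# drives a single shared parameter update.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)                  # shared model parameters (hypothetical linear model)
lr, num_workers = 0.1, 8

def worker_gradient(w, rng):
    """Hypothetical per-worker gradient of a least-squares loss on a random minibatch."""
    X = rng.normal(size=(32, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=32)
    return 2.0 / len(y) * X.T @ (X @ w - y)

for step in range(100):
    # In practice these gradients are computed in parallel on separate workers.
    grads = [worker_gradient(w, rng) for _ in range(num_workers)]
    # Synchronized (all-reduce style) averaged update.
    w -= lr * np.mean(grads, axis=0)
```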
