Search Results for author: Mahdi Torabzadehkashi

Found 2 papers, 0 papers with code

HyperTune: Dynamic Hyperparameter Tuning For Efficient Distribution of DNN Training Over Heterogeneous Systems

no code implementations • 16 Jul 2020 • Ali HeydariGorji, Siavash Rezaei, Mahdi Torabzadehkashi, Hossein Bobarshad, Vladimir Alves, Pai H. Chou

Distributed training is a novel approach to accelerate Deep Neural Network (DNN) training, but common training libraries fall short of addressing cases with heterogeneous processors or cases where processing nodes are interrupted by other workloads.

Federated Learning • Image Classification
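
The HyperTune summary above centers on redistributing work across processors of unequal speed. Below is a minimal, self-contained sketch of one such policy, throughput-proportional batch-size rebalancing; the node timings, the `rebalance` helper, and the proportional split rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: rebalance per-node batch sizes by measured throughput
# so heterogeneous workers finish each training step at roughly the same time.

def rebalance(batch_sizes, step_times, total_batch):
    """Split the global batch in proportion to each node's samples/second."""
    throughputs = [b / t for b, t in zip(batch_sizes, step_times)]
    total = sum(throughputs)
    shares = [max(1, round(total_batch * tp / total)) for tp in throughputs]
    shares[0] += total_batch - sum(shares)  # absorb rounding drift
    return shares

# Measured state of an assumed 3-node cluster: one fast host, two slow nodes.
batch_sizes = [128, 128, 128]    # current per-node batch sizes
step_times = [0.20, 0.55, 0.60]  # observed seconds per training step
print(rebalance(batch_sizes, step_times, total_batch=384))  # [227, 82, 75]
```

Rerunning a step like this every few iterations lets the split track nodes that slow down when other workloads interrupt them, which is the scenario the abstract calls out.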

STANNIS: Low-Power Acceleration of Deep Neural Network Training Using Computational Storage

no code implementations • 17 Feb 2020 • Ali HeydariGorji, Mahdi Torabzadehkashi, Siavash Rezaei, Hossein Bobarshad, Vladimir Alves, Pai H. Chou

This paper proposes a framework for distributed, in-storage training of neural networks on clusters of computational storage devices.
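
As a rough illustration of the in-storage training pattern this summary describes, the sketch below keeps each data shard on its (simulated) computational storage device and moves only model weights to the host for averaging. The linear model, the `local_step` helper, and plain weight averaging are assumptions chosen for brevity, not the STANNIS framework's API.

```python
# Hypothetical sketch: federated-style training across computational storage
# devices, where raw data stays on each drive and only weights are exchanged.
import numpy as np

rng = np.random.default_rng(0)

def local_step(weights, shard, lr=0.1):
    """One SGD step of linear regression on a device-resident data shard."""
    X, y = shard
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Assumed shards: data that never leaves its computational storage device.
shards = [(rng.normal(size=(64, 4)), rng.normal(size=64)) for _ in range(3)]
weights = np.zeros(4)

for _ in range(20):
    # Each device computes an update against its own shard...
    local = [local_step(weights, shard) for shard in shards]
    # ...and the host aggregates only the small weight vectors.
    weights = np.mean(local, axis=0)

print(weights)
```

The point of the pattern is that only the small weight vectors cross the host interface; the training data itself stays on the drive where it is stored.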
