Search Results for author: Shervin Manzuri Shalmani

Found 2 papers, 0 papers with code

CacheNet: A Model Caching Framework for Deep Learning Inference on the Edge

no code implementations · 3 Jul 2020 · Yihao Fang, Shervin Manzuri Shalmani, Rong Zheng

Inference of uncompressed, large-scale DNN models can only run in the cloud, incurring extra communication latency between end devices and the cloud, while compressed DNN models achieve real-time inference on end devices at the price of lower predictive accuracy.

Tasks: Image Classification, Speech Recognition, +1
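
The abstract above describes a trade-off between slow-but-accurate cloud inference and fast-but-less-accurate on-device inference. The sketch below is an illustration of that trade-off only, not the CacheNet method itself: a hypothetical edge-first classifier that falls back to a cloud model when the compressed model's confidence is low. All names (edge_predict, cloud_predict, CONFIDENCE_THRESHOLD) and the timing/confidence numbers are assumptions made for this example.

```python
# Illustrative sketch of the edge/cloud inference trade-off described in the
# abstract. NOT the CacheNet framework; edge_predict and cloud_predict are
# hypothetical stand-ins for a compressed on-device model and an uncompressed
# cloud-hosted model.
from dataclasses import dataclass
import random
import time


@dataclass
class Prediction:
    label: str
    confidence: float
    source: str          # "edge" or "cloud"
    latency_ms: float


CONFIDENCE_THRESHOLD = 0.8   # assumed cut-off for trusting the edge model


def edge_predict(image: bytes) -> tuple:
    """Stand-in for a compressed on-device model: fast, less accurate."""
    time.sleep(0.005)                       # ~5 ms simulated on-device inference
    return "cat", random.uniform(0.5, 1.0)  # label and (noisy) confidence


def cloud_predict(image: bytes) -> tuple:
    """Stand-in for the uncompressed model behind a network round trip."""
    time.sleep(0.150)                       # ~150 ms simulated network + inference
    return "cat", 0.99


def classify(image: bytes) -> Prediction:
    """Edge-first inference with cloud fallback on low confidence."""
    start = time.perf_counter()
    label, conf = edge_predict(image)
    source = "edge"
    if conf < CONFIDENCE_THRESHOLD:
        label, conf = cloud_predict(image)
        source = "cloud"
    latency_ms = (time.perf_counter() - start) * 1000
    return Prediction(label, conf, source, latency_ms)


if __name__ == "__main__":
    for _ in range(3):
        print(classify(b"fake-image-bytes"))
```

Running the sketch shows requests answered in a few milliseconds when the edge model is confident, and with roughly the cloud round-trip latency otherwise, which is the tension the paper's caching framework is motivated by.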
