no code implementations • 23 Apr 2023 • Ghina Al-Atat, Andrea Fresa, Adarsh Prasad Behera, Vishnu Narayanan Moothedath, James Gross, Jaya Prakash Champati
Depending on the application, the ED offloads the data sample only when the inference provided by the local algorithm is incorrect or further assistance from large DL models on the edge or cloud is required.
no code implementations • 3 Apr 2023 • Vishnu Narayanan Moothedath, Jaya Prakash Champati, James Gross
In order to get the best of both worlds, i.e., the benefits of doing inference on the ED and the benefits of doing inference on the ES, we explore the idea of Hierarchical Inference (HI), wherein the S-ML inference is accepted only when it is correct; otherwise, the data sample is offloaded for L-ML inference.
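The HI accept-or-offload rule described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function names, the use of a confidence threshold as a proxy for "the S-ML inference is correct", and the threshold value are all assumptions for exposition.

```python
def s_ml_infer(sample):
    # Hypothetical small ML model running on the Edge Device (ED);
    # returns a predicted label and a confidence score.
    return ("cat", 0.62)

def l_ml_infer(sample):
    # Hypothetical large ML model on the Edge Server (ES),
    # invoked only when the sample is offloaded.
    return "dog"

def hierarchical_inference(sample, threshold=0.8):
    """Accept the local S-ML prediction when it appears correct
    (proxied here by a confidence threshold, an illustrative
    assumption); otherwise offload the sample for L-ML inference."""
    label, confidence = s_ml_infer(sample)
    if confidence >= threshold:
        return label, "local"
    return l_ml_infer(sample), "offloaded"
```

With the default threshold, the low-confidence S-ML prediction above is rejected and the sample is offloaded; lowering the threshold makes the ED accept its own prediction locally.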
no code implementations • 14 Nov 2022 • SM Zobaed, Ali Mokhtari, Jaya Prakash Champati, Mathieu Kourouma, Mohsen Amini Salehi
We propose an efficient NN model management framework, called Edge-MultiAI, that loads the NN models of DL applications into edge memory such that the degree of multi-tenancy and the number of warm-starts are maximized.
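The warm-start objective above can be illustrated with a toy model cache. This is a hedged sketch, not Edge-MultiAI's actual policy: the class, the fixed capacity, and the LRU eviction rule are assumptions used only to show what "warm start" versus cold load means when models share limited edge memory.

```python
from collections import OrderedDict

class ModelCache:
    """Illustrative model cache (assumed LRU policy, not Edge-MultiAI's):
    keep up to `capacity` NN models resident in edge memory. A request
    served from memory is a warm start; otherwise the model is loaded
    cold and the least-recently-used resident model is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()  # model_id -> loaded flag
        self.warm_starts = 0

    def request(self, model_id):
        if model_id in self.resident:
            # Warm start: model already in edge memory.
            self.resident.move_to_end(model_id)
            self.warm_starts += 1
            return "warm"
        if len(self.resident) >= self.capacity:
            # Evict the least-recently-used model to make room.
            self.resident.popitem(last=False)
        self.resident[model_id] = True
        return "cold"
```

A smarter placement policy than LRU, such as the one Edge-MultiAI targets, would choose residents to raise the warm-start count across all tenant applications.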
no code implementations • 21 Dec 2021 • Andrea Fresa, Jaya Prakash Champati
With the emergence of edge computing, the problem of offloading jobs between an Edge Device (ED) and an Edge Server (ES) has received significant attention.
no code implementations • 19 Jan 2020 • Jaya Prakash Champati, Ramana R. Avula, Tobias J. Oechtering, James Gross
There has been significant research effort toward optimizing this metric in communication and networking systems under different settings.