Search Results for author: Dhruv Garg

Found 2 papers, 1 paper with code

SuperServe: Fine-Grained Inference Serving for Unpredictable Workloads

no code implementations • 27 Dec 2023 • Alind Khare, Dhruv Garg, Sukrit Kalra, Snigdha Grandhi, Ion Stoica, Alexey Tumanov

Serving models under such conditions requires these systems to strike a careful balance between the application's latency and accuracy requirements and the efficient use of scarce resources.

Scheduling
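The latency–accuracy balance described in the abstract above can be illustrated with a minimal sketch (hypothetical model names, latencies, and accuracies — this is not SuperServe's actual scheduling policy): given a set of profiled model variants, serve the most accurate one that still fits the request's latency budget.

```python
# Hypothetical sketch of latency/accuracy-aware model selection.
# Variant names and numbers are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    latency_ms: float   # profiled per-request latency
    accuracy: float     # offline-measured accuracy

def pick_variant(variants, slo_budget_ms):
    """Return the most accurate variant whose latency fits the SLO
    budget, or None if no variant can meet it."""
    feasible = [v for v in variants if v.latency_ms <= slo_budget_ms]
    return max(feasible, key=lambda v: v.accuracy, default=None)

variants = [
    Variant("resnet18", 8.0, 0.70),
    Variant("resnet50", 20.0, 0.76),
    Variant("resnet152", 55.0, 0.78),
]
print(pick_variant(variants, 25.0).name)  # resnet50
```

Under a tight budget the selector degrades to a cheaper variant (or returns None), which is the accuracy-for-latency trade the abstract refers to.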

Flame: Simplifying Topology Extension in Federated Learning

1 code implementation • 9 May 2023 • Harshit Daga, Jaemin Shin, Dhruv Garg, Ada Gavrilovska, Myungjin Lee, Ramana Rao Kompella

We present Flame, a new system that lets the topology of distributed FL applications be flexibly configured around the specifics of a particular deployment context, and that is easily extensible to support new FL architectures.

Federated Learning
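The idea of a configurable FL topology can be sketched as follows (a purely hypothetical illustration — role and channel names are invented and this is not Flame's actual API): describe the deployment as named roles connected by channels, so that moving from flat to hierarchical aggregation is a configuration change rather than a code change.

```python
# Hypothetical FL topologies expressed as roles + channels.
# All identifiers here are illustrative, not from Flame.
FLAT = {
    "roles": ["aggregator", "trainer"],
    "channels": [("aggregator", "trainer")],
}

HIERARCHICAL = {
    "roles": ["global-aggregator", "regional-aggregator", "trainer"],
    "channels": [
        ("global-aggregator", "regional-aggregator"),
        ("regional-aggregator", "trainer"),
    ],
}

def roles_reachable_from(topology, root):
    """All roles reachable from `root` by following channels downward
    (parent -> child); a sanity check that the topology is connected."""
    children = {}
    for parent, child in topology["channels"]:
        children.setdefault(parent, []).append(child)
    seen, stack = set(), [root]
    while stack:
        role = stack.pop()
        if role not in seen:
            seen.add(role)
            stack.extend(children.get(role, []))
    return seen

print(roles_reachable_from(HIERARCHICAL, "global-aggregator"))
```

Extending the topology (e.g. adding another aggregation tier) then only touches the declarative description, which is the kind of flexibility the abstract describes.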
