Symphony: Optimized DNN Model Serving using Deferred Batch Scheduling

Large batch sizes are one of the most important levers for increasing accelerator efficiency and DNN inference performance. However, existing model serving systems cannot achieve adequate batch sizes while meeting latency objectives because they eagerly dispatch requests to accelerators to minimize accelerator idle time. We propose Symphony, a DNN serving system that explores deferred batch scheduling to optimize system efficiency and throughput. Further, unlike prior systems, Symphony's GPU usage is load-proportional: it consolidates workloads onto the appropriate number of GPUs and works smoothly with cluster auto-scaling tools. Symphony rests on two core design points. First, Symphony defines a schedulable window within which a batch of inference requests can be dispatched; this window is computed to improve accelerator efficiency while meeting each request's SLO. Second, Symphony implements a scalable, low-latency, fine-grained coordination scheme across accelerators to dispatch and execute requests within the schedulable window. Through extensive scheduler-only benchmarks, we demonstrate that Symphony can schedule millions of requests per second and coordinate thousands of GPUs while also enabling robust autoscaling that adapts to workload changes. Symphony outperforms prior systems, achieving 5x higher goodput with the same number of GPUs and a 60% reduction in GPUs for the same workload.
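
The schedulable window is the key abstraction: rather than dispatching a batch as soon as requests are available, the scheduler may defer dispatch up to the latest point at which the tightest SLO in the batch can still be met, given the batch's expected execution time. The sketch below illustrates one way such a window could be computed; it is a minimal illustration under assumed names and latency numbers (`Request`, `batch_latency`, the linear cost model), not Symphony's actual implementation.

```python
# Illustrative sketch only -- NOT Symphony's code. Shows how a schedulable
# window might be derived from request SLOs and a batch-size-dependent
# execution-latency profile. All names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    arrival: float  # seconds, monotonic clock
    slo: float      # end-to-end latency budget in seconds

def batch_latency(batch_size: int) -> float:
    # Hypothetical profiled execution latency for a model at a given batch
    # size; real systems would measure this per model and per GPU.
    return 0.002 + 0.0005 * batch_size  # fixed overhead + per-item cost

def schedulable_window(batch: list[Request], batch_size: int) -> tuple[float, float]:
    """Return (earliest, latest) dispatch times for the batch.

    The batch may be dispatched any time up to the latest moment at which
    the most constrained request still meets its SLO. Deferring dispatch
    toward that deadline lets more requests accumulate into the batch.
    """
    exec_time = batch_latency(batch_size)
    # Deadline of the tightest request, minus the batch's execution time.
    latest = min(r.arrival + r.slo for r in batch) - exec_time
    # The batch cannot run before all of its requests have arrived.
    earliest = max(r.arrival for r in batch)
    return earliest, latest

# Usage: two requests with 50 ms SLOs; the scheduler may defer dispatch
# until `latest`, hoping more requests arrive to enlarge the batch.
reqs = [Request(arrival=0.000, slo=0.050), Request(arrival=0.004, slo=0.050)]
lo, hi = schedulable_window(reqs, batch_size=len(reqs))
print(f"dispatch window: [{lo * 1e3:.1f} ms, {hi * 1e3:.1f} ms]")
```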
