Perseus: Reducing Energy Bloat in Large Model Training

12 Dec 2023  ·  Jae-Won Chung, Yile Gu, Insu Jang, Luoxi Meng, Nikhil Bansal, Mosharaf Chowdhury

Training large AI models on numerous GPUs consumes a massive amount of energy, making power delivery one of the largest limiting factors in building and operating datacenters for AI workloads. However, we observe that not all energy consumed during training directly contributes to end-to-end throughput, and a significant portion can be removed without slowing down training, which we call energy bloat. In this work, we identify two independent sources of energy bloat in large model training and propose Perseus, a training system that mitigates both. To do this, Perseus obtains the "iteration time-energy" Pareto frontier of any large model training job using an efficient graph cut-based algorithm and schedules the energy consumption of computations across time to remove both types of energy bloat. Evaluation on large models including GPT-3 and Bloom shows that Perseus reduces the energy consumption of large model training by up to 30% without any throughput loss or hardware modification, enabling energy reductions, and therefore cost savings, that were previously unattainable.
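
The intuition behind energy bloat can be illustrated with a toy example: in a pipeline-parallel iteration, computations that are not on the critical path have slack, so they can run at a lower GPU frequency (slower but cheaper) without lengthening the iteration. The sketch below is not Perseus's graph cut-based planner; it is a simplified, hypothetical illustration of picking the cheapest per-computation frequency option that still fits within the iteration time, using made-up time/energy numbers.

```python
# Hypothetical sketch: remove "energy bloat" by slowing down non-critical
# computations. Times and energies are toy numbers, not measurements.

from dataclasses import dataclass


@dataclass
class FrequencyOption:
    time: float    # seconds to finish the computation at this GPU frequency
    energy: float  # joules consumed at this GPU frequency


@dataclass
class Computation:
    name: str
    options: list[FrequencyOption]  # index 0 = fastest (highest energy)


def plan_energy(computations: list[Computation]) -> tuple[float, float]:
    """For each computation, pick the cheapest option that still finishes
    within the iteration time dictated by the critical computation."""
    # Iteration time is set by the slowest computation at full speed.
    iteration_time = max(c.options[0].time for c in computations)

    total_energy = 0.0
    for c in computations:
        # Cheapest feasible option: slower frequencies save energy as long
        # as they do not extend the iteration.
        feasible = [o for o in c.options if o.time <= iteration_time]
        chosen = min(feasible, key=lambda o: o.energy)
        total_energy += chosen.energy
    return iteration_time, total_energy


if __name__ == "__main__":
    # Three concurrent pipeline-stage computations with two frequency options each.
    stages = [
        Computation("stage0", [FrequencyOption(1.0, 300), FrequencyOption(1.3, 240)]),
        Computation("stage1", [FrequencyOption(1.4, 420), FrequencyOption(1.8, 350)]),  # critical
        Computation("stage2", [FrequencyOption(0.9, 280), FrequencyOption(1.2, 210)]),
    ]
    t, e = plan_energy(stages)
    print(f"iteration time: {t:.2f} s, planned energy: {e:.0f} J")
```

In this toy setting, stage0 and stage2 are slowed down to their cheaper options while the critical stage1 keeps the iteration time fixed, which is the kind of trade-off the paper's "iteration time-energy" Pareto frontier captures systematically.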
