no code implementations • 10 Apr 2024 • Jae-Won Chung, Mosharaf Chowdhury
The enormous energy consumption of machine learning (ML) and generative AI workloads shows no sign of waning, taking a toll on operating costs, power delivery, and environmental sustainability.
2 code implementations • 12 Dec 2023 • Jae-Won Chung, Yile Gu, Insu Jang, Luoxi Meng, Nikhil Bansal, Mosharaf Chowdhury
Training large AI models on numerous GPUs consumes a massive amount of energy.
1 code implementation • 4 Mar 2023 • Zhenning Yang, Luoxi Meng, Jae-Won Chung, Mosharaf Chowdhury
Our solution observes real-time carbon intensity shifts during training and controls the energy consumption of GPUs accordingly, reducing carbon footprint while maintaining training performance.
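To make the mechanism concrete, here is a minimal sketch of carbon-aware GPU power capping, not the paper's actual implementation: it assumes a hypothetical `fetch_carbon_intensity()` data source and adjusts the GPU power limit through NVML via the `pynvml` bindings (setting power limits typically requires administrator privileges).

```python
import time

import pynvml


def fetch_carbon_intensity() -> float:
    """Hypothetical helper: current grid carbon intensity in gCO2/kWh.

    A real deployment would query a grid data service; a constant
    stands in here so the sketch runs.
    """
    return 300.0


LOW, HIGH = 200.0, 400.0  # assumed clean/dirty thresholds (gCO2/kWh)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
# Minimum and maximum supported power limits, in milliwatts.
_, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

try:
    while True:  # runs alongside the training job
        intensity = fetch_carbon_intensity()
        if intensity >= HIGH:
            target = int(max_mw * 0.6)  # dirty grid: cap GPU power
        elif intensity <= LOW:
            target = max_mw             # clean grid: full power
        else:
            target = int(max_mw * 0.8)
        pynvml.nvmlDeviceSetPowerManagementLimit(handle, target)
        time.sleep(300)  # re-evaluate every 5 minutes
finally:
    pynvml.nvmlShutdown()
```

Lowering the power limit slows each training step somewhat, which is the performance-versus-carbon tradeoff such a controller has to manage.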
1 code implementation • 12 Aug 2022 • Jie You, Jae-Won Chung, Mosharaf Chowdhury
In this paper, we observe that common practices to improve training performance can often lead to inefficient energy usage.
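One way to see this inefficiency for yourself (a measurement sketch under stated assumptions, not the paper's system): NVML exposes a cumulative energy counter on recent NVIDIA GPUs, so you can compare wall-clock time against energy for one epoch under different power caps. `train_one_epoch` below is a hypothetical stand-in for the real training loop.

```python
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)


def train_one_epoch() -> None:
    """Hypothetical stand-in for the actual training loop."""
    time.sleep(60)


def measure(power_limit_mw: int) -> tuple[float, float]:
    """Run one epoch under a power cap; return (seconds, joules)."""
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, power_limit_mw)
    mj_before = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)  # mJ
    start = time.monotonic()
    train_one_epoch()
    elapsed = time.monotonic() - start
    mj_after = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    return elapsed, (mj_after - mj_before) / 1000.0


# Sweep a few power caps: the fastest setting is often not the most
# energy-efficient one, which is exactly the inefficiency noted above.
for limit_w in (300, 250, 200, 150):
    secs, joules = measure(limit_w * 1000)
    print(f"{limit_w} W cap: {secs:.0f} s, {joules:.0f} J")
```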
1 code implementation • 24 Mar 2020 • Jae-Won Chung, Jae-Yun Kim, Soo-Mook Moon
We propose ShadowTutor, a distributed video DNN inference framework that reduces the number of network transmissions through intermittent knowledge distillation to a student model; a sketch of the idea follows below.
Distributed, Parallel, and Cluster Computing
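A minimal PyTorch sketch of the intermittent-distillation idea; the fixed `stride` schedule, the confidence heuristic, and all names here are illustrative assumptions rather than ShadowTutor's exact policy.

```python
import torch
import torch.nn.functional as F


def serve_stream(frames, student, teacher, optimizer,
                 stride=8, threshold=0.8):
    """Run a cheap local student on every frame; intermittently consult
    a remote teacher and distill its output into the student.

    Only the teacher calls cross the network, so frames served by the
    student alone cost zero transmissions.
    """
    for i, frame in enumerate(frames):
        with torch.no_grad():
            student_logits = student(frame)
        confidence = student_logits.softmax(dim=-1).max().item()

        # Transmit on a fixed schedule, or when the student looks unsure.
        if i % stride == 0 or confidence < threshold:
            with torch.no_grad():
                teacher_logits = teacher(frame)  # remote inference
            # Distillation step: pull the student toward the teacher.
            loss = F.kl_div(
                F.log_softmax(student(frame), dim=-1),
                F.softmax(teacher_logits, dim=-1),
                reduction="batchmean",
            )
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            yield teacher_logits  # use the stronger prediction
        else:
            yield student_logits  # no network transmission
```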