Search Results for author: Dhabaleswar Panda

Found 2 papers, 1 paper with code

The Case for Co-Designing Model Architectures with Hardware

1 code implementation • 25 Jan 2024 • Quentin Anthony, Jacob Hatef, Deepak Narayanan, Stella Biderman, Stas Bekman, Junqi Yin, Aamir Shafi, Hari Subramoni, Dhabaleswar Panda

While GPUs are responsible for training the vast majority of state-of-the-art deep learning models, the implications of their architecture are often overlooked when designing new deep learning (DL) models.

MCR-DL: Mix-and-Match Communication Runtime for Deep Learning

no code implementations • 15 Mar 2023 • Quentin Anthony, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He, Aamir Shafi, Mustafa Abduljabbar, Hari Subramoni, Dhabaleswar Panda

However, such distributed DL parallelism strategies require a varied mixture of collective and point-to-point communication operations across a broad range of message sizes and scales.
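As a rough illustration of the two communication patterns the abstract contrasts (not code from the paper or from MCR-DL itself), the sketch below uses PyTorch's torch.distributed to issue a collective (all_reduce, as in data-parallel gradient averaging) and a point-to-point exchange (send/recv, as between pipeline stages). The gloo backend, two-rank setup, and tensor sizes are assumptions chosen only to keep the example self-contained and runnable on CPU.

```python
# Minimal sketch: one collective and one point-to-point operation,
# the two classes of communication mixed by distributed DL parallelism.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Collective: every rank contributes its tensor and receives the sum
    # (the pattern used for gradient averaging in data parallelism).
    grad = torch.ones(4) * (rank + 1)
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    # Point-to-point: rank 0 sends an activation-like tensor to rank 1
    # (the pattern used between pipeline-parallel stages).
    if rank == 0:
        dist.send(torch.arange(4, dtype=torch.float32), dst=1)
    else:
        buf = torch.empty(4)
        dist.recv(buf, src=0)

    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```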
