Search Results for author: Joshua Romero

Found 2 papers, 1 paper with code

Exascale Deep Learning for Scientific Inverse Problems

no code implementations • 24 Sep 2019 • Nouamane Laanait, Joshua Romero, Junqi Yin, M. Todd Young, Sean Treichler, Vitalii Starchenko, Albina Borisevich, Alex Sergeev, Michael Matheson

We introduce novel communication strategies in synchronous distributed Deep Learning consisting of decentralized gradient reduction orchestration and computational graph-aware grouping of gradient tensors.

Materials Imaging
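The abstract above mentions grouping gradient tensors before reduction. As a rough illustration of that idea (not the authors' actual implementation), the sketch below greedily packs gradient tensors into fixed-capacity buckets so that each bucket can be reduced with one fused collective call rather than one call per tensor; the function name and capacity heuristic are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's implementation): pack gradient
# tensors into buckets so each bucket is reduced in one fused collective.

def bucket_gradients(tensor_sizes, bucket_cap):
    """Greedily pack tensor indices into buckets of at most `bucket_cap`
    total elements; a tensor larger than the cap gets its own bucket."""
    buckets, current, current_size = [], [], 0
    for idx, size in enumerate(tensor_sizes):
        # Flush the current bucket if adding this tensor would overflow it.
        if current and current_size + size > bucket_cap:
            buckets.append(current)
            current, current_size = [], 0
        current.append(idx)
        current_size += size
    if current:
        buckets.append(current)
    return buckets

# Example: five gradient tensors (element counts) packed with a 3-element cap.
sizes = [2, 1, 3, 1, 1]
print(bucket_gradients(sizes, bucket_cap=3))  # → [[0, 1], [2], [3, 4]]
```

Fusing tensors this way amortizes per-message latency, which matters at the GPU counts these papers target; production frameworks apply the same idea (e.g. bucketed allreduce) with graph-aware ordering.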

Exascale Deep Learning for Climate Analytics

3 code implementations • 3 Oct 2018 • Thorsten Kurth, Sean Treichler, Joshua Romero, Mayur Mudigonda, Nathan Luehr, Everett Phillips, Ankur Mahesh, Michael Matheson, Jack Deslippe, Massimiliano Fatica, Prabhat, Michael Houston

The Tiramisu network scales to 5300 P100 GPUs with a sustained throughput of 21.0 PF/s and parallel efficiency of 79.0%.

Distributed, Parallel, and Cluster Computing
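To make the quoted scaling figures concrete, the sketch below applies the conventional weak-scaling definition of parallel efficiency (achieved throughput divided by ideal linear-scaling throughput) to the numbers above; the ~5.0 TF/s single-GPU baseline is inferred from the quoted figures, not taken from the paper.

```python
# Conventional weak-scaling parallel efficiency, applied to the quoted
# figures. The per-GPU baseline (~5.0 TF/s) is an inference, not a
# number reported in the paper.

def parallel_efficiency(total_throughput, n_workers, per_worker_baseline):
    """Efficiency = achieved throughput / (workers x per-worker baseline)."""
    return total_throughput / (n_workers * per_worker_baseline)

sustained_tf = 21.0 * 1000   # 21.0 PF/s sustained, in TF/s
gpus = 5300
# With a ~5.0 TF/s per-GPU baseline, efficiency comes out near 79%:
print(round(parallel_efficiency(sustained_tf, gpus, 5.0), 2))  # → 0.79
```

Equivalently, 21.0 PF/s over 5300 GPUs is about 3.96 TF/s sustained per GPU, which at 79.0% efficiency implies that baseline of roughly 5.0 TF/s per GPU.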
