Search Results for author: Tim Moon

Found 3 papers, 2 papers with code

Improving Strong-Scaling of CNN Training by Exploiting Finer-Grained Parallelism

no code implementations · 15 Mar 2019 · Nikoli Dryden, Naoya Maruyama, Tom Benson, Tim Moon, Marc Snir, Brian Van Essen

We also see an emerging need to handle datasets with very large samples, where memory requirements for training are large.

Image Classification
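
The finer-grained parallelism in this paper's title refers in part to spatial decomposition, where a single large sample is split across processes that each convolve their own shard after exchanging boundary (halo) rows with neighbors. Below is a minimal single-process sketch of that idea, not the paper's implementation: the function name, the row-wise split, and the loop over simulated ranks are all illustrative assumptions.

```python
# Sketch of spatially-parallel convolution with halo exchange,
# simulated on one process (assumption: this is not the paper's code).
import numpy as np

def spatial_parallel_conv(image, kernel, num_ranks):
    k = kernel.shape[0]              # assume a square, odd-sized kernel
    halo = k // 2
    shards = np.array_split(image, num_ranks, axis=0)
    outputs, offset = [], 0
    for shard in shards:
        # "Halo exchange": borrow up to `halo` rows from each neighbor shard.
        top = image[max(offset - halo, 0):offset]
        bottom = image[offset + len(shard):offset + len(shard) + halo]
        padded = np.vstack([top, shard, bottom])
        # Each simulated rank computes a 'valid' convolution on its shard.
        h, w = padded.shape
        out = np.empty((h - k + 1, w - k + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
        outputs.append(out)
        offset += len(shard)
    return np.vstack(outputs)

# e.g. spatial_parallel_conv(np.random.rand(32, 32), np.ones((3, 3)), 4)
# stacks to the same result as a 'valid' convolution of the whole image.
```

Because each rank holds only its shard plus a thin halo, the per-process memory footprint shrinks as ranks are added, which is what makes very large samples tractable.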

Parallelizing Training of Deep Generative Models on Massive Scientific Datasets

2 code implementations · 5 Oct 2019 · Sam Ade Jacobs, Brian Van Essen, David Hysom, Jae-Seung Yeom, Tim Moon, Rushil Anirudh, Jayaraman J. Thiagarajan, Shusen Liu, Peer-Timo Bremer, Jim Gaffney, Tom Benson, Peter Robinson, Luc Peterson, Brian Spears

Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to initialize the process.
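
Parallelizing training at this scale typically starts from data parallelism: each worker computes gradients on its own shard of the batch, and the gradients are averaged across workers before every shared update. The sketch below simulates that pattern on one process with a toy linear model; the model, loss, learning rate, and worker count are illustrative assumptions, not the paper's LBANN setup.

```python
# Sketch of data-parallel SGD with gradient averaging, simulated
# on one process (assumption: toy model, not the paper's pipeline).
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(8)                           # shared model weights
X = rng.normal(size=(64, 8))              # toy dataset
y = X @ rng.normal(size=8)                # targets from a hidden linear model

num_workers, lr = 4, 0.1
for step in range(100):
    shards = zip(np.array_split(X, num_workers), np.array_split(y, num_workers))
    # Each worker computes the mean-squared-error gradient on its shard.
    grads = [2 * Xs.T @ (Xs @ w - ys) / len(ys) for Xs, ys in shards]
    # "Allreduce": average gradients across workers, then update once.
    w -= lr * np.mean(grads, axis=0)
```

In a real multi-node run the `np.mean` over gradients would be an MPI or NCCL allreduce, so every worker applies the same update and the replicas stay in sync.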
