Search Results for author: Sam Ade Jacobs

Found 6 papers, 5 papers with code

ZeRO++: Extremely Efficient Collective Communication for Giant Model Training

1 code implementation • 16 Jun 2023 • Guanhua Wang, Heyang Qin, Sam Ade Jacobs, Connor Holmes, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He

Zero Redundancy Optimizer (ZeRO) has been used to train a wide range of large language models on massive GPU clusters due to its ease of use, efficiency, and good scalability.

Quantization
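For context, ZeRO++ is distributed as part of DeepSpeed and is typically switched on through the ZeRO stage-3 section of the training config. The sketch below is only an illustration: the key names follow the public DeepSpeed ZeRO++ tutorial and should be verified against the installed release, and the numeric values are placeholders, not the paper's setup.

```python
# Illustrative DeepSpeed-style config enabling ZeRO stage 3 plus the three
# ZeRO++ communication optimizations. Key names are taken from the DeepSpeed
# ZeRO++ tutorial (assumption: unchanged in the installed version); values
# such as batch size and partition size are arbitrary placeholders.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                        # partition params, grads, optimizer states
        "zero_quantized_weights": True,    # qwZ: quantized weight all-gather
        "zero_hpz_partition_size": 8,      # hpZ: secondary weight partition within a node
        "zero_quantized_gradients": True,  # qgZ: quantized gradient reduction
    },
}
# In practice this dict would be passed to deepspeed.initialize(model=..., config=ds_config).
```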

DeepSpeed Ulysses: System Optimizations for Enabling Training of Extreme Long Sequence Transformer Models

1 code implementation • 25 Sep 2023 • Sam Ade Jacobs, Masahiro Tanaka, Chengming Zhang, Minjia Zhang, Shuaiwen Leon Song, Samyam Rajbhandari, Yuxiong He

Computation in a typical Transformer-based large language model (LLM) can be characterized by batch size, hidden dimension, number of layers, and sequence length.

Language Modelling • Large Language Model
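Since the abstract snippet characterizes Transformer compute by batch size, hidden dimension, layer count, and sequence length, a back-of-the-envelope estimate makes the quadratic sequence-length term explicit. This is a rough, matmul-only sketch with conventional constants, not the paper's cost model.

```python
def transformer_train_flops(batch_size, seq_len, hidden_dim, num_layers):
    """Rough per-iteration FLOPs for a dense Transformer.

    A sketch using the usual matmul-only approximation; embeddings,
    layer norms, and activation functions are ignored.
    """
    b, s, h, L = batch_size, seq_len, hidden_dim, num_layers
    # Attention projections (Q, K, V, output): four h x h matmuls per token.
    proj = 8 * b * s * h * h
    # Attention scores and value aggregation: quadratic in sequence length,
    # the term that long-sequence systems such as Ulysses target.
    attn = 4 * b * s * s * h
    # Feed-forward block with a 4h intermediate size: two matmuls per token.
    mlp = 16 * b * s * h * h
    forward = L * (proj + attn + mlp)
    return 3 * forward  # backward pass costs roughly twice the forward pass

# Example: at long sequence lengths the quadratic attention term dominates.
print(f"{transformer_train_flops(1, 65536, 4096, 32):.3e}")
```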

Parallelizing Training of Deep Generative Models on Massive Scientific Datasets

2 code implementations • 5 Oct 2019 • Sam Ade Jacobs, Brian Van Essen, David Hysom, Jae-Seung Yeom, Tim Moon, Rushil Anirudh, Jayaraman J. Thiagarajan, Shusen Liu, Peer-Timo Bremer, Jim Gaffney, Tom Benson, Peter Robinson, Luc Peterson, Brian Spears

Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to initialize the process.

Distinguishing between Normal and Cancer Cells Using Autoencoder Node Saliency

no code implementations • 30 Jan 2019 • Ya Ju Fan, Jonathan E. Allen, Sam Ade Jacobs, Brian C. Van Essen

With the trained autoencoder, we generate latent representations of a small dataset, containing pairs of normal and cancer cells of various tumor types.

Dimensionality Reduction
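As an illustration of the latent-representation step described above, a minimal PyTorch-style sketch follows. The layer widths, latent size, and input dimension are hypothetical; the paper's actual autoencoder architecture and gene-expression features are not specified in this snippet.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Toy dense autoencoder; sizes are placeholders, not the paper's model."""
    def __init__(self, input_dim=1000, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# After training, only the encoder is needed to produce latent representations
# of the paired normal/cancer samples for downstream analysis.
model = Autoencoder()
samples = torch.randn(16, 1000)       # stand-in for expression profiles
with torch.no_grad():
    latents = model.encoder(samples)  # shape: (16, 32)
```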
