Search Results for author: Mike O'Connor

Found 3 papers, 0 papers with code

Buddy Compression: Enabling Larger Memory for Deep Learning and HPC Workloads on GPUs

no code implementations • 6 Mar 2019 • Esha Choukse, Michael Sullivan, Mike O'Connor, Mattan Erez, Jeff Pool, David Nellans, Steve Keckler

However, GPU device memory tends to be relatively small, and its capacity cannot be increased by the user.

Hardware Architecture

Compressing DMA Engine: Leveraging Activation Sparsity for Training Deep Neural Networks

no code implementations • 3 May 2017 • Minsoo Rhu, Mike O'Connor, Niladrish Chatterjee, Jeff Pool, Stephen W. Keckler

Popular deep learning frameworks require users to fine-tune their memory usage so that the training data of a deep neural network (DNN) fits within the GPU physical memory.
