Search Results for author: Mark Kurtz

Found 4 papers, 1 paper with code

Inducing and Exploiting Activation Sparsity for Fast Inference on Deep Neural Networks

no code implementations · ICML 2020 · Mark Kurtz, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John Carr, Michael Goin, William Leiserson, Sage Moore, Nir Shavit, Dan Alistarh

In this paper, we present an in-depth analysis of methods for maximizing the sparsity of the activations in a trained neural network, and show that, when coupled with an efficient sparse-input convolution algorithm, we can leverage this sparsity for significant performance gains.

Image Classification
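The abstract above describes coupling high activation sparsity with a sparse-input kernel to skip work on zero activations. A minimal NumPy sketch of the underlying idea (illustrative only, not the paper's algorithm or implementation): after ReLU, many activations are exactly zero, so a matrix product can visit only the rows of the weight matrix that correspond to nonzero inputs.

```python
import numpy as np

# Illustrative sketch, not the paper's kernel: ReLU zeroes out many
# activations, and a sparse-input product can skip the zeroed entries.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 256))
act = np.maximum(x, 0.0)                 # ReLU: roughly half the entries become 0
sparsity = 1.0 - np.count_nonzero(act) / act.size

W = rng.normal(size=(256, 64))

# Dense product vs. a sparse-input variant touching only nonzero inputs
dense_out = act @ W
nz = np.nonzero(act[0])[0]               # indices of nonzero activations
sparse_out = act[0, nz] @ W[nz]          # skip weight rows for zero inputs

assert np.allclose(dense_out[0], sparse_out)
```

The two products agree exactly; the savings come from reading only `len(nz)` of the 256 weight rows, which is the effect an efficient sparse-input convolution exploits at scale.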

Sparse*BERT: Sparse Models are Robust

no code implementations · 25 May 2022 · Daniel Campos, Alexandre Marques, Tuan Nguyen, Mark Kurtz, ChengXiang Zhai

Our experimentation shows that models that are pruned during pretraining using general domain masked language models can transfer to novel domains and tasks without extensive hyperparameter exploration or specialized approaches.

How Well Do Sparse Imagenet Models Transfer?

1 code implementation · CVPR 2022 · Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh

Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" specialized datasets.

Transfer Learning
