no code implementations • 16 Sep 2021 • Anup Sarma, Sonali Singh, Huaipan Jiang, Ashutosh Pattnaik, Asit K Mishra, Vijaykrishnan Narayanan, Mahmut T Kandemir, Chita R Das
Exploiting sparsity in both the forward and backward passes yields speedups of 1.68$\times$ to 3.30$\times$ over the sparsity-agnostic baseline execution.
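To make the source of that speedup concrete, here is a minimal NumPy sketch (not the paper's accelerator design) of why skipping zero operands reduces work: zero activations contribute nothing to a linear layer's output in the forward pass, and the same zeros null out the corresponding weight-gradient terms in the backward pass. The layer size and the roughly 70% activation sparsity are illustrative assumptions.

```python
# Hypothetical sketch: counting the multiply-accumulates (MACs) saved by
# skipping zero activations in a linear layer's matrix-vector product.
import numpy as np

rng = np.random.default_rng(0)

def sparse_matvec(W, x):
    """Matrix-vector product that skips zero entries of x; returns the
    result and the number of MACs actually performed."""
    nz = np.flatnonzero(x)          # indices of nonzero activations
    y = W[:, nz] @ x[nz]            # only nonzero columns contribute
    return y, W.shape[0] * nz.size  # MACs performed

# ReLU-style activations with roughly 70% zeros (an assumed sparsity level).
W = rng.standard_normal((256, 256))
x = np.maximum(rng.standard_normal(256) - 0.5, 0.0)

y, macs_sparse = sparse_matvec(W, x)
macs_dense = W.size
print(f"dense MACs: {macs_dense}, sparse MACs: {macs_sparse}, "
      f"ideal speedup: {macs_dense / macs_sparse:.2f}x")
```

The printed ratio is an upper bound on the achievable speedup; real hardware pays indexing and load-imbalance overheads on top of the raw MAC savings.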
no code implementations • NeurIPS 2021 • Anup Sarma, Sonali Singh, Huaipan Jiang, Rui Zhang, Mahmut T Kandemir, Chita R Das
Recurrent Neural Networks (RNNs), more specifically their Long Short-Term Memory (LSTM) variants, have been widely used as a deep learning tool for tackling sequence-based learning tasks in text and speech.
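For readers unfamiliar with the workload, the following is a minimal PyTorch sketch of the kind of LSTM sequence model this line of work targets; the vocabulary size, layer widths, and classification task are illustrative assumptions, not the authors' configuration.

```python
# A small LSTM sequence classifier, e.g. for text sentiment (assumed task).
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64,
                 hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):         # tokens: (batch, seq_len) int ids
        x = self.embed(tokens)         # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])      # logits: (batch, num_classes)

model = LSTMClassifier()
logits = model(torch.randint(0, 1000, (8, 32)))  # 8 sequences of length 32
print(logits.shape)                              # torch.Size([8, 2])
```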