Search Results for author: Siddhartha Rao Kamalakara

Found 3 papers, 1 paper with code

Exploring Low Rank Training of Deep Neural Networks

no code implementations · 27 Sep 2022 · Siddhartha Rao Kamalakara, Acyr Locatelli, Bharat Venkitesh, Jimmy Ba, Yarin Gal, Aidan N. Gomez

Training deep neural networks in low rank, i.e. with factorised layers, is of particular interest to the community: it offers efficiency over unfactorised training in terms of both memory consumption and training time.
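
The snippet below is a minimal sketch of what a factorised layer looks like in JAX, assuming a plain dense layer factorised into two thin matrices U and V of a fixed rank; the function names, rank, and scaled-normal initialisation are illustrative choices, not the paper's prescriptions.

```python
import jax
import jax.numpy as jnp

def init_low_rank_dense(key, d_in, d_out, rank):
    # Replace one d_in x d_out weight matrix with two thin factors U and V,
    # cutting parameters from d_in * d_out down to rank * (d_in + d_out).
    k1, k2 = jax.random.split(key)
    u = jax.random.normal(k1, (d_in, rank)) / jnp.sqrt(d_in)
    v = jax.random.normal(k2, (rank, d_out)) / jnp.sqrt(rank)
    return u, v

def low_rank_dense(params, x):
    u, v = params
    # Two skinny matmuls in place of one dense one: (x @ U) @ V.
    return (x @ u) @ v

params = init_low_rank_dense(jax.random.PRNGKey(0), 512, 512, 32)
y = low_rank_dense(params, jnp.ones((4, 512)))  # shape (4, 512)
```

With rank 32, this layer stores 32 * (512 + 512) = 32,768 parameters instead of 512 * 512 = 262,144, which is where the memory and training-time savings come from.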

Scalable Training of Language Models using JAX pjit and TPUv4

no code implementations · 13 Apr 2022 · Joanna Yoo, Kuba Perlin, Siddhartha Rao Kamalakara, João G. M. Araújo

Modern large language models require distributed training strategies due to their size.
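
As a rough illustration of the pjit programming model the title refers to, the sketch below shards a toy matmul's batch dimension across a one-dimensional device mesh; the mesh axis name, shapes, and partition specs are assumptions for the example, and newer JAX releases fold pjit's functionality into jax.jit.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, PartitionSpec as P
from jax.experimental.pjit import pjit

# Arrange all visible devices into a 1D mesh with axis name "data"
# (a single-device CPU run also works, just without real parallelism).
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

# Shard the input's batch dimension across "data", replicate the weights,
# and let pjit compile the function for that layout.
sharded_matmul = pjit(
    lambda x, w: jnp.dot(x, w),
    in_shardings=(P("data", None), P(None, None)),
    out_shardings=P("data", None),
)

with mesh:
    y = sharded_matmul(jnp.ones((8, 128)), jnp.ones((128, 256)))
print(y.shape)  # (8, 256)
```

The same mechanism scales up by adding a "model" axis to the mesh and partitioning weight matrices as well as activations, which is the general pattern for distributing models too large for a single accelerator.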

Learning Sparse Networks Using Targeted Dropout

2 code implementations · 31 May 2019 · Aidan N. Gomez, Ivan Zhang, Siddhartha Rao Kamalakara, Divyam Madaan, Kevin Swersky, Yarin Gal, Geoffrey E. Hinton

Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights.

Network Pruning · Neural Network Compression
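
A hedged sketch of the selection step described in the abstract, assuming weight magnitude is the self-reinforcing sparsity criterion; the gamma (targeting proportion) and alpha (drop rate) parameter names are chosen here for illustration.

```python
import jax
import jax.numpy as jnp

def targeted_dropout(key, w, gamma=0.5, alpha=0.5):
    # Target the gamma fraction of weights with the smallest magnitude;
    # dropping these is "self-reinforcing" because the network learns to
    # rely on them less, which keeps their magnitudes small.
    k = int(gamma * w.size)
    threshold = jnp.sort(jnp.abs(w).ravel())[k]
    targeted = jnp.abs(w) < threshold
    # Independently drop each targeted weight with probability alpha;
    # gradients are then computed for the surviving weights.
    drop = targeted & (jax.random.uniform(key, w.shape) < alpha)
    return jnp.where(drop, 0.0, w)

w = jax.random.normal(jax.random.PRNGKey(0), (128, 128))
w_masked = targeted_dropout(jax.random.PRNGKey(1), w)
```

The point of targeting low-magnitude weights is that after training they can be pruned outright at little cost, which is what connects the method to the Network Pruning and Neural Network Compression tasks tagged above.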
