Search Results for author: Dharma Teja Vooturi

Found 5 papers, 0 papers with code

AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

no code implementations14 Apr 2023 Abhisek Kundu, Naveen K. Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey

We integrated Gradient Annealing (GA) with the latest learnable pruning methods to create an automated sparse training algorithm called AutoSparse, which achieves better accuracy and/or greater training/inference FLOPS reduction than existing learnable pruning methods for sparse ResNet50 and MobileNetV1 on ImageNet-1K. For ResNet50 on ImageNet at 80% sparsity, AutoSparse achieves a (2x, 7x) reduction in (training, inference) FLOPS.
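No code accompanies this entry, but the core idea of annealing the gradient that reaches pruned weights can be sketched as a custom autograd function. The following is a minimal PyTorch sketch under assumed details (a fixed magnitude threshold and a cosine-decayed scale alpha); the class and function names are illustrative, not the paper's implementation.

```python
import math
import torch

class AnnealedThresholdPruning(torch.autograd.Function):
    """Magnitude pruning whose backward pass scales the gradient of pruned
    weights by alpha (gradient annealing); alpha is decayed toward 0 over training."""

    @staticmethod
    def forward(ctx, weight, threshold, alpha):
        mask = (weight.abs() > threshold).to(weight.dtype)
        ctx.save_for_backward(mask)
        ctx.alpha = alpha
        return weight * mask

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        # Surviving weights receive the full gradient; pruned weights receive an
        # annealed (down-scaled) gradient so they can still recover early in training.
        grad_weight = grad_output * (mask + ctx.alpha * (1.0 - mask))
        return grad_weight, None, None  # no gradients for threshold / alpha in this sketch

def annealed_alpha(step, total_steps, alpha0=1.0):
    """Cosine decay of the pruned-weight gradient scale from alpha0 down to 0."""
    return alpha0 * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

# Usage inside a training step (the threshold is kept fixed in this sketch):
# w_sparse = AnnealedThresholdPruning.apply(layer.weight, 0.01,
#                                           annealed_alpha(step, total_steps))
```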

Ramanujan Bipartite Graph Products for Efficient Block Sparse Neural Networks

no code implementations24 Jun 2020 Dharma Teja Vooturi, Girish Varma, Kishore Kothapalli

We also propose to use products of Ramanujan graphs, which give the best connectivity for a given level of sparsity (a mask-construction sketch follows this entry).

Image Classification
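There is no official code for this entry; the sketch below only illustrates how a block-sparse mask could be built from a product of regular bipartite graphs and checked against the bipartite Ramanujan bound on singular values. The permutation-sum construction and the function names are assumptions for illustration, not the paper's exact RBGP construction.

```python
import numpy as np

def random_regular_biadjacency(n, d, rng):
    """Biadjacency matrix of an (approximately) d-regular bipartite graph on n+n
    vertices, formed by summing d random permutation matrices (overlaps would
    lower the degree; a real implementation would resample to avoid them)."""
    B = np.zeros((n, n), dtype=np.int8)
    for _ in range(d):
        B[np.arange(n), rng.permutation(n)] = 1
    return B

def is_ramanujan(B, d):
    """Bipartite Ramanujan check: every nontrivial singular value of the
    biadjacency matrix is at most 2*sqrt(d-1) (the trivial one equals d)."""
    s = np.linalg.svd(B.astype(np.float64), compute_uv=False)
    return bool(s[1] <= 2.0 * np.sqrt(d - 1) + 1e-9)

def block_sparse_mask(B_outer, B_inner):
    """Graph product realized as a Kronecker product: B_outer picks which blocks
    of the weight matrix are nonzero, B_inner gives the pattern inside each block."""
    return np.kron(B_outer, B_inner).astype(np.float32)

# rng = np.random.default_rng(0)
# B1 = random_regular_biadjacency(8, 3, rng)
# B2 = random_regular_biadjacency(16, 4, rng)
# mask = block_sparse_mask(B1, B2)   # shape (128, 128)
```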

Hierarchical Block Sparse Neural Networks

no code implementations10 Aug 2018 Dharma Teja Vooturi, Dheevatsa Mudigere, Sasikanth Avancha

In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of sparse neural networks called HBsNN (Hierarchical Block sparse Neural Networks).
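No code accompanies this entry either; below is a minimal NumPy sketch of a generic two-level (hierarchical block) sparsity mask, where a coarse mask selects blocks and the surviving blocks are further sparsified element-wise. The function name and the random two-level scheme are illustrative assumptions, not the HBsNN construction itself.

```python
import numpy as np

def hierarchical_block_mask(rows, cols, block, block_density, inner_density, rng):
    """Two-level sparsity mask: a coarse mask chooses which (block x block) tiles
    are kept, and kept tiles are themselves sparsified element-wise."""
    assert rows % block == 0 and cols % block == 0
    br, bc = rows // block, cols // block
    coarse = rng.random((br, bc)) < block_density      # which tiles survive
    fine = rng.random((rows, cols)) < inner_density     # element pattern inside tiles
    mask = np.kron(coarse, np.ones((block, block), dtype=bool)) & fine
    return mask.astype(np.float32)

# rng = np.random.default_rng(0)
# mask = hierarchical_block_mask(64, 64, block=8, block_density=0.5,
#                                inner_density=0.5, rng=rng)
```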

Efficient Inferencing of Compressed Deep Neural Networks

no code implementations1 Nov 2017 Dharma Teja Vooturi, Saurabh Goyal, Anamitra R. Choudhury, Yogish Sabharwal, Ashish Verma

The large number of weights in deep neural networks makes the models difficult to deploy in low-memory environments such as mobile phones, IoT edge devices, and "inferencing as a service" environments on the cloud (a quantization sketch follows this entry).

Quantization
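As a companion to the "Quantization" tag, here is a minimal NumPy sketch of symmetric per-tensor INT8 weight quantization with a dequantizing linear layer for inference; it is a generic illustration under assumed details, not the compression scheme from the paper.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: store int8 weights plus one fp32 scale."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_linear(x, q_w, scale, bias=None):
    """Inference-time linear layer on the compressed weights. A real kernel would
    accumulate int8 products in int32; floats are used here for clarity."""
    y = (x @ q_w.astype(np.float32).T) * scale
    return y if bias is None else y + bias

# w = np.random.randn(128, 256).astype(np.float32)
# q_w, scale = quantize_int8(w)          # roughly 4x smaller weight storage
# y = int8_linear(np.random.randn(1, 256).astype(np.float32), q_w, scale)
```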
