Search Results for author: Abhisek Kundu

Found 11 papers, 0 papers with code

K-TanH: Efficient TanH For Deep Learning

no code implementations17 Sep 2019 Abhisek Kundu, Alex Heinecke, Dhiraj Kalamkar, Sudarshan Srinivasan, Eric C. Qin, Naveen K. Mellempudi, Dipankar Das, Kunal Banerjee, Bharat Kaul, Pradeep Dubey

We propose K-TanH, a novel, highly accurate, hardware-efficient approximation of the popular activation function TanH for deep learning.
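The abstract describes a hardware-efficient approximation of TanH. As a rough illustration, here is a minimal piecewise-linear, table-based tanh approximation; note this is an assumed sketch of the general idea, not the paper's actual K-TanH scheme (which derives table indices from the floating-point bit pattern), and the segment count and range are illustrative choices.

```python
import numpy as np

def approx_tanh(x, num_segments=16, x_max=4.0):
    """Piecewise-linear tanh approximation via a small lookup table.

    Hypothetical sketch: a uniform segmentation of [0, x_max], with
    symmetry used for negative inputs and saturation beyond x_max.
    """
    # Precompute per-segment slope and intercept from exact tanh values.
    edges = np.linspace(0.0, x_max, num_segments + 1)
    y = np.tanh(edges)
    slopes = np.diff(y) / np.diff(edges)
    intercepts = y[:-1] - slopes * edges[:-1]

    # Clamp |x| into range, pick a segment, evaluate the linear piece.
    ax = np.minimum(np.abs(x), x_max - 1e-9)
    idx = np.minimum((ax / (x_max / num_segments)).astype(int),
                     num_segments - 1)
    out = slopes[idx] * ax + intercepts[idx]
    return np.sign(x) * out
```

With 16 segments the worst-case error against `np.tanh` is well under 1%, which conveys why a tiny lookup table can stand in for the transcendental function on constrained hardware.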

Ternary Residual Networks

no code implementations15 Jul 2017 Abhisek Kundu, Kunal Banerjee, Naveen Mellempudi, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey

Aided by such an elegant trade-off between accuracy and compute, the 8-2 model (8-bit activations, ternary weights), enhanced by ternary residual edges, turns out to be sophisticated enough to achieve very high accuracy ($\sim 1\%$ drop from our FP-32 baseline), despite a $\sim 1.6\times$ reduction in model size, a $\sim 26\times$ reduction in the number of multiplications, and a potential $\sim 2\times$ power-performance gain compared to the 8-8 representation, on the state-of-the-art deep network ResNet-101 pre-trained on the ImageNet dataset.

Ternary Neural Networks with Fine-Grained Quantization

no code implementations2 May 2017 Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey

We address this by fine-tuning ResNet-50 with 8-bit activations and ternary weights at $N=64$, improving the Top-1 accuracy to within $4\%$ of the full-precision result with $<30\%$ additional training overhead.
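The abstract's $N=64$ refers to fine-grained quantization, where each group of $N$ weights gets its own threshold and scale. The sketch below illustrates group-wise ternarization under that assumption; the `t=0.7` threshold heuristic is a common choice from the ternary-weight literature, not necessarily the paper's exact rule.

```python
import numpy as np

def ternarize_groups(w, N=64, t=0.7):
    """Ternarize a flat weight vector in groups of N elements.

    Sketch of fine-grained ternary quantization: each group gets its
    own threshold delta = t * mean(|w_g|) and its own scale alpha
    (mean magnitude of the surviving weights in the group).
    """
    w = np.asarray(w, dtype=np.float64)
    pad = (-len(w)) % N
    g = np.pad(w, (0, pad)).reshape(-1, N)

    delta = t * np.mean(np.abs(g), axis=1, keepdims=True)
    mask = np.abs(g) > delta

    # Per-group scale: mean magnitude of the weights that survive.
    sums = np.where(mask, np.abs(g), 0.0).sum(axis=1)
    counts = mask.sum(axis=1)
    alpha = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

    q = alpha[:, None] * np.sign(g) * mask
    return q.reshape(-1)[:len(w)]
```

Smaller $N$ tracks the local weight distribution more closely (higher accuracy) at the cost of storing more scale factors, which is the trade-off the abstract's $N=64$ setting navigates.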

Quantization

Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point

no code implementations31 Jan 2017 Naveen Mellempudi, Abhisek Kundu, Dipankar Das, Dheevatsa Mudigere, Bharat Kaul

We propose a cluster-based quantization method to convert pre-trained full precision weights into ternary weights with minimal impact on the accuracy.
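As a loose illustration of converting full-precision weights to ternary values by clustering, here is a one-dimensional k-means sketch with three centroids and the middle centroid pinned at zero. This is an assumed construction for illustration only; the paper's cluster-based method and its symmetry constraints may differ.

```python
import numpy as np

def cluster_ternarize(w, iters=20):
    """Cluster-based ternarization sketch.

    Runs 1-D k-means with three centroids, keeping the middle centroid
    pinned to zero so each weight maps to one of {c_neg, 0, c_pos}.
    """
    w = np.asarray(w, dtype=np.float64)
    c = np.array([w.min(), 0.0, w.max()])  # initial centroids
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = np.argmin(np.abs(w[:, None] - c[None, :]), axis=1)
        # Update only the outer centroids; the middle stays at zero.
        for k in (0, 2):
            if np.any(assign == k):
                c[k] = w[assign == k].mean()
    return c[assign]
```

Because each weight is replaced by its cluster centroid, the quantized tensor takes at most three distinct values, matching the ternary constraint with minimal per-cluster squared error.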

Quantization

A Randomized Rounding Algorithm for Sparse PCA

no code implementations13 Aug 2015 Kimon Fountoulakis, Abhisek Kundu, Eugenia-Maria Kontopoulou, Petros Drineas

We present and analyze a simple, two-step algorithm to approximate the optimal solution of the sparse PCA problem.
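The two-step structure (solve a relaxation, then round to a sparse vector) can be sketched as follows. Caveat: this uses the top singular vector as a stand-in for the relaxation solution and samples a support proportionally to squared magnitudes; the paper's actual relaxation and rounding distribution differ in the details.

```python
import numpy as np

def sparse_pca_round(A, k, rng=None):
    """Two-step sparse-PCA sketch.

    Step 1: take the top right singular vector of A as a surrogate for
    the relaxed (dense) solution.
    Step 2: randomized rounding -- sample k coordinates with probability
    proportional to v_i^2, zero out the rest, and renormalize.
    """
    rng = np.random.default_rng(rng)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    v = Vt[0]

    p = v ** 2 / np.sum(v ** 2)
    support = rng.choice(len(v), size=k, replace=False, p=p)

    x = np.zeros_like(v)
    x[support] = v[support]
    return x / np.linalg.norm(x)
```

The rounding step biases the support toward the coordinates that carry most of the relaxed solution's mass, which is why the sparse vector retains most of the explained variance.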

Approximating Sparse PCA from Incomplete Data

no code implementations NeurIPS 2015 Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail

We show that for a wide class of optimization problems, if the sketch is close (in the spectral norm) to the original data matrix, then one can recover a near optimal solution to the optimization problem by using the sketch.
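The claim can be demonstrated numerically: when an element-wise sampled sketch is close to $A$ in spectral norm, its top singular vector nearly coincides with that of $A$. The uniform keep-probability scheme below is an illustrative unbiased sketch, not the paper's sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# A matrix with a strong rank-one signal plus small noise.
u = rng.standard_normal(100); u /= np.linalg.norm(u)
w = rng.standard_normal(30);  w /= np.linalg.norm(w)
A = 20.0 * np.outer(u, w) + 0.1 * rng.standard_normal((100, 30))

# Element-wise sketch: keep each entry with probability p, rescale by
# 1/p so the sketch is an unbiased estimate of A.
p = 0.8
mask = rng.random(A.shape) < p
A_sketch = np.where(mask, A / p, 0.0)

gap = np.linalg.norm(A - A_sketch, 2)       # spectral-norm error
v = np.linalg.svd(A)[2][0]                  # top right singular vector of A
v_hat = np.linalg.svd(A_sketch)[2][0]       # same, for the sketch
alignment = abs(v @ v_hat)                  # close to 1 when gap is small
```

Here `alignment` stays near 1 because the spectral-norm error of the sketch is small relative to the leading singular value, which is exactly the regime the abstract's recovery guarantee addresses.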

Recovering PCA from Hybrid-$(\ell_1,\ell_2)$ Sparse Sampling of Data Elements

no code implementations2 Mar 2015 Abhisek Kundu, Petros Drineas, Malik Magdon-Ismail

This paper addresses how well we can recover a data matrix when only given a few of its elements.

Identifying Influential Entries in a Matrix

no code implementations14 Oct 2013 Abhisek Kundu, Srinivas Nambirajan, Petros Drineas

For any matrix $A \in \mathbb{R}^{m \times n}$ of rank $\rho$, we present a probability distribution over the entries of $A$ (the element-wise leverage scores of equation (2)) that reveals the most influential entries in the matrix.
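One standard way to build a probability distribution over matrix entries from leverage scores is to mix the row leverage scores of $U$ with the column leverage scores of $V$ (where $A = U \Sigma V^T$). The sketch below assumes that construction; the exact normalization in the paper's equation (2) may differ.

```python
import numpy as np

def elementwise_leverage(A):
    """Probability distribution over the entries of A.

    Sketch: a uniform mixture of the row leverage scores of U and the
    column leverage scores of V, truncated to the numerical rank rho.
    Entries of the returned matrix are nonnegative and sum to 1.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rho = np.sum(s > 1e-12 * s[0])         # numerical rank
    U, Vt = U[:, :rho], Vt[:rho, :]
    m, n = A.shape

    row = np.sum(U ** 2, axis=1) / rho     # row leverage scores, sum to 1
    col = np.sum(Vt ** 2, axis=0) / rho    # column leverage scores, sum to 1

    # Mix: half the mass spread by row importance, half by column importance.
    P = 0.5 * (np.outer(row, np.ones(n) / n) + np.outer(np.ones(m) / m, col))
    return P
```

Sampling entries from such a distribution concentrates the sample on rows and columns that dominate the matrix's low-rank structure, which is the sense in which "influential" entries are revealed.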

Matrix Completion
