Search Results for author: Sarada Krithivasan

Found 3 papers, 0 papers with code

InterTrain: Accelerating DNN Training using Input Interpolation

no code implementations • 29 Sep 2021 • Sarada Krithivasan, Swagath Venkataramani, Sanchari Sen, Anand Raghunathan

This is because the efficacy of learning on interpolated inputs is reduced by the interference between the forward/backward propagation of their constituent inputs.
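Since no code is released, here is a minimal sketch of the general idea of training on interpolated inputs, in the style of mixup-like pairing. The fixed interpolation coefficient, toy model, and random data below are illustrative assumptions, not the paper's InterTrain procedure.

```python
# Sketch: train on interpolated pairs of inputs so each forward/backward pass
# covers two samples. Model, data, and lam are placeholders (assumed), not the
# paper's method.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 3, 32, 32)        # a batch of inputs
y = torch.randint(0, 10, (64,))       # their labels

# Pair up inputs and interpolate, halving the number of passes per batch.
lam = 0.5                             # interpolation coefficient (assumed fixed)
x1, x2 = x[0::2], x[1::2]
y1, y2 = y[0::2], y[1::2]
x_mix = lam * x1 + (1 - lam) * x2

logits = model(x_mix)
# The loss mixes the constituent inputs' losses with the same coefficient;
# their gradients interfere within this single backward pass, which is the
# effect the excerpt above refers to.
loss = lam * loss_fn(logits, y1) + (1 - lam) * loss_fn(logits, y2)
opt.zero_grad()
loss.backward()
opt.step()
```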

Accelerating DNN Training through Selective Localized Learning

no code implementations • 1 Jan 2021 • Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan

The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs.
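A rough sketch of that idea is below: layers before a boundary receive a simple localized update, layers after it receive SGD, and the boundary shifts when the transition layer's weight updates shrink across epochs. The Hebbian-style local rule, learning rates, and the specific shift criterion are assumptions for illustration, not the paper's algorithm.

```python
# Sketch of selective localized learning with a shifting SGD/localized boundary.
import torch
import torch.nn as nn
import torch.nn.functional as F

layers = nn.ModuleList([nn.Linear(32, 32) for _ in range(6)])
boundary = 2                      # layers [0, boundary) use localized updates
prev_norm = None

for epoch in range(5):
    x, target = torch.randn(16, 32), torch.randn(16, 32)

    h = x
    for i, layer in enumerate(layers):
        h = torch.relu(layer(h))
        if i < boundary:
            h = h.detach()        # no backprop into localized layers

    F.mse_loss(h, target).backward()
    trans_norm = layers[boundary].weight.grad.norm().item()  # transition-layer update size

    with torch.no_grad():
        h_local = x
        for layer in layers[:boundary]:               # Hebbian-style local updates (assumed rule)
            out = torch.relu(layer(h_local))
            layer.weight += 1e-4 * out.t() @ h_local
            h_local = out
        for layer in layers[boundary:]:               # plain SGD updates
            for p in layer.parameters():
                if p.grad is not None:
                    p -= 1e-2 * p.grad
                    p.grad = None

    # Shrinking updates at the transition layer suggest it has stabilized, so
    # shift the boundary deeper and convert one more layer to localized learning.
    if prev_norm is not None and trans_norm < 0.9 * prev_norm:
        boundary = min(boundary + 1, len(layers) - 1)
    prev_norm = trans_norm
```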

Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks

no code implementations • 14 Jun 2020 • Sarada Krithivasan, Sanchari Sen, Anand Raghunathan

We also evaluate the impact of the attack on a sparsity-optimized DNN accelerator, demonstrating latency degradations of up to 1.59x, and study the performance of the attack on a sparsity-optimized general-purpose processor.

Computational Efficiency • Quantization
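The attack described above reduces activation sparsity so that sparsity-exploiting hardware does more work. A rough sketch of the input-space part of such an attack is below; the differentiable surrogate (sum of post-ReLU activations), step size, perturbation budget, and toy model are assumptions for illustration, and the hardware latency impact itself cannot be reproduced in a few lines of Python.

```python
# Sketch: perturb an input to increase the fraction of non-zero ReLU
# activations, the quantity sparsity-optimized hardware exploits.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())

def activation_density(x):
    """Fraction of non-zero activations across all ReLU outputs."""
    acts, h = [], x
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            acts.append(h)
    total = sum(a.numel() for a in acts)
    nonzero = sum((a > 0).sum().item() for a in acts)
    return nonzero / total

x = torch.rand(1, 3, 32, 32)
print("activation density before attack:", activation_density(x))

# Gradient ascent on a differentiable surrogate for the non-zero count:
# maximizing the sum of post-ReLU activations pushes activations away from zero.
x_adv = x.clone().requires_grad_(True)
for _ in range(10):
    h, surrogate = x_adv, 0.0
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            surrogate = surrogate + h.sum()
    grad, = torch.autograd.grad(surrogate, x_adv)
    with torch.no_grad():
        x_adv += 0.01 * grad.sign()   # small L-inf steps (assumed budget)
        x_adv.clamp_(0, 1)            # keep the input in a valid image range

print("activation density after attack:", activation_density(x_adv))
```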
