no code implementations • 29 Sep 2021 • Sarada Krithivasan, Swagath Venkataramani, Sanchari Sen, Anand Raghunathan
This is because the efficacy of learning on interpolated inputs is reduced by the interference between the forward/backward propagation of their constituent inputs.
no code implementations • 1 Jan 2021 • Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan
The trend in the weight updates made to the transition layer across epochs is used to determine how the boundary between SGD and localized updates is shifted in future epochs.
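The boundary-shifting heuristic described above might be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the use of L2 norms of weight updates, and the threshold are all assumptions introduced here.

```python
# Hypothetical sketch: shift the boundary between SGD-trained layers and
# layers receiving cheaper localized updates, based on the trend in the
# transition layer's weight-update magnitudes across epochs.
# All names and the threshold value are illustrative assumptions.

def shift_boundary(update_norms, boundary, num_layers, threshold=0.1):
    """update_norms: per-epoch norms of the transition layer's weight
    updates, most recent last. Returns the boundary for the next epoch."""
    if len(update_norms) < 2:
        return boundary  # not enough history to estimate a trend
    prev, curr = update_norms[-2], update_norms[-1]
    # Relative change in update magnitude between the last two epochs.
    trend = (curr - prev) / max(prev, 1e-12)
    if trend < -threshold:
        # Updates are shrinking: the transition layer is stabilizing,
        # so move the boundary forward and apply localized updates there.
        return min(boundary + 1, num_layers - 1)
    return boundary
```

Here a shrinking update norm at the transition layer is taken as the signal that the layer has stabilized enough to switch to localized updates; a flat or growing trend leaves the boundary in place.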
no code implementations • 14 Jun 2020 • Sarada Krithivasan, Sanchari Sen, Anand Raghunathan
We also evaluate the impact of the attack on a sparsity-optimized DNN accelerator, demonstrating latency degradations of up to 1.59x, and study the attack's performance on a sparsity-optimized general-purpose processor.