no code implementations • 12 Jul 2023 • Shalini Shrivastava, Vivek Saraswat, Gayatri Dash, Samyak Chakrabarty, Udayan Ganguly
Training deep neural networks (DNNs) is computationally intensive, but arrays of non-volatile memories such as Charge Trap Flash (CTF) can accelerate DNN operations using in-memory computing.
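The in-memory computing idea can be illustrated with a minimal simulation: weights are stored as device conductances in a crossbar, and a matrix-vector product is performed in one analog step via Ohm's law and Kirchhoff's current law. The sketch below is an illustrative assumption, not the paper's implementation; the function name, noise model, and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(G, v, noise_sigma=0.0):
    """Idealized crossbar matrix-vector multiply.

    G: (rows, cols) conductance matrix encoding the weights.
    v: (rows,) input voltage vector applied to the rows.
    Returns the column currents I = G^T v, optionally with
    Gaussian read noise on the conductances (hypothetical model).
    """
    G_read = G + noise_sigma * rng.standard_normal(G.shape)
    return G_read.T @ v  # currents sum along each column wire

# Map a 4x3 weight matrix to positive conductances, apply a 4-input vector.
G = rng.uniform(0.1, 1.0, size=(4, 3))
v = rng.uniform(-1.0, 1.0, size=4)

ideal = G.T @ v                      # digital reference result
analog = crossbar_mvm(G, v)          # noiseless analog readout
print(np.allclose(ideal, analog))    # → True
```

In hardware, the entire product is read out in a single cycle regardless of matrix size, which is the source of the claimed acceleration over digital multiply-accumulate loops.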
no code implementations • 9 Mar 2020 • Varun Bhatt, Shalini Shrivastava, Tanmay Chavan, Udayan Ganguly
The in-memory computing paradigm with emerging memory devices has recently been shown to be a promising way to accelerate deep learning.