Search Results for author: Maeesha Binte Hashem

Found 3 papers, 0 papers with code

ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency Transformation

No code implementations • 4 Sep 2023 • Nastaran Darabi, Maeesha Binte Hashem, Hongyi Pan, Ahmet Cetin, Wilfred Gomes, Amit Ranjan Trivedi

Moreover, our novel array micro-architecture enables adaptive stitching of cells column-wise and row-wise, thereby facilitating perfect parallelism in computations.

Tasks: Computational Efficiency, Model Compression

Towards Model-Size Agnostic, Compute-Free, Memorization-based Inference of Deep Learning

No code implementations • 14 Jul 2023 • Davide Giacomini, Maeesha Binte Hashem, Jeremiah Suarez, Swarup Bhunia, Amit Ranjan Trivedi

Specifically, our work capitalizes on the inference mechanism of the recurrent attention model (RAM), where only a small window of the input domain (a glimpse) is processed in each time step, and the outputs from multiple glimpses are combined through a hidden vector to determine the overall classification output of the problem.
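The glimpse-based inference described above can be sketched as a simple loop: encode each patch, fold it into a hidden vector, then classify from the final hidden state. This is a minimal toy illustration, not the paper's implementation; all names, shapes, and the fixed glimpse locations are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
IMG, GLIMPSE, HIDDEN, CLASSES = 28, 8, 32, 10
W_g = rng.standard_normal((GLIMPSE * GLIMPSE, HIDDEN)) * 0.01  # glimpse encoder
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.01             # recurrent weights
W_c = rng.standard_normal((HIDDEN, CLASSES)) * 0.01            # classifier head

def infer(image, locations):
    """RAM-style inference: one glimpse per time step, combined via a hidden vector."""
    h = np.zeros(HIDDEN)
    for (r, c) in locations:
        patch = image[r:r + GLIMPSE, c:c + GLIMPSE].ravel()  # small input window
        h = np.tanh(patch @ W_g + h @ W_h)                   # fold glimpse into hidden state
    logits = h @ W_c                                         # classify from final hidden vector
    return int(np.argmax(logits))

image = rng.standard_normal((IMG, IMG))
pred = infer(image, [(0, 0), (10, 10), (20, 20)])
assert 0 <= pred < CLASSES
```

In the actual model the glimpse locations are chosen adaptively by a learned policy; fixed locations are used here only to keep the sketch self-contained.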

Tasks: Bayesian Optimization, Memorization, +2

Memory-Immersed Collaborative Digitization for Area-Efficient Compute-in-Memory Deep Learning

No code implementations • 7 Jul 2023 • Shamma Nasrin, Maeesha Binte Hashem, Nastaran Darabi, Benjamin Parpillon, Farah Fahim, Wilfred Gomes, Amit Ranjan Trivedi

We discuss various networking configurations among CiM arrays where Flash, SA, and their hybrid digitization steps can be efficiently implemented using the proposed memory-immersed scheme.
