Search Results for author: Hajar Falahati

Found 2 papers, 0 papers with code

ORIGAMI: A Heterogeneous Split Architecture for In-Memory Acceleration of Learning

no code implementations • 30 Dec 2018 • Hajar Falahati, Pejman Lotfi-Kamran, Mohammad Sadrosadati, Hamid Sarbazi-Azad

To utilize the available bandwidth without violating the area and power budgets of the logic layer, ORIGAMI comes with a computation-splitting compiler that divides an ML algorithm between in-memory accelerators and an out-of-the-memory platform in a balanced way and with minimal inter-communication.
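The splitting idea in the abstract can be illustrated with a minimal sketch: given per-layer compute costs and the traffic between consecutive layers, pick a cut that balances the load between the in-memory side and the out-of-memory side while keeping cross-cut traffic low. The function name, cost model, and weighting below are illustrative assumptions, not the paper's actual compiler.

```python
# Hypothetical sketch of balanced computation splitting (not ORIGAMI's
# actual algorithm): layers [0, k) run on the in-memory accelerators,
# layers [k, n) on the out-of-the-memory platform.

def best_split(layer_costs, link_traffic, traffic_weight=1.0):
    """Return the split index minimizing load imbalance plus cut traffic."""
    n = len(layer_costs)
    total = sum(layer_costs)
    best_k, best_score = 0, float("inf")
    prefix = 0.0
    for k in range(n + 1):
        # The slower side dominates execution time, so penalize the max.
        imbalance = max(prefix, total - prefix)
        # Traffic crossing the cut between layer k-1 and layer k.
        cut = link_traffic[k - 1] if 0 < k < n else 0.0
        score = imbalance + traffic_weight * cut
        if score < best_score:
            best_k, best_score = k, score
        if k < n:
            prefix += layer_costs[k]
    return best_k

# Example: four layers; traffic[i] is the data moved between layers i and i+1.
costs = [4.0, 2.0, 3.0, 1.0]
traffic = [5.0, 0.5, 4.0]
print(best_split(costs, traffic))  # → 2: split where traffic is lightest
```

Cutting after layer 2 gives a 6/4 load split across the cheapest link (0.5), which a single-pass scan like this finds in O(n).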

GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks

no code implementations • 10 May 2018 • Amir Yazdanbakhsh, Hajar Falahati, Philip J. Wolfe, Kambiz Samadi, Nam Sung Kim, Hadi Esmaeilzadeh

Even though this operator contains a convolution stage, the inserted zeros lead to underutilization of the compute resources when a conventional convolution accelerator is employed.
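The underutilization the abstract describes can be made concrete with a small sketch: a 1-D transposed convolution upsamples its input by inserting zeros between elements, so a dense convolution engine sliding over the upsampled signal spends many of its multiply-accumulates on those zeros. The helper names and the 1-D setting are illustrative assumptions, not GANAX's model.

```python
# Sketch (assumed, not from the paper) of wasted work in transposed
# convolution on a conventional dense convolution accelerator.

def zero_insert(x, stride):
    """Insert stride-1 zeros between consecutive elements (1-D upsampling)."""
    out = []
    for i, v in enumerate(x):
        out.append(v)
        if i != len(x) - 1:
            out.extend([0] * (stride - 1))
    return out

def wasted_mac_fraction(x, kernel_len, stride):
    """Fraction of MACs that multiply an inserted zero."""
    up = zero_insert(x, stride)
    total = zeros = 0
    # Slide a dense kernel over the upsampled input, as a conventional
    # accelerator would, counting multiplications against zeros.
    for start in range(len(up) - kernel_len + 1):
        for k in range(kernel_len):
            total += 1
            if up[start + k] == 0:
                zeros += 1
    return zeros / total

# All-nonzero input, so every zero MAC comes from the insertion itself.
print(wasted_mac_fraction([1, 2, 3, 4], kernel_len=3, stride=2))  # → 7/15 ≈ 0.47
```

Even in this tiny stride-2 case nearly half the MACs are wasted, which is the structured sparsity that motivates a unified MIMD-SIMD design rather than a fixed dense dataflow.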
