no code implementations • 3 Sep 2021 • T. Patrick Xiao, Ben Feinberg, Christopher H. Bennett, Venkatraman Prabhakar, Prashant Saxena, Vineet Agrawal, Sapan Agarwal, Matthew J. Marinella
Specialized accelerators have recently garnered attention as a method to reduce the power consumption of neural network inference.
no code implementations • 2 Apr 2020 • Christopher H. Bennett, T. Patrick Xiao, Ryan Dellana, Vineet Agrawal, Ben Feinberg, Venkatraman Prabhakar, Krishnaswamy Ramkumar, Long Hinh, Swatilekha Saha, Vijay Raghavan, Ramesh Chettuvetty, Sapan Agarwal, Matthew J. Marinella
Non-volatile memory arrays can deploy pre-trained neural network models for edge inference.
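The core idea behind such deployments is that a pre-trained weight matrix is programmed into memory-cell conductances, and inference reduces to an analog matrix-vector multiply performed inside the array. A minimal NumPy sketch of that mapping follows; the differential conductance pair, the conductance range, and the quantization bit width are illustrative assumptions, not the scheme used in the paper.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not the paper's implementation):
# map signed weights onto quantized differential conductance pairs, then
# emulate the analog matrix-vector multiply the crossbar array performs.
rng = np.random.default_rng(0)

G_MIN, G_MAX = 1e-6, 1e-4  # assumed programmable conductance range (siemens)

def to_conductances(weights, bits=8):
    """Map signed weights to two quantized conductance matrices (pos/neg)."""
    w_max = np.abs(weights).max()
    levels = 2**bits - 1
    # Positive and negative weight parts go to separate columns
    # (a common differential-pair encoding).
    g_pos = np.clip(weights, 0, None) / w_max
    g_neg = np.clip(-weights, 0, None) / w_max
    quantize = lambda g: np.round(g * levels) / levels
    scale = G_MAX - G_MIN
    return G_MIN + quantize(g_pos) * scale, G_MIN + quantize(g_neg) * scale, w_max

def crossbar_mvm(x, g_pos, g_neg, w_max, noise=0.0):
    """Analog MVM: currents sum along columns (Kirchhoff's current law);
    the differential pair is subtracted and rescaled to weight units."""
    currents = x @ (g_pos - g_neg)
    currents += noise * rng.standard_normal(currents.shape) * (G_MAX - G_MIN)
    return currents * w_max / (G_MAX - G_MIN)

# Usage: compare the emulated analog product against the ideal one.
W = rng.standard_normal((8, 4))
x = rng.standard_normal(8)
gp, gn, w_max = to_conductances(W)
print(np.max(np.abs(crossbar_mvm(x, gp, gn, w_max) - x @ W)))
```

With 8-bit quantization the emulated product tracks the ideal one closely; lowering `bits` or raising `noise` shows how device non-idealities erode inference accuracy, which is the trade-off these edge-deployment studies examine.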
no code implementations • 25 Feb 2020 • Christopher H. Bennett, Ryan Dellana, T. Patrick Xiao, Ben Feinberg, Sapan Agarwal, Suma Cardwell, Matthew J. Marinella, William Severa, Brad Aimone
Neuromorphic-style inference only works well if limited hardware resources are used to their fullest, e.g., if accuracy continues to scale with parameter count and model complexity in the face of potential disturbances.