14 Sep 2020 • Philip Colangelo, Oren Segal, Alex Speicher, Martin Margala
In this work, we develop and test a general multilayer perceptron (MLP) flow that can take arbitrary datasets as input and automatically produce optimized NNAs and hardware designs.
6 Mar 2019 • Philip Colangelo, Oren Segal, Alexander Speicher, Martin Margala
Mathematical theory shows us that multilayer feedforward Artificial Neural Networks (ANNs) are universal function approximators, capable of approximating any measurable function to any desired degree of accuracy.
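The universal-approximation property can be illustrated with a minimal sketch: a single-hidden-layer tanh network fit to sin(x) by plain gradient descent. All sizes and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the authors' code): a one-hidden-
# layer tanh MLP trained with batch gradient descent to approximate sin(x).
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(X)

H = 32                                   # hidden units (assumed)
W1 = rng.normal(0, 1, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))
b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    return h, h @ W2 + b2                # hidden layer, network output

_, pred0 = forward(X)
mse0 = np.mean((pred0 - y) ** 2)         # error before training

for _ in range(2000):
    h, pred = forward(X)
    err = pred - y                       # gradient of MSE w.r.t. output
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = np.mean((pred - y) ** 2)           # error after training
```

With enough hidden units, the fit error can be driven arbitrarily low on a compact domain, which is the content of the approximation theorem the abstract invokes.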
no code implementations • 12 Jun 2018 • Philip Colangelo, Nasibeh Nasiri, Asit Mishra, Eriko Nurvitadhi, Martin Margala, Kevin Nealis
This yields a trade-off between throughput and accuracy that can be tailored to different networks through various combinations of activation and weight data widths.
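The precision/accuracy side of that trade-off can be sketched with uniform symmetric quantization at different bit widths. The function and scheme below are illustrative assumptions, not the paper's actual hardware mapping: narrower widths cut datapath cost (raising throughput on fixed hardware) at the price of larger quantization error.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric fake-quantization of x to a signed bit width
    (an assumed illustrative scheme, not the paper's)."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(x)) / qmax      # map the value range onto the grid
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                      # dequantized values

rng = np.random.default_rng(1)
w = rng.normal(0, 1, 10_000)              # stand-in weight/activation tensor

# Narrower data widths -> coarser grid -> larger mean-squared error.
err4 = np.mean((w - quantize(w, 4)) ** 2)
err8 = np.mean((w - quantize(w, 8)) ** 2)
```

Comparing `err4` and `err8` shows why per-network choices of activation and weight widths let a design trade accuracy against throughput.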
Subjects: Distributed, Parallel, and Cluster Computing; Hardware Architecture