13 Jan 2023 • Patrick Bowen, Guy Regev, Nir Regev, Bruno Pedroni, Edward Hanson, Yiran Chen
This paper analyzes the fundamental limits on energy efficiency of digital and analog in-memory computing architectures, and compares their performance to that of single-instruction, single-data (scalar) machines, specifically in the context of machine inference.
15 Jun 2017 • Hesham Mostafa, Bruno Pedroni, Sadique Sheik, Gert Cauwenberghs
In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation.
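The defining feature of pipelined backpropagation is that forward and backward passes of successive examples overlap, so a gradient is applied some steps after the weights it was computed with. A minimal sketch of that delayed-update effect on a toy linear model (illustrative only, not the paper's exact scheme; `delay`, `lr`, and the linear target are assumptions):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

def train_pipelined(steps=500, delay=3, lr=0.05):
    """Toy sketch of the delayed-gradient effect in pipelined backpropagation.

    Each gradient is computed with the weights current at its forward pass,
    but applied `delay` steps later, as when the forward and backward passes
    of successive examples overlap in a hardware pipeline.
    """
    w = np.zeros(2)
    true_w = np.array([1.0, -2.0])  # hypothetical target for a linear model
    pending = deque()               # gradients waiting for their update slot
    for _ in range(steps):
        x = rng.standard_normal(2)
        y = true_w @ x
        grad = (w @ x - y) * x      # computed with the current (soon stale) w
        pending.append(grad)
        if len(pending) > delay:
            w -= lr * pending.popleft()  # applied several steps late
    return w
```

With a small learning rate the staleness introduced by the pipeline leaves convergence intact, which is what makes the scheme attractive for hardware.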
5 Nov 2013 • Emre Neftci, Srinjoy Das, Bruno Pedroni, Kenneth Kreutz-Delgado, Gert Cauwenberghs
However, the traditional RBM architecture and the commonly used training algorithm, Contrastive Divergence (CD), are based on discrete updates and exact arithmetic, which do not map directly onto a dynamical neural substrate.