no code implementations • 3 Jun 2022 • Wilfried Haensch, Anand Raghunathan, Kaushik Roy, Bhaswar Chakrabarti, Charudatta M. Phatak, Cheng Wang, Supratik Guha
In the second part, we review what is known about the different new non-volatile memory materials and devices suited for compute in-memory, and discuss the outlook and challenges.
no code implementations • 31 Jan 2022 • Murat Onen, Tayfun Gokmen, Teodor K. Todorov, Tomasz Nowicki, Jesus A. del Alamo, John Rozen, Wilfried Haensch, Seyoung Kim
Analog crossbar arrays comprising programmable nonvolatile resistors are under intense investigation for acceleration of deep neural network training.
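As a minimal sketch of why such arrays are attractive for this: each cross-point conductance stores one weight, and Ohm's and Kirchhoff's laws evaluate a full matrix-vector product in a single analog step. The NumPy snippet below mimics that behavior under idealized assumptions (noise-free devices, illustrative names); it is not the paper's device model.

import numpy as np

# Idealized crossbar: conductance G[i, j] at the crossing of row i and
# column j stores one weight. Driving the rows with voltages V produces
# column currents I_j = sum_i G[i, j] * V[i] (Ohm's law per device,
# Kirchhoff's current law per column), i.e. a matrix-vector product in
# one analog step instead of O(n*m) digital multiply-accumulates.
def crossbar_matvec(G, V):
    return G.T @ V  # one output current per column

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(128, 64))  # device conductances (arb. units)
V = rng.normal(size=128)                   # input voltages
I = crossbar_matvec(G, V)                  # 64 column currents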
no code implementations • 17 Sep 2019 • Tayfun Gokmen, Wilfried Haensch
Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads using stochastic gradient descent (SGD) with the backpropagation (BP) algorithm.
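To illustrate the parallelism being exploited, here is a hedged NumPy sketch of one SGD step on such an array: the forward and backward passes are analog matrix-vector products, and the weight update is a rank-one outer product that the whole array applies in place. The function name, the squared-error loss, and the ideal noise-free update are illustrative assumptions, not the paper's exact scheme.

import numpy as np

def rpu_sgd_step(W, x, target, lr=0.1):
    # Forward pass: analog matrix-vector product on the array.
    y = W @ x
    # Backward pass: error propagated through the transposed array.
    delta = y - target            # gradient of a squared-error loss
    grad_x = W.T @ delta          # would feed the preceding layer
    # Update: rank-one outer product applied to all cross-points in
    # parallel, without reading individual weights back out.
    W -= lr * np.outer(delta, x)
    return W, grad_x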
no code implementations • 6 Jun 2019 • Malte J. Rasch, Tayfun Gokmen, Wilfried Haensch
Accelerating training of artificial neural networks (ANN) with analog resistive crossbar arrays is a promising idea.
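One reason this remains only "promising" is that real devices update imprecisely. A common way to study this in simulation, sketched below in NumPy, is to replace the ideal SGD update with a noisy, saturating device update; the bounds, noise model, and names here are illustrative assumptions, not the paper's exact setup.

import numpy as np

def device_update(W, dW, w_min=-1.0, w_max=1.0, noise_std=0.05, rng=None):
    # Each cross-point changes conductance by an imprecise amount
    # (multiplicative cycle-to-cycle noise) and saturates at its
    # physical conductance bounds.
    rng = rng or np.random.default_rng()
    noisy_dW = dW * (1.0 + noise_std * rng.standard_normal(dW.shape))
    return np.clip(W + noisy_dW, w_min, w_max)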
no code implementations • 3 Jul 2018 • Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch
Analog arrays are a promising emerging hardware technology with the potential to drastically speed up deep learning.
no code implementations • 1 Jun 2018 • Tayfun Gokmen, Malte Rasch, Wilfried Haensch
In our previous work we have shown that resistive cross-point devices, so-called Resistive Processing Unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks.
no code implementations • 22 May 2017 • Tayfun Gokmen, O. Murat Onen, Wilfried Haensch
Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs).
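A standard way to run a convolution on a crossbar is to unroll input patches into columns (im2col) so the layer becomes one large matrix-matrix product, with the filter matrix living on the array. The NumPy sketch below (stride 1, no padding, illustrative names) shows this mapping; whether it matches the paper's exact scheme is an assumption.

import numpy as np

def im2col_conv(x, K):
    # x: input of shape (C, H, W); K: filters of shape (F, C, k, k).
    # Unroll each k x k input patch into one column so the convolution
    # becomes a single matrix-matrix product a crossbar can execute.
    C, H, W = x.shape
    F, _, k, _ = K.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.stack([
        x[:, i:i + k, j:j + k].reshape(-1)
        for i in range(out_h) for j in range(out_w)
    ], axis=1)                      # shape (C*k*k, out_h*out_w)
    out = K.reshape(F, -1) @ cols   # crossbar-style matrix multiply
    return out.reshape(F, out_h, out_w)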