Search Results for author: Wilfried Haensch

Found 7 papers, 0 papers with code

A Co-design view of Compute in-Memory with Non-Volatile Elements for Neural Networks

no code implementations • 3 Jun 2022 • Wilfried Haensch, Anand Raghunathan, Kaushik Roy, Bhaswar Chakrabarti, Charudatta M. Phatak, Cheng Wang, Supratik Guha

In the second part, we review what is known about the different new non-volatile memory materials and devices suited for compute in-memory, and discuss the outlook and challenges.

Neural Network Training with Asymmetric Crosspoint Elements

no code implementations • 31 Jan 2022 • Murat Onen, Tayfun Gokmen, Teodor K. Todorov, Tomasz Nowicki, Jesus A. del Alamo, John Rozen, Wilfried Haensch, Seyoung Kim

Analog crossbar arrays comprising programmable nonvolatile resistors are under intense investigation for acceleration of deep neural network training.
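
To make the asymmetry problem named in the title concrete, here is a minimal Python sketch (my own illustration, not the authors' code; the pulse function and step sizes are hypothetical) of how unequal potentiation and depression steps make a device's conductance drift even when the ideal updates sum to zero:

def pulse(g, direction, up_step=0.011, down_step=0.009):
    # Hypothetical asymmetric device: "up" and "down" pulses change the
    # conductance by different amounts.
    return g + up_step if direction > 0 else g - down_step

g = 0.5
for _ in range(1000):
    g = pulse(g, +1)   # one potentiation pulse ...
    g = pulse(g, -1)   # ... cancelled by one depression pulse, ideally
print(g)               # ends at 2.5, not 0.5: the asymmetry biases SGD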

Algorithm for Training Neural Networks on Resistive Device Arrays

no code implementations • 17 Sep 2019 • Tayfun Gokmen, Wilfried Haensch

Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads using stochastic gradient descent (SGD) and the backpropagation (BP) algorithm.
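
The array-level operation behind this claim is SGD's rank-1 outer-product weight update, which a cross-point array can apply in one parallel step by pulsing its rows and columns simultaneously. A minimal NumPy sketch of the mathematics (an illustration under my own assumptions, not the paper's algorithm):

import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 8))   # weights stored as conductances
x = rng.standard_normal(8)              # forward activation on the rows
delta = rng.standard_normal(4)          # backpropagated error on the columns
lr = 0.01

# SGD's update is a rank-1 outer product; on a resistive array it is
# applied in parallel rather than element by element.
W += lr * np.outer(delta, x)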

Training large-scale ANNs on simulated resistive crossbar arrays

no code implementations • 6 Jun 2019 • Malte J. Rasch, Tayfun Gokmen, Wilfried Haensch

Accelerating the training of artificial neural networks (ANNs) with analog resistive crossbar arrays is a promising idea.
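
"Simulated" here means modeling analog non-idealities on top of an ideal matrix multiply. A minimal sketch of the kind of forward pass such a simulator might use (the function name, noise model, and parameter values are my own illustrative assumptions):

import numpy as np

def analog_matmul(W, x, w_max=1.0, read_noise=0.02, adc_bits=8, rng=None):
    rng = rng or np.random.default_rng()
    W_dev = np.clip(W, -w_max, w_max)            # bounded conductance range
    W_dev = W_dev + read_noise * rng.standard_normal(W.shape)  # read noise
    y = W_dev @ x
    step = 2 * np.abs(y).max() / 2 ** adc_bits   # crude uniform ADC model
    return np.round(y / step) * step if step > 0 else y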

Efficient ConvNets for Analog Arrays

no code implementations • 3 Jul 2018 • Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch

Analog arrays are a promising upcoming hardware technology with the potential to drastically speed up deep learning.

Training LSTM Networks with Resistive Cross-Point Devices

no code implementations • 1 Jun 2018 • Tayfun Gokmen, Malte Rasch, Wilfried Haensch

In our previous work we have shown that resistive cross-point devices, so-called Resistive Processing Unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks.
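
The reason LSTMs are a natural next target: almost all of an LSTM step's arithmetic is two matrix-vector products, exactly the operation a cross-point array accelerates. A textbook LSTM step in NumPy for illustration (my sketch, not the paper's implementation):

import numpy as np

def lstm_step(x, h, c, W, U, b):
    # W: (4H, D), U: (4H, H), b: (4H,) -- the two matmuls below dominate
    # the cost and map directly onto crossbar arrays.
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = (1.0 / (1.0 + np.exp(-v)) for v in (i, f, o))
    g = np.tanh(g)
    c = f * c + i * g
    return o * np.tanh(c), c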

Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

no code implementations • 22 May 2017 • Tayfun Gokmen, O. Murat Onen, Wilfried Haensch

Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs).
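
The standard trick for putting a convolution onto a cross-point array is to lower it to a matrix multiply ("im2col"). A minimal single-channel NumPy sketch of that mapping (a generic illustration, not code from the paper):

import numpy as np

def im2col(x, k):
    # Each column is one flattened k-by-k patch of the input image x.
    H, W = x.shape
    cols = [x[i:i + k, j:j + k].ravel()
            for i in range(H - k + 1) for j in range(W - k + 1)]
    return np.stack(cols, axis=1)

x = np.arange(16.0).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0          # 3x3 average filter
out = kernel.ravel() @ im2col(x, 3)     # convolution as one matrix multiply
print(out.reshape(2, 2))                # [[5., 6.], [9., 10.]]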
