Search Results for author: Malte Rasch

Found 3 papers, 1 paper with code

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing

1 code implementation • 17 May 2023 • Hadjer Benmeziane, Corey Lammie, Irem Boybat, Malte Rasch, Manuel Le Gallo, Hsinyu Tsai, Ramachandran Muralidhar, Smail Niar, Ouarnoughi Hamza, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui

Digital processors based on typical von Neumann architectures are poorly suited to edge AI, given the large amount of data that must be moved in and out of memory.
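AnalogNAS searches for network architectures that remain accurate when inference runs on analog in-memory hardware. As a point of reference, below is a minimal sketch using IBM's open-source aihwkit simulator, which, as far as I know, the released AnalogNAS code builds on; the layer sizes and the default noise settings are illustrative assumptions, not values from the paper.

```python
# A minimal sketch, assuming IBM's aihwkit simulator: a forward pass through a
# layer whose matrix-vector product runs on a simulated analog crossbar tile
# with default inference nonidealities. Sizes are illustrative only.
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.simulator.configs import InferenceRPUConfig

rpu_config = InferenceRPUConfig()                    # default analog noise model
layer = AnalogLinear(64, 10, rpu_config=rpu_config)  # weights live on the tile

x = torch.randn(8, 64)
y = layer(x)      # the compute happens "in memory" in the simulation
print(y.shape)    # torch.Size([8, 10])
```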

Zero-shifting Technique for Deep Neural Network Training on Resistive Cross-point Arrays

no code implementations • 24 Jul 2019 • Hyungjun Kim, Malte Rasch, Tayfun Gokmen, Takashi Ando, Hiroyuki Miyazoe, Jae-Joon Kim, John Rozen, Seyoung Kim

Using this zero-shifting method, we show that network performance improves dramatically for imbalanced synapse devices.
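The snippet is terse; as I read the paper, "zero-shifting" re-programs the reference so that each synapse's symmetry point (the conductance where up and down updates balance) coincides with zero weight. Below is a toy numpy illustration of that idea under those assumptions; every number and the device model are invented, and this is not the paper's implementation.

```python
import numpy as np

# Toy zero-shifting illustration (all values invented): an analog weight is
# read out as w = g - g_ref. With asymmetric up/down updates, many random
# training pulses drive each device's conductance g toward its own symmetry
# point g_sym. With one global reference, idle weights drift to
# w = g_sym - g_ref != 0; zero-shifting sets g_ref = g_sym per device, so
# idle weights decay to zero instead.
rng = np.random.default_rng(0)
n_devices = 10_000
g_sym = rng.normal(0.6, 0.1, n_devices)   # per-device symmetry points
g = rng.normal(0.6, 0.2, n_devices)       # initial conductances

# Random +/- pulses whose net effect relaxes g toward g_sym -- a crude
# stand-in for an asymmetric resistive device.
for _ in range(500):
    sign = rng.choice([-1.0, 1.0], n_devices)
    g += 0.01 * sign - 0.02 * (g - g_sym)

w_naive = g - 0.6        # one global reference for every device
w_shifted = g - g_sym    # zero-shifted, per-device reference

print(f"RMS residual weight, global reference:  {np.sqrt((w_naive**2).mean()):.3f}")
print(f"RMS residual weight, zero-shifted ref.: {np.sqrt((w_shifted**2).mean()):.3f}")
```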

Training LSTM Networks with Resistive Cross-Point Devices

no code implementations • 1 Jun 2018 • Tayfun Gokmen, Malte Rasch, Wilfried Haensch

In our previous work, we have shown that resistive cross-point devices, so-called Resistive Processing Unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks.
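The RPU training flow the abstract alludes to, with weight updates executed as pulses directly on crossbar devices, is available in IBM's aihwkit simulator, which several of these authors helped develop. Below is a minimal sketch of that device-in-the-loop SGD on a single analog layer; the ConstantStepDevice model, sizes, and hyperparameters are my assumptions, and the paper's LSTM setup is considerably more involved.

```python
# A minimal sketch, assuming IBM's aihwkit simulator: SGD where the weight
# update is applied on a simulated RPU crossbar tile rather than in digital.
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

rpu_config = SingleRPUConfig(device=ConstantStepDevice())  # idealized RPU device
model = AnalogLinear(4, 2, bias=True, rpu_config=rpu_config)

opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)  # bind the analog tiles to the optimizer

x = torch.tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = torch.tensor([[1.0, 0.5], [0.7, 0.3]])

for _ in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()  # update performed as pulses on the simulated devices
```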
