Search Results for author: Malte J. Rasch

Found 6 papers, 2 papers with code

Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

1 code implementation • 18 Jul 2023 • Manuel Le Gallo, Corey Lammie, Julian Buechel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch

In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit.
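As a hedged illustration of the basic AIHWKit workflow, the minimal analog training loop below is adapted from the toolkit's public README; class names such as AnalogLinear and AnalogSGD follow recent aihwkit releases and may differ in other versions.

    from torch import Tensor
    from torch.nn.functional import mse_loss

    from aihwkit.nn import AnalogLinear
    from aihwkit.optim import AnalogSGD

    # Toy data: two 4-dimensional inputs mapped to 2-dimensional targets.
    x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
    y = Tensor([[1.0, 0.5], [0.7, 0.3]])

    # A fully connected layer whose weights live on a simulated analog tile.
    model = AnalogLinear(4, 2)

    opt = AnalogSGD(model.parameters(), lr=0.1)
    opt.regroup_param_groups(model)  # let the optimizer find the analog tiles

    for epoch in range(10):
        opt.zero_grad()
        loss = mse_loss(model(x), y)
        loss.backward()
        opt.step()  # updates are applied as simulated in-memory pulses

Calling regroup_param_groups after constructing the optimizer lets AnalogSGD route updates to the simulated analog tiles rather than to ordinary dense tensors.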

Fast offset corrected in-memory training

no code implementations • 8 Mar 2023 • Malte J. Rasch, Fabio Carta, Omobayode Fagbohungbe, Tayfun Gokmen

In-memory computing with resistive crossbar arrays has been suggested as a way to accelerate deep-learning workloads in a highly efficient manner.
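To make the paper's title concrete, the sketch below illustrates the generic idea of offset correction for crossbar reads: a zero-input calibration read estimates a fixed per-output offset, which is then subtracted from subsequent MVM results. This is a hypothetical PyTorch illustration, not the training algorithm proposed in the paper.

    import torch

    g = torch.randn(8, 4)          # simulated conductance matrix
    offset = 0.1 * torch.randn(8)  # unknown fixed per-output read offset

    def crossbar_mvm(v):
        # Nonideal analog read: ideal MVM plus the fixed offset.
        return g @ v + offset

    calib = crossbar_mvm(torch.zeros(4))  # zero-input read isolates the offset
    v = torch.rand(4)
    corrected = crossbar_mvm(v) - calib   # offset-corrected MVM result
    assert torch.allclose(corrected, g @ v, atol=1e-5)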

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

no code implementations • 16 Feb 2023 • Malte J. Rasch, Charles Mackin, Manuel Le Gallo, An Chen, Andrea Fasoli, Frederic Odermatt, Ning Li, S. R. Nandakumar, Pritish Narayanan, Hsinyu Tsai, Geoffrey W. Burr, Abu Sebastian, Vijay Narayanan

Analog in-memory computing (AIMC) -- a promising approach for energy-efficient acceleration of deep learning workloads -- computes matrix-vector multiplications (MVMs) only approximately, owing to nonidealities that are often non-deterministic or nonlinear.
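One common ingredient of hardware-aware (HWA) training is to perturb weights during the forward pass so the trained network becomes robust to such nonidealities. The sketch below is a generic PyTorch illustration of Gaussian weight-noise injection, not the specific HWA recipe of this paper; the noise_std value is a hypothetical choice.

    import torch
    from torch import nn
    from torch.nn import functional as F

    class NoisyLinear(nn.Linear):
        # Linear layer that injects Gaussian weight noise on each training
        # forward pass, a generic stand-in for AIMC nonidealities.
        def __init__(self, in_features, out_features, noise_std=0.02):
            super().__init__(in_features, out_features)
            self.noise_std = noise_std  # hypothetical noise scale

        def forward(self, x):
            w = self.weight
            if self.training and self.noise_std > 0:
                w = w + torch.randn_like(w) * self.noise_std
            return F.linear(x, w, self.bias)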

A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

1 code implementation • 5 Apr 2021 • Malte J. Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan

We introduce the IBM Analog Hardware Acceleration Kit, a new and first-of-its-kind open-source toolkit for simulating analog crossbar arrays conveniently from within PyTorch (freely available at https://github.com/IBM/aihwkit).
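The toolkit's central abstraction is an RPU configuration that selects the simulated device model for each analog tile. A hedged sketch follows; the class names match recent aihwkit releases and may have changed since the 2021 paper.

    from aihwkit.nn import AnalogLinear
    from aihwkit.simulator.configs import SingleRPUConfig
    from aihwkit.simulator.configs.devices import ConstantStepDevice

    # Select a simple device model with a constant update step size.
    rpu_config = SingleRPUConfig(device=ConstantStepDevice())

    # Every analog layer can be given its own device configuration.
    layer = AnalogLinear(4, 2, rpu_config=rpu_config)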

Training large-scale ANNs on simulated resistive crossbar arrays

no code implementations • 6 Jun 2019 • Malte J. Rasch, Tayfun Gokmen, Wilfried Haensch

Accelerating the training of artificial neural networks (ANNs) with analog resistive crossbar arrays is a promising idea.

Efficient ConvNets for Analog Arrays

no code implementations • 3 Jul 2018 • Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch

Analog arrays are a promising emerging hardware technology with the potential to drastically speed up deep learning.
