Search Results for author: Fabio Carta

Found 3 papers, 2 papers with code

Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference

1 code implementation18 Jul 2023 Manuel Le Gallo, Corey Lammie, Julian Buechel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch

In this tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit.

Fast offset corrected in-memory training

no code implementations8 Mar 2023 Malte J. Rasch, Fabio Carta, Omobayode Fagbohungbe, Tayfun Gokmen

In-memory computing with resistive crossbar arrays has been suggested to accelerate deep-learning workloads in a highly efficient manner.

A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

1 code implementation5 Apr 2021 Malte J. Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan

We introduce the IBM Analog Hardware Acceleration Kit, a new and first of a kind open source toolkit to simulate analog crossbar arrays in a convenient fashion from within PyTorch (freely available at https://github.com/IBM/aihwkit).