no code implementations • 18 Jun 2024 • Zhaoxian Wu, Tayfun Gokmen, Malte J. Rasch, Tianyi Chen
Given the high economic and environmental costs of using large vision or language models, analog in-memory accelerators present a promising solution for energy-efficient AI.
no code implementations • 8 Mar 2023 • Malte J. Rasch, Fabio Carta, Omebayode Fagbohungbe, Tayfun Gokmen
In-memory computing with resistive crossbar arrays has been suggested to accelerate deep-learning workloads in a highly efficient manner.
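The crossbar primitive behind this line of work is an analog matrix-vector multiply: voltages applied to the rows produce currents through the programmed conductances (Ohm's law), and those currents sum along each column (Kirchhoff's current law), so y = Wx emerges in a single step. The sketch below models that primitive in NumPy; the noise level and ADC resolution are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, noise=0.02, adc_bits=8):
    """Sketch of the crossbar primitive: y = W @ x in one analog step,
    followed by read noise and finite-resolution analog-to-digital
    conversion (both parameters are illustrative assumptions)."""
    y = W @ x                                   # ideal analog result
    y += noise * rng.standard_normal(y.shape)   # read noise on the bit lines
    scale = max(np.max(np.abs(y)), 1e-12)       # full-scale range of the ADC
    levels = 2 ** (adc_bits - 1)
    return np.round(y / scale * levels) / levels * scale

W = 0.1 * rng.standard_normal((4, 8))           # conductance matrix
x = rng.standard_normal(8)                      # input voltages
print(analog_matvec(W, x))
```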
no code implementations • 31 Jan 2022 • Murat Onen, Tayfun Gokmen, Teodor K. Todorov, Tomasz Nowicki, Jesus A. del Alamo, John Rozen, Wilfried Haensch, Seyoung Kim
Analog crossbar arrays comprising programmable nonvolatile resistors are under intense investigation for acceleration of deep neural network training.
1 code implementation • 5 Apr 2021 • Malte J. Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan
We introduce the IBM Analog Hardware Acceleration Kit, a new and first-of-a-kind open-source toolkit to simulate analog crossbar arrays in a convenient fashion from within PyTorch (freely available at https://github.com/IBM/aihwkit).
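A minimal training loop in the style of the example from the toolkit's README is shown below: analog layers and optimizers drop in for their `torch.nn` counterparts, with the weights living on a simulated crossbar. Exact API details may differ across aihwkit versions.

```python
from torch import Tensor
from torch.nn.functional import mse_loss
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD

x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = Tensor([[1.0, 0.5], [0.7, 0.3]])

model = AnalogLinear(4, 2)                  # weights on a simulated crossbar
opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)

for _ in range(10):
    opt.zero_grad()
    loss = mse_loss(model(x), y)
    loss.backward()                         # analog-aware update rules apply here
    opt.step()
```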
no code implementations • 17 Sep 2019 • Tayfun Gokmen, Wilfried Haensch
Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads using stochastic gradient descent (SGD) and the backpropagation (BP) algorithm.
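The key trick in this line of work is performing the rank-one SGD update W += lr * d x^T in place on the array via coincident stochastic pulse trains: each side fires pulses with probability proportional to its magnitude, and a cross-point device is incremented only when pulses overlap, so the expected update matches the outer product. A minimal NumPy sketch of this idea, with illustrative pulse counts and assuming inputs scaled to [-1, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_outer_update(W, x, d, lr, n_pulses=64):
    """Approximate W += lr * np.outer(d, x) with pulse coincidences,
    as an RPU array would do in place (assumes |x|, |d| <= 1)."""
    dw_min = lr / n_pulses                       # weight change per coincidence
    for _ in range(n_pulses):
        px = rng.random(x.shape) < np.abs(x)     # pulses on the input lines
        pd = rng.random(d.shape) < np.abs(d)     # pulses on the error lines
        coincide = np.outer(pd, px)              # devices update only on overlap
        W += dw_min * coincide * np.outer(np.sign(d), np.sign(x))
    return W

x = np.array([0.2, -0.5, 0.8])
d = np.array([0.3, -0.1])
W = np.zeros((2, 3))
stochastic_outer_update(W, x, d, lr=0.1)
print(np.allclose(W, 0.1 * np.outer(d, x), atol=0.05))  # matches on average
```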
no code implementations • 24 Jul 2019 • Hyungjun Kim, Malte Rasch, Tayfun Gokmen, Takashi Ando, Hiroyuki Miyazoe, Jae-Joon Kim, John Rozen, Seyoung Kim
Using this zero-shifting method, we show that network performance improves dramatically for imbalanced synaptic devices.
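The idea, roughly, is that an asymmetric device still has one conductance value where a single up pulse and a single down pulse have equal magnitude; zero-shifting reads the weight relative to that symmetry point (e.g., via a reference device) so that w = 0 sits at the balanced operating point. A minimal sketch with a hypothetical soft-bounds device model (the pulse amplitudes below are illustrative, not from the paper):

```python
# Hypothetical soft-bounds device: pulse responses shrink toward the
# conductance bounds, so up and down steps are unequal almost everywhere.
DW_UP, DW_DN = 0.012, 0.008   # illustrative, asymmetric pulse amplitudes

def step(g, direction):
    """Apply one up (+1) or down (-1) pulse to a conductance g in [-1, 1]."""
    if direction > 0:
        return g + DW_UP * (1.0 - g)   # increments shrink near the upper bound
    return g - DW_DN * (1.0 + g)       # decrements shrink near the lower bound

# Symmetry point: the unique g where up and down pulses have equal
# magnitude, i.e. DW_UP * (1 - g) = DW_DN * (1 + g).
g_sym = (DW_UP - DW_DN) / (DW_UP + DW_DN)

# Zero-shifting reads the weight as w = g - g_sym, so w = 0 coincides
# with the symmetric operating point of the device.
print(step(0.0, +1) - 0.0, 0.0 - step(0.0, -1))           # unequal at raw zero
print(step(g_sym, +1) - g_sym, g_sym - step(g_sym, -1))   # equal at g_sym
```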
no code implementations • 6 Jun 2019 • Malte J. Rasch, Tayfun Gokmen, Wilfried Haensch
Accelerating the training of artificial neural networks (ANNs) with analog resistive crossbar arrays is a promising idea.
no code implementations • 3 Jul 2018 • Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch
Analog arrays are a promising upcoming hardware technology with the potential to drastically speed up deep learning.
no code implementations • 1 Jun 2018 • Tayfun Gokmen, Malte Rasch, Wilfried Haensch
In our previous work we have shown that resistive cross-point devices, so-called Resistive Processing Unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks.
no code implementations • 20 Jun 2017 • Seyoung Kim, Tayfun Gokmen, Hyung-Min Lee, Wilfried E. Haensch
Recently, we have shown that an architecture based on resistive processing unit (RPU) devices has the potential to achieve significant acceleration in deep neural network (DNN) training compared to today's software-based DNN implementations running on CPUs/GPUs.
no code implementations • 22 May 2017 • Tayfun Gokmen, O. Murat Onen, Wilfried Haensch
Here we extend the concept of Resistive Processing Unit (RPU) devices towards convolutional neural networks (CNNs).
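The standard way to map a convolution onto a crossbar is the im2col trick: input patches are unrolled into columns so the convolution becomes one matrix multiplication, which the array holding the flattened kernels can execute as matrix-vector products. A short PyTorch sketch of that mapping (shapes and sizes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)      # (batch, channels, height, width)
w = torch.randn(16, 3, 3, 3)     # 16 output channels, 3x3 kernels

cols = F.unfold(x, kernel_size=3, padding=1)   # (1, 27, 64) unrolled patches
out = w.view(16, -1) @ cols                    # the crossbar-friendly matmul
out = out.view(1, 16, 8, 8)

# Matches the framework's convolution up to numerical precision.
print(torch.allclose(out, F.conv2d(x, w, padding=1), atol=1e-5))
```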
no code implementations • 23 Mar 2016 • Tayfun Gokmen, Yurii Vlasov
In recent years, deep neural networks (DNNs) have demonstrated significant business impact in large-scale analysis and classification tasks such as speech recognition, visual object detection, pattern extraction, etc.