Search Results for author: Tayfun Gokmen

Found 13 papers, 2 papers with code

Pipeline Gradient-based Model Training on Analog In-memory Accelerators

1 code implementation • 19 Oct 2024 • Zhaoxian Wu, Quan Xiao, Tayfun Gokmen, Hsinyu Tsai, Kaoutar El Maghraoui, Tianyi Chen

Aiming to accelerate the training of large deep neural network (DNN) models in an energy-efficient way, analog in-memory computing (AIMC) accelerators have emerged as a solution with immense potential.

Towards Exact Gradient-based Training on Analog In-memory Computing

no code implementations • 18 Jun 2024 • Zhaoxian Wu, Tayfun Gokmen, Malte J. Rasch, Tianyi Chen

Given the high economic and environmental costs of using large vision or language models, analog in-memory accelerators present a promising solution for energy-efficient AI.

Fast offset corrected in-memory training

no code implementations • 8 Mar 2023 • Malte J. Rasch, Fabio Carta, Omebayode Fagbohungbe, Tayfun Gokmen

In-memory computing with resistive crossbar arrays has been proposed to accelerate deep-learning workloads in a highly efficient manner.

Neural Network Training with Asymmetric Crosspoint Elements

no code implementations • 31 Jan 2022 • Murat Onen, Tayfun Gokmen, Teodor K. Todorov, Tomasz Nowicki, Jesus A. del Alamo, John Rozen, Wilfried Haensch, Seyoung Kim

Analog crossbar arrays comprising programmable nonvolatile resistors are under intense investigation for acceleration of deep neural network training.

A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays

1 code implementation • 5 Apr 2021 • Malte J. Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan

We introduce the IBM Analog Hardware Acceleration Kit, a first-of-its-kind open-source toolkit for simulating analog crossbar arrays conveniently from within PyTorch (freely available at https://github.com/IBM/aihwkit).
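A minimal usage sketch, based on the toolkit's documented introductory example (the layer, optimizer, and config names come from the aihwkit docs; the specific device model and toy data here are illustrative, not from the paper):

```python
# Minimal aihwkit sketch: train one analog fully connected layer on toy data.
# Based on the toolkit's published simple-layer example; device choice is illustrative.
from torch import Tensor
from torch.nn.functional import mse_loss

from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

# A 4-to-2 fully connected layer mapped onto a simulated analog crossbar.
model = AnalogLinear(4, 2, rpu_config=SingleRPUConfig(device=ConstantStepDevice()))

# Analog-aware SGD routes weight updates through the simulated device physics.
opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)

x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = Tensor([[1.0, 0.5], [0.7, 0.3]])

for _ in range(100):
    opt.zero_grad()
    loss = mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

The point of the toolkit is that the layer swap (nn.Linear to AnalogLinear) and optimizer swap are the only changes a standard PyTorch training loop needs in order to simulate training on noisy, nonideal crossbar hardware.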

Algorithm for Training Neural Networks on Resistive Device Arrays

no code implementations • 17 Sep 2019 • Tayfun Gokmen, Wilfried Haensch

Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads that use the stochastic gradient descent (SGD) and backpropagation (BP) algorithms.

Zero-shifting Technique for Deep Neural Network Training on Resistive Cross-point Arrays

no code implementations • 24 Jul 2019 • Hyungjun Kim, Malte Rasch, Tayfun Gokmen, Takashi Ando, Hiroyuki Miyazoe, Jae-Joon Kim, John Rozen, Seyoung Kim

Using this zero-shifting method, we show that network performance improves dramatically for imbalanced synapse devices.
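A conceptual sketch of the idea as the abstract describes it (plain NumPy; all variable names are mine, and this is an illustration of the encoding trick, not the authors' implementation): each weight is stored as the difference between a programmed conductance and a reference, and setting that reference at each device's measured symmetry point makes up/down updates around zero effectively balanced even on asymmetric devices.

```python
# Conceptual illustration of zero-shifting (not the paper's code).
# Weights are encoded as w = g - g_ref; choosing g_ref at each device's
# symmetry point aligns w = 0 with the balanced-update operating point.
import numpy as np

rng = np.random.default_rng(0)

# Per-device symmetry points vary across the array (device-to-device variation).
g_symmetry = 0.5 + 0.05 * rng.standard_normal((4, 4))

# Naive encoding: one global reference conductance -> residual per-device offset.
g_ref_naive = np.full((4, 4), 0.5)

# Zero-shifted encoding: reference programmed to each device's symmetry point.
g_ref_shifted = g_symmetry.copy()

# Programmed conductances after some training steps (toy values).
g = g_symmetry + 0.1 * rng.standard_normal((4, 4))

w_naive = g - g_ref_naive      # zero point misaligned with device symmetry
w_shifted = g - g_ref_shifted  # zero point aligned with device symmetry

print("offset from symmetry, naive:  ", np.abs(g_ref_naive - g_symmetry).mean())
print("offset from symmetry, shifted:", np.abs(g_ref_shifted - g_symmetry).mean())
```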

Training large-scale ANNs on simulated resistive crossbar arrays

no code implementations • 6 Jun 2019 • Malte J. Rasch, Tayfun Gokmen, Wilfried Haensch

Accelerating the training of artificial neural networks (ANNs) with analog resistive crossbar arrays is a promising idea.

Efficient ConvNets for Analog Arrays

no code implementations • 3 Jul 2018 • Malte J. Rasch, Tayfun Gokmen, Mattia Rigotti, Wilfried Haensch

Analog arrays are a promising emerging hardware technology with the potential to drastically speed up deep learning.

Training LSTM Networks with Resistive Cross-Point Devices

no code implementations • 1 Jun 2018 • Tayfun Gokmen, Malte Rasch, Wilfried Haensch

In our previous work we have shown that resistive cross-point devices, so-called Resistive Processing Unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks.

Analog CMOS-based Resistive Processing Unit for Deep Neural Network Training

no code implementations • 20 Jun 2017 • Seyoung Kim, Tayfun Gokmen, Hyung-Min Lee, Wilfried E. Haensch

Recently we have shown that an architecture based on resistive processing unit (RPU) devices has the potential to achieve significant acceleration in deep neural network (DNN) training compared to today's software-based DNN implementations running on CPUs/GPUs.

Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

no code implementations • 22 May 2017 • Tayfun Gokmen, O. Murat Onen, Wilfried Haensch

Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs).


Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices

no code implementations • 23 Mar 2016 • Tayfun Gokmen, Yurii Vlasov

In recent years, deep neural networks (DNN) have demonstrated significant business impact in large scale analysis and classification tasks such as speech recognition, visual object detection, pattern extraction, etc.

