Search Results for author: Ismail Akturk

Found 5 papers, 0 papers with code

Zero-Shot RTL Code Generation with Attention Sink Augmented Large Language Models

no code implementations · 12 Jan 2024 · Selim Sandal, Ismail Akturk

The design and optimization of hardware have traditionally been resource-intensive, requiring considerable expertise and reliance on established design automation tools.

Code Generation

Bio-realistic Neural Network Implementation on Loihi 2 with Izhikevich Neurons

no code implementations · 21 Jul 2023 · Recep Buğra Uludağ, Serhat Çağdaş, Yavuz Selim İşler, Neslihan Serap Şengör, Ismail Akturk

In this paper, we present a bio-realistic basal ganglia neural network and its integration into Intel's Loihi neuromorphic processor to perform a simple Go/No-Go task.
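The Izhikevich model named in the title is a standard two-variable spiking-neuron model. The sketch below simulates a single such neuron with simple Euler integration in NumPy, using the classic regular-spiking parameter set; it only illustrates the neuron dynamics and is not the authors' Loihi 2 or basal ganglia implementation, and the function name, time step, and input current are assumptions.

```python
import numpy as np

def simulate_izhikevich(I=10.0, T=1000.0, dt=0.25,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler integration of one Izhikevich neuron (regular-spiking parameters).

    I is a constant input current in the model's dimensionless units;
    T and dt are in milliseconds.
    """
    steps = int(T / dt)
    v, u = c, b * c                  # membrane potential and recovery variable
    spikes, v_trace = [], np.empty(steps)
    for t in range(steps):
        # Izhikevich (2003) dynamics
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike threshold and reset
            spikes.append(t * dt)
            v, u = c, u + d
        v_trace[t] = v
    return np.array(spikes), v_trace

spike_times, trace = simulate_izhikevich()
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```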

Attack-Centric Approach for Evaluating Transferability of Adversarial Samples in Machine Learning Models

no code implementations · 3 Dec 2021 · Tochukwu Idika, Ismail Akturk

We analyzed the behavior of adversarial samples on victim models and outlined four factors that can influence the transferability of adversarial samples.

BIG-bench Machine Learning
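As a rough illustration of the transferability setup described above (adversarial samples crafted on one model and then evaluated on a separate victim model), the PyTorch sketch below uses an FGSM attack on a surrogate classifier and measures how often the resulting samples also fool the victim. FGSM, the toy untrained models, and the function names are assumptions for illustration; the paper's own attacks and the four factors it identifies are not reproduced here.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Craft FGSM adversarial examples against a (surrogate) model."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def transfer_rate(surrogate, victim, x, y, eps=0.03):
    """Fraction of surrogate-crafted adversarial samples that also fool the victim."""
    x_adv = fgsm(surrogate, x, y, eps)
    return 1.0 - accuracy(victim, x_adv, y)

if __name__ == "__main__":
    # Toy demonstration on random data with two small untrained classifiers.
    surrogate = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    victim = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x, y = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
    print(f"transfer rate: {transfer_rate(surrogate, victim, x, y):.2f}")
```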

On Value Recomputation to Accelerate Invisible Speculation

no code implementations · 22 Feb 2021 · Christos Sakalis, Zamshed I. Chowdhury, Shayne Wadle, Ismail Akturk, Alberto Ros, Magnus Själander, Stefanos Kaxiras, Ulya R. Karpuzcu

In this paper, our insight is that we can achieve the same goal as VP (increasing performance by providing the values of loads that miss) without incurring its negative side effect (delaying the release of precious resources), provided that we can safely and non-speculatively recompute a value in isolation (without it being observable from the outside), so that we do not expose any information by transferring such a value via the memory hierarchy.

Value Prediction, Hardware Architecture

Weight Update Skipping: Reducing Training Time for Artificial Neural Networks

no code implementations · 5 Dec 2020 · Pooneh Safayenikoo, Ismail Akturk

In this paper, we propose a new training methodology for ANNs that exploits the observation that the improvement in accuracy shows temporal variations, which allows us to skip updating weights when the variation is minuscule.
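A minimal sketch of that general idea is shown below: a PyTorch training loop that skips an epoch's weight updates whenever the epoch-to-epoch accuracy change falls below a threshold. The skipping rule, threshold value, and all names here are illustrative assumptions, not the authors' exact criterion or implementation.

```python
import torch

def train_with_update_skipping(model, optimizer, loss_fn, train_loader,
                               eval_fn, epochs=20, skip_threshold=1e-3):
    """Toy training loop that skips weight updates when accuracy has plateaued.

    eval_fn(model) -> float is assumed to return an accuracy in [0, 1]. The
    skipping rule (epoch-to-epoch accuracy change below a fixed threshold)
    is an illustrative stand-in, not the paper's exact criterion.
    """
    prev_acc = None
    for epoch in range(epochs):
        acc = eval_fn(model)
        skip = prev_acc is not None and abs(acc - prev_acc) < skip_threshold
        prev_acc = acc
        if skip:
            # Accuracy barely moved: skip this epoch's weight updates,
            # saving the cost of backward passes and optimizer steps.
            print(f"epoch {epoch}: acc={acc:.4f} (updates skipped)")
            continue
        for x, y in train_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        print(f"epoch {epoch}: acc={acc:.4f}")

if __name__ == "__main__":
    # Minimal runnable demo with random data and an untrained linear model.
    model = torch.nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    data = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(8)]

    def eval_acc(m):
        with torch.no_grad():
            x, y = data[0]
            return (m(x).argmax(dim=1) == y).float().mean().item()

    train_with_update_skipping(model, opt, torch.nn.functional.cross_entropy,
                               data, eval_acc, epochs=5)
```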
