Search Results for author: Rachmad Vidya Wicaksana Putra

Found 21 papers, 0 papers with code

SNN4Agents: A Framework for Developing Energy-Efficient Embodied Spiking Neural Networks for Autonomous Agents

no code implementations • 14 Apr 2024 • Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Muhammad Shafique

The experimental results show that our proposed framework can maintain high accuracy (i.e., 84.12%) with 68.75% memory saving, 3.58x speed-up, and 4.03x energy efficiency improvement compared to the state-of-the-art work on the NCARS dataset, thereby enabling energy-efficient embodied SNN deployments for autonomous agents.

Quantization

A Methodology to Study the Impact of Spiking Neural Network Parameters considering Event-Based Automotive Data

no code implementations • 4 Apr 2024 • Iqra Bano, Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Muhammad Shafique

Toward this, we propose a novel methodology to systematically study and analyze the impact of SNN parameters considering event-based automotive data, and then leverage this analysis for enhancing SNN developments.

Autonomous Driving, Image Classification

Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives, Challenges, and Research Development Stack

no code implementations • 4 Apr 2024 • Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Fakhreddine Zayer, Jorge Dias, Muhammad Shafique

Toward this, recent advances in neuromorphic computing with Spiking Neural Networks (SNNs) have demonstrated the potential to enable embodied intelligence for robotics through a bio-plausible computing paradigm that mimics how the biological brain works, known as "neuromorphic artificial intelligence (AI)".

A Methodology for Improving Accuracy of Embedded Spiking Neural Networks through Kernel Size Scaling

no code implementations • 2 Apr 2024 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Spiking Neural Networks (SNNs) can offer ultra-low power/energy consumption for machine learning-based applications due to their sparse spike-based operations.

Model Selection

SpikeNAS: A Fast Memory-Aware Neural Architecture Search Framework for Spiking Neural Network-based Autonomous Agents

no code implementations • 17 Feb 2024 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Autonomous mobile agents (e.g., UAVs and UGVs) are typically expected to incur low power/energy consumption for solving machine learning tasks (such as object recognition), as these mobile agents are usually powered by portable batteries.

Neural Architecture Search, Object Recognition

RescueSNN: Enabling Reliable Executions on Spiking Neural Network Accelerators under Permanent Faults

no code implementations • 8 Apr 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Our FAM technique leverages the fault map of the SNN compute engine for (i) minimizing weight corruption when mapping weight bits onto the faulty memory cells, and (ii) selectively employing faulty neurons that do not cause significant accuracy degradation to maintain accuracy and throughput, while considering the SNN operations and processing dataflow.
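
No code is listed for this paper, so the sketch below is only a rough illustration of the fault-aware mapping idea described above: it assigns large-magnitude weights to memory words whose faulty cells affect only low-order bits. The word-level severity heuristic, the fault-map layout, and all names are assumptions for illustration, not the paper's bit-level mapping.

```python
import numpy as np

def fault_aware_mapping(weights, fault_map, bits=8):
    """Assign quantized weights to memory words so that large-magnitude weights
    land on words whose faulty cells only affect low-order bits."""
    assert len(weights) == fault_map.shape[0]
    bit_values = 2 ** np.arange(bits)                    # LSB first
    severity = (fault_map * bit_values).sum(axis=1)      # worst-case corruption per word
    words_by_health = np.argsort(severity)               # healthiest words first
    weights_by_magnitude = np.argsort(-np.abs(weights))  # largest weights first
    order = np.empty_like(weights_by_magnitude)
    order[words_by_health] = weights_by_magnitude        # order[word] = weight index
    return order

# Toy usage: 4 weights, 4 one-word memory locations, word 0 has a faulty MSB.
weights = np.array([100, -3, 57, 8], dtype=np.int8)
fault_map = np.zeros((4, 8), dtype=bool)
fault_map[0, 7] = True
print(fault_aware_mapping(weights, fault_map))           # weight index stored in each word
```

Sorting by a per-word severity score is only one possible heuristic; per the abstract, the actual FAM technique also accounts for the SNN operations and processing dataflow.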

EnforceSNN: Enabling Resilient and Energy-Efficient Spiking Neural Network Inference considering Approximate DRAMs for Embedded Systems

no code implementations • 8 Apr 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

The key mechanisms of our EnforceSNN are: (1) employing quantized weights to reduce the DRAM access energy; (2) devising an efficient DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the SNN error tolerance to understand its accuracy profile considering different bit error rate (BER) values; (4) leveraging the information for developing an efficient fault-aware training (FAT) that considers different BER values and bit error locations in DRAM to improve the SNN error tolerance; and (5) developing an algorithm to select the SNN model that offers good trade-offs among accuracy, memory, and energy consumption.
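
As an informal illustration of mechanisms (1) and (4) above, the sketch below injects DRAM bit errors into 8-bit quantized weights at a given bit error rate, so that a training loop could expose the SNN to them. The independent per-bit error model and all names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_bit_errors(q_weights, ber, bits=8):
    """Flip each stored weight bit independently with probability `ber`,
    mimicking reads of 8-bit weights from an approximate (low-voltage) DRAM."""
    flat = q_weights.astype(np.uint8).ravel()
    flips = rng.random((flat.size, bits)) < ber
    masks = (flips * (1 << np.arange(bits))).sum(axis=1).astype(np.uint8)
    return (flat ^ masks).reshape(q_weights.shape).view(np.int8)

# Schematic fault-aware training (FAT) step: the forward pass always reads the
# weights through the error model, so training learns to tolerate the bit errors.
#   noisy_w = inject_bit_errors(quantize(w), ber=1e-4)   # quantize/forward/grad are
#   loss    = snn_forward(spike_batch, noisy_w)          # framework-specific placeholders
#   w      -= lr * grad(loss, w)

q = np.array([[120, -5], [33, -77]], dtype=np.int8)
print(inject_bit_errors(q, ber=0.05))
```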

TopSpark: A Timestep Optimization Methodology for Energy-Efficient Spiking Neural Networks on Autonomous Mobile Agents

no code implementations • 3 Mar 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

These requirements can be fulfilled by Spiking Neural Networks (SNNs) as they offer low power/energy processing due to their sparse computations and efficient online learning with bio-inspired learning mechanisms for adapting to different environments.
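
A minimal sketch of the timestep-optimization idea behind TopSpark, under the assumption that latency and energy grow roughly with the number of timesteps; the `toy_accuracy` stand-in and the candidate list are illustrative, not the paper's procedure.

```python
import numpy as np

def pick_timestep(accuracy_at, candidates, max_drop=0.01):
    """Return the smallest number of timesteps whose accuracy stays within
    `max_drop` of the best candidate (latency/energy grow roughly with T)."""
    scores = {t: accuracy_at(t) for t in candidates}
    best = max(scores.values())
    feasible = [t for t in sorted(candidates) if scores[t] >= best - max_drop]
    return feasible[0], scores

# Toy stand-in for "evaluate the SNN with T timesteps": accuracy saturates in T.
toy_accuracy = lambda t: 0.90 * (1 - np.exp(-t / 20))
t_star, scores = pick_timestep(toy_accuracy, candidates=[10, 25, 50, 100, 200])
print(t_star, {t: round(a, 3) for t, a in scores.items()})
```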

Mantis: Enabling Energy-Efficient Autonomous Mobile Agents with Spiking Neural Networks

no code implementations • 24 Dec 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Towards this, we propose Mantis, a methodology to systematically employ SNNs on autonomous mobile agents to enable energy-efficient processing and adaptive capabilities in dynamic environments.

Model Selection

tinySNN: Towards Memory- and Energy-Efficient Spiking Neural Networks

no code implementations • 17 Jun 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Larger Spiking Neural Network (SNN) models are typically favorable as they can offer higher accuracy.

Quantization

lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for Efficient Unsupervised Continual Learning on Autonomous Agents

no code implementations • 24 May 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Our lpSpikeCon methodology employs the following key steps: (1) analyzing the impacts of training the SNN model under unsupervised continual learning settings with reduced weight precision on the inference accuracy; (2) leveraging this study to identify SNN parameters that have a significant impact on the inference accuracy; and (3) developing an algorithm for searching the respective SNN parameter values that improve the quality of unsupervised continual learning.
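
A loosely related illustration of the sweep-based analysis in steps (1)-(3): the sketch below searches for the lowest weight bit-width whose accuracy stays within a margin of the full-precision baseline. The `eval_accuracy` callback and the toy accuracy values are hypothetical, and the paper's algorithm searches SNN parameter values rather than just the precision.

```python
def lowest_adequate_precision(eval_accuracy, bitwidths=(16, 8, 6, 4, 2), max_drop=0.02):
    """Sweep weight precisions from high to low and keep the smallest bit-width
    whose accuracy stays within `max_drop` of the highest-precision baseline."""
    baseline = eval_accuracy(bitwidths[0])
    chosen = bitwidths[0]
    for bits in bitwidths[1:]:
        if baseline - eval_accuracy(bits) <= max_drop:
            chosen = bits        # still acceptable, keep shrinking
        else:
            break                # accuracy collapsed, stop the sweep
    return chosen

# Toy accuracy profile: degrades gently down to 4 bits, then collapses at 2 bits.
toy = {16: 0.920, 8: 0.915, 6: 0.910, 4: 0.905, 2: 0.700}
print(lowest_adequate_precision(lambda b: toy[b]))       # -> 4
```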

Continual Learning

SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors

no code implementations • 10 Mar 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

These errors can change the weight values and neuron operations in the compute engine of SNN accelerators, thereby leading to incorrect outputs and accuracy degradation.
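
For intuition about how a single soft error can perturb the compute engine, the sketch below flips one random bit in an int8 weight register and measures the deviation of the accumulated input to the neurons. The one-bit-flip fault model and all names are assumptions for illustration, not the paper's fault model or mitigation technique.

```python
import numpy as np

rng = np.random.default_rng(1)

def flip_random_bit(w_int8):
    """Model a single transient soft error: flip one random bit of one weight."""
    w = w_int8.copy()
    u = w.view(np.uint8)                         # reinterpret the same buffer
    idx = tuple(rng.integers(s) for s in u.shape)
    u[idx] ^= np.uint8(1 << rng.integers(8))
    return w

weights = rng.integers(-20, 20, size=(4, 8), dtype=np.int8)
spikes  = rng.integers(0, 2, size=8)             # binary input spikes of one timestep
clean   = weights @ spikes                       # accumulated input to each neuron
faulty  = flip_random_bit(weights) @ spikes
print("max deviation caused by one bit flip:", np.abs(clean - faulty).max())
```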

Towards Energy-Efficient and Secure Edge AI: A Cross-Layer Framework

no code implementations • 20 Sep 2021 • Muhammad Shafique, Alberto Marchisio, Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif

Afterward, we discuss how to further improve the performance (latency) and the energy efficiency of Edge AI systems through HW/SW-level optimizations, such as pruning, quantization, and approximation.
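
A minimal sketch of two of the optimizations mentioned above, magnitude pruning and uniform quantization, applied to a random weight matrix; the sparsity level, bit-width, and function names are illustrative choices, not values or code from the paper.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

def uniform_quantize(w, bits=8):
    """Symmetric uniform quantization to `bits`-bit levels (quantize + dequantize)."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(2)
w = rng.normal(size=(64, 64))
w_opt = uniform_quantize(magnitude_prune(w, sparsity=0.7), bits=8)
print("nonzero fraction after pruning:", np.count_nonzero(w_opt) / w_opt.size)
```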

Quantization

ReSpawn: Energy-Efficient Fault-Tolerance for Spiking Neural Networks considering Unreliable Memories

no code implementations • 23 Aug 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Since recent works still focus on fault modeling and random fault injection in SNNs, the impact of memory faults in SNN hardware architectures on accuracy and the respective fault-mitigation techniques are not thoroughly explored.

Q-SpiNN: A Framework for Quantizing Spiking Neural Networks

no code implementations • 5 Jul 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

A prominent technique for reducing the memory footprint of Spiking Neural Networks (SNNs) without significantly decreasing accuracy is quantization.
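
As a generic illustration of how weight quantization trades accuracy for memory footprint, the sketch below converts weights to a signed fixed-point format and reports the resulting error and storage; the Qm.n format, error metric, and sizes are assumptions for illustration, not the framework's actual schemes.

```python
import numpy as np

def to_fixed_point(w, int_bits=2, frac_bits=6):
    """Quantize weights to a signed Qm.n fixed-point format and de-quantize back."""
    total = int_bits + frac_bits
    step = 2.0 ** -frac_bits
    lo = -(2 ** (total - 1)) * step
    hi = (2 ** (total - 1) - 1) * step
    return np.clip(np.round(w / step) * step, lo, hi)

rng = np.random.default_rng(3)
w = rng.normal(scale=0.5, size=10_000)
for frac_bits in (14, 6, 2):
    err = np.abs(w - to_fixed_point(w, int_bits=2, frac_bits=frac_bits)).mean()
    kib = w.size * (2 + frac_bits) / 8 / 1024            # storage at this precision
    print(f"Q2.{frac_bits}: mean abs error {err:.4f}, footprint {kib:.1f} KiB")
```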

Quantization

SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments

no code implementations • 28 Feb 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Spiking Neural Networks (SNNs) bear the potential of efficient unsupervised and continual learning capabilities because of their biological plausibility, but their complexity still poses a serious research challenge to enable their energy-efficient design for resource-constrained scenarios (like embedded systems, IoT-Edge, etc.).

Continual Learning

SparkXD: A Framework for Resilient and Energy-Efficient Spiking Neural Network Inference using Approximate DRAM

no code implementations • 28 Feb 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

The key mechanisms of SparkXD are: (1) improving the SNN error tolerance through fault-aware training that considers bit errors from approximate DRAM, (2) analyzing the error tolerance of the improved SNN model to find the maximum tolerable bit error rate (BER) that meets the targeted accuracy constraint, and (3) energy-efficient DRAM data mapping for the resilient SNN model that maps the weights in the appropriate DRAM location to minimize the DRAM access energy.
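
Mechanism (2) can be sketched as a simple sweep: given an accuracy-versus-BER evaluation and a target accuracy, pick the largest bit error rate that still meets it. The `toy_accuracy` stand-in, the candidate BER grid, and all names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def max_tolerable_ber(accuracy_at_ber, candidate_bers, min_accuracy):
    """Return the largest bit error rate whose evaluated accuracy still meets
    the target, so the approximate DRAM can be operated down to that error rate."""
    tolerable = [b for b in candidate_bers if accuracy_at_ber(b) >= min_accuracy]
    return max(tolerable) if tolerable else None

# Toy stand-in for "evaluate the fault-aware-trained SNN under a given BER".
toy_accuracy = lambda ber: 0.92 - 40.0 * ber
bers = np.logspace(-6, -2, num=9)                        # 1e-6 ... 1e-2
print(max_tolerable_ber(toy_accuracy, bers, min_accuracy=0.90))
```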

FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks

no code implementations • 17 Jul 2020 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique

FSpiNN reduces the computational requirements by reducing the number of neuronal operations, the STDP-based synaptic weight updates, and the STDP complexity.
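
For context on the STDP-based weight updates mentioned above, here is a generic pair-based, trace-driven STDP step (a textbook-style formulation; the learning rates, traces, and clipping bounds are assumptions, not FSpiNN's optimized rule).

```python
import numpy as np

def stdp_update(w, pre_spike, post_spike, pre_trace, post_trace,
                a_plus=0.01, a_minus=0.012, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate on post-spikes in proportion to the presynaptic
    trace, depress on pre-spikes in proportion to the postsynaptic trace, then clip."""
    dw = a_plus * np.outer(post_spike, pre_trace) - a_minus * np.outer(post_trace, pre_spike)
    return np.clip(w + dw, w_min, w_max)

rng = np.random.default_rng(4)
n_pre, n_post = 8, 4
w = rng.uniform(0.2, 0.8, size=(n_post, n_pre))          # synapse matrix (post x pre)
pre_spike  = rng.integers(0, 2, n_pre)                   # spikes in the current timestep
post_spike = rng.integers(0, 2, n_post)
pre_trace  = rng.uniform(0, 1, n_pre)                    # low-pass-filtered spike history
post_trace = rng.uniform(0, 1, n_post)
print(stdp_update(w, pre_spike, post_spike, pre_trace, post_trace).round(3))
```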

Quantization

DRMap: A Generic DRAM Data Mapping Policy for Energy-Efficient Processing of Convolutional Neural Networks

no code implementations • 21 Apr 2020 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Many convolutional neural network (CNN) accelerators face performance- and energy-efficiency challenges, which are crucial for embedded implementations, due to high DRAM access latency and energy.

ROMANet: Fine-Grained Reuse-Driven Off-Chip Memory Access Management and Data Organization for Deep Neural Network Accelerators

no code implementations • 4 Feb 2019 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique

Our experimental results show that ROMANet saves DRAM access energy by 12% for AlexNet, by 36% for VGG-16, and by 46% for MobileNet, while also improving DRAM throughput by 10%, compared to the state-of-the-art.

Management, Scheduling

MPNA: A Massively-Parallel Neural Array Accelerator with Dataflow Optimization for Convolutional Neural Networks

no code implementations • 30 Oct 2018 • Muhammad Abdullah Hanif, Rachmad Vidya Wicaksana Putra, Muhammad Tanvir, Rehan Hafiz, Semeen Rehman, Muhammad Shafique

State-of-the-art accelerators for Convolutional Neural Networks (CNNs) typically focus on accelerating only the convolutional layers, giving little attention to the fully-connected layers.
