Search Results for author: Maksim Jenihhin

Found 12 papers, 0 papers with code

SAFFIRA: a Framework for Assessing the Reliability of Systolic-Array-Based DNN Accelerators

no code implementations • 5 Mar 2024 Mahdi Taheri, Masoud Daneshtalab, Jaan Raik, Maksim Jenihhin, Salvatore Pappalardo, Paul Jimenez, Bastien Deveautour, Alberto Bosio

Systolic arrays have emerged as a prominent architecture for Deep Neural Network (DNN) hardware accelerators, providing the high-throughput, low-latency performance essential for deploying DNNs across diverse applications.
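
As context for this entry, the sketch below is a minimal behavioral model (not the SAFFIRA framework itself) of how an output-stationary systolic array computes a matrix product: each processing element accumulates one output, and operands arrive with a skew of one cycle per row/column. All names and the cycle model are illustrative assumptions.

```python
import numpy as np

def systolic_matmul(A, B):
    """Behavioral model of an output-stationary systolic array: PE (i, j)
    accumulates A[i, k] * B[k, j], with operands skewed so that the pair
    for index k reaches PE (i, j) at cycle t = i + j + k."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    acc = np.zeros((M, N), dtype=A.dtype)     # one accumulator per PE
    total_cycles = M + N + K - 2              # cycles until all skewed data has drained
    for t in range(total_cycles):
        for i in range(M):
            for j in range(N):
                k = t - i - j                 # operand pair arriving at PE (i, j) this cycle
                if 0 <= k < K:
                    acc[i, j] += A[i, k] * B[k, j]
    return acc

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert np.array_equal(systolic_matmul(A, B), A @ B)   # matches a plain matrix product
```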

AdAM: Adaptive Fault-Tolerant Approximate Multiplier for Edge DNN Accelerators

no code implementations • 5 Mar 2024 Mahdi Taheri, Natalia Cherezova, Samira Nazari, Ahsan Rafiq, Ali Azarpeyvand, Tara Ghasempouri, Masoud Daneshtalab, Jaan Raik, Maksim Jenihhin

In this paper, we propose an architecture of a novel adaptive fault-tolerant approximate multiplier tailored for ASIC-based DNN accelerators.
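The snippet below is not the AdAM design; it is a toy sketch of the general idea behind approximate multipliers (dropping low-order partial products to save hardware) with a crude stand-in for adaptivity (an exact fallback path). Function names, the truncation scheme, and the fallback flag are all assumptions for illustration.

```python
def approx_multiply(a: int, b: int, drop_bits: int = 4) -> int:
    """Toy approximate multiplier: discard the low `drop_bits` bits of one
    operand before multiplying, which in hardware removes the corresponding
    partial-product rows (smaller and faster, at the cost of a bounded error)."""
    return ((a >> drop_bits) * b) << drop_bits

def adaptive_multiply(a: int, b: int, exact: bool = False) -> int:
    """Crude stand-in for adaptivity: fall back to the exact product when
    higher accuracy (or fault masking) is required."""
    return a * b if exact else approx_multiply(a, b)

print(adaptive_multiply(1234, 567))         # approximate product: 698544
print(adaptive_multiply(1234, 567, True))   # exact product: 699678
```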

Exploration of Activation Fault Reliability in Quantized Systolic Array-Based DNN Accelerators

no code implementations • 17 Jan 2024 Mahdi Taheri, Natalia Cherezova, Mohammad Saeed Ansari, Maksim Jenihhin, Ali Mahani, Masoud Daneshtalab, Jaan Raik

The stringent requirements for the reliability of Deep Neural Network (DNN) accelerators stand alongside the need to reduce the computational burden on hardware platforms, i.e., to reduce energy consumption and execution time while increasing the efficiency of DNN accelerators.

Quantization
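
Since this entry studies fault reliability in quantized accelerators and is tagged Quantization, here is a minimal sketch of symmetric uniform quantization of activations; it is a generic textbook scheme, not the quantization framework used in the paper, and the helper name and bit width are assumptions.

```python
import numpy as np

def quantize_sym(x: np.ndarray, n_bits: int = 8):
    """Symmetric uniform quantization: map float activations to signed n_bits
    integers; returns the integer tensor and the scale so that x_q * scale ~= x."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax if np.any(x) else 1.0
    x_q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return x_q, scale

acts = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_sym(acts)
print(np.abs(q * s - acts).max())   # worst-case quantization error
```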

Enhancing Fault Resilience of QNNs by Selective Neuron Splitting

no code implementations • 16 Jun 2023 Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin

Thereafter, a novel method for splitting the critical neurons is proposed that enables the design of a Lightweight Correction Unit (LCU) in the accelerator without redesigning its computational part.
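As a rough illustration of the neuron-splitting idea (the paper's criticality analysis and Lightweight Correction Unit are not modeled here), the sketch below duplicates one hidden neuron and halves its outgoing weights, which leaves the network function unchanged while halving the downstream contribution of a fault in either copy. Matrix shapes and names are assumptions.

```python
import numpy as np

def split_neuron(W_in, W_out, idx):
    """Split hidden neuron `idx` into two identical copies whose outgoing
    weights are halved: the layer's function is unchanged, but a fault in
    one copy contributes only half of the original error downstream."""
    W_in_s = np.vstack([W_in, W_in[idx:idx + 1, :]])            # duplicate incoming weights
    W_out_s = np.hstack([W_out, 0.5 * W_out[:, idx:idx + 1]])   # appended copy gets half weight
    W_out_s[:, idx] *= 0.5                                      # original copy halved too
    return W_in_s, W_out_s

rng = np.random.default_rng(0)
W1, W2, x = rng.normal(size=(5, 3)), rng.normal(size=(2, 5)), rng.normal(size=3)
W1s, W2s = split_neuron(W1, W2, idx=2)
assert np.allclose(W2 @ np.maximum(W1 @ x, 0), W2s @ np.maximum(W1s @ x, 0))  # function preserved
```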

APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors

no code implementations • 31 May 2023 Mahdi Taheri, Mohammad Hasan Ahmadilivani, Maksim Jenihhin, Masoud Daneshtalab, Jaan Raik

Nowadays, the extensive exploitation of Deep Neural Networks (DNNs) in safety-critical applications raises new reliability concerns.

A Novel Fault-Tolerant Logic Style with Self-Checking Capability

no code implementations • 31 May 2023 Mahdi Taheri, Saeideh Sheikhpour, Ali Mahani, Maksim Jenihhin

We introduce a novel logic style with self-checking capability to enhance hardware reliability at the logic level.
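
The toy model below is not the proposed logic style; it only illustrates the generic self-checking principle of evaluating a function on two rails (here, a dual-rail-style AND with the complement computed via De Morgan) and flagging a fault when the rails agree. Names and the stuck-at hook are assumptions.

```python
def self_checking_and(a: int, b: int, stuck_true_rail=None):
    """Toy dual-rail-style self-checking AND: one rail computes f = a & b, the
    other computes ~f from complemented inputs. Fault-free rails always disagree,
    so agreement signals a fault. `stuck_true_rail` forces a stuck-at fault."""
    true_rail = (a & b) if stuck_true_rail is None else stuck_true_rail
    false_rail = (a ^ 1) | (b ^ 1)
    return true_rail, true_rail == false_rail   # (output, fault_detected)

for a in (0, 1):
    for b in (0, 1):
        assert self_checking_and(a, b)[1] is False       # fault-free: no alarm
print(self_checking_and(1, 1, stuck_true_rail=0))        # (0, True): stuck-at-0 detected
```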

A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks

no code implementations • 9 May 2023 Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin

Through this SLR, three kinds of methods for the reliability assessment of DNNs are identified: Fault Injection (FI), Analytical, and Hybrid methods.
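To give a feel for the FI category from this taxonomy, here is a minimal fault-injection sketch: flip one random bit in a float32 weight tensor and observe the output deviation. The helper name and experiment are assumptions, not examples from the survey.

```python
import numpy as np

def flip_random_bit(weights, rng):
    """Minimal fault-injection (FI) step: flip one random bit of one randomly
    chosen float32 weight by viewing the tensor as raw 32-bit words."""
    w = np.array(weights, dtype=np.float32)    # contiguous private copy
    bits = w.view(np.uint32).ravel()
    i, b = int(rng.integers(bits.size)), int(rng.integers(32))
    bits[i] = bits[i] ^ np.uint32(1 << b)
    return w

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4)).astype(np.float32)
x = rng.normal(size=4).astype(np.float32)
W_faulty = flip_random_bit(W, rng)
print(np.abs(W @ x - W_faulty @ x).max())      # output deviation caused by one bit flip
```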

DeepAxe: A Framework for Exploration of Approximation and Reliability Trade-offs in DNN Accelerators

no code implementations • 14 Mar 2023 Mahdi Taheri, Mohammad Riazati, Mohammad Hasan Ahmadilivani, Maksim Jenihhin, Masoud Daneshtalab, Jaan Raik, Mikael Sjodin, Bjorn Lisper

The framework enables selective approximation of reliability-critical DNNs, providing a set of Pareto-optimal DNN implementation design space points for the target resource utilization requirements.
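Since this entry revolves around Pareto-optimal design points, the sketch below shows a generic non-dominated-point filter over hypothetical (accuracy, reliability, negated resource cost) tuples; the numbers and helper are invented for illustration and are not DeepAxe results.

```python
def pareto_front(points):
    """Return the non-dominated points, where each point is a tuple on which
    higher is better along every axis (resource cost is negated for that reason)."""
    front = []
    for p in points:
        dominated = any(q != p and all(q[i] >= p[i] for i in range(len(p))) for q in points)
        if not dominated:
            front.append(p)
    return front

designs = [
    # (accuracy, reliability, -LUT_cost): illustrative numbers only
    (0.91, 0.88, -1200),
    (0.93, 0.80, -1500),
    (0.90, 0.90, -1100),
    (0.89, 0.85, -1300),   # dominated by the previous point
]
print(pareto_front(designs))   # first three points form the Pareto front
```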

DeepVigor: Vulnerability Value Ranges and Factors for DNNs' Reliability Assessment

no code implementations • 13 Mar 2023 Mohammad Hasan Ahmadilivani, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, Maksim Jenihhin

In this work, we propose a novel accurate, fine-grain, metric-oriented, and accelerator-agnostic method called DeepVigor that provides vulnerability value ranges for DNN neurons' outputs.
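The sketch below is a crude numerical stand-in for the idea of a vulnerability value range, not the DeepVigor method: it sweeps an additive error on one neuron's activation and reports the delta range over which the classifier's top-1 decision is unchanged. All names, the sweep, and the interval assumption are illustrative.

```python
import numpy as np

def safe_perturbation_range(W_out, h, neuron, deltas):
    """Sweep an additive error on one neuron's activation and return the
    (min, max) deltas that leave the argmax decision unchanged, assuming
    the safe set forms an interval."""
    base = int(np.argmax(W_out @ h))
    safe = []
    for d in deltas:
        h_f = h.copy()
        h_f[neuron] += d
        if int(np.argmax(W_out @ h_f)) == base:
            safe.append(d)
    return (min(safe), max(safe)) if safe else None

rng = np.random.default_rng(1)
W_out, h = rng.normal(size=(3, 8)), rng.normal(size=8)
print(safe_perturbation_range(W_out, h, neuron=0, deltas=np.linspace(-5, 5, 101)))
```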

Unsupervised Recycled FPGA Detection Using Symmetry Analysis

no code implementations • 3 Mar 2023 Tanvir Ahmad Tarique, Foisal Ahmed, Maksim Jenihhin, Liakot Ali

Recycled field-programmable gate arrays (FPGAs) have recently become a significant hardware security problem due to the proliferation of the semiconductor supply chain.

Density Ratio Estimation • Unsupervised Anomaly Detection
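
In the spirit of the unsupervised anomaly detection tag on this entry, here is a minimal sketch that flags outliers among hypothetical per-FPGA symmetry-feature vectors using a kNN-distance score; the features, numbers, and scoring rule are assumptions and do not reproduce the paper's symmetry-analysis pipeline.

```python
import numpy as np

def knn_anomaly_scores(X, k=3):
    """Unsupervised anomaly score: mean distance to the k nearest neighbours.
    Chips whose symmetry features drift from the population score higher."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                        # ignore self-distance
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

rng = np.random.default_rng(0)
fresh = rng.normal(0.0, 0.05, size=(20, 4))            # hypothetical symmetry features
recycled = rng.normal(0.4, 0.05, size=(2, 4))          # drifted by aging (made-up values)
scores = knn_anomaly_scores(np.vstack([fresh, recycled]))
print(np.argsort(scores)[-2:])                         # the drifted chips (indices 20, 21) stand out
```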

Modeling Gate-Level Abstraction Hierarchy Using Graph Convolutional Neural Networks to Predict Functional De-Rating Factors

no code implementations • 5 Apr 2021 Aneesh Balakrishnan, Thomas Lange, Maximilien Glorieux, Dan Alexandrescu, Maksim Jenihhin

In the preliminary phase of the work, the main goal is to build a GCN that is able to take a gate-level netlist as input after transforming it into a Probabilistic Bayesian Graph in the form of the Graph Modeling Language (GML).
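For orientation only, the sketch below shows a single Kipf-and-Welling-style GCN propagation step over a tiny hypothetical netlist adjacency matrix; it is not the authors' model, and the graph, feature sizes, and names are invented for illustration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W),
    aggregating each gate's features with those of its netlist neighbours."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# Hypothetical 4-gate netlist: adjacency from wire connectivity, 3 features per gate.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
H, W = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)                        # (4, 2) per-gate embeddings
```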
