Search Results for author: Mohammad Samragh

Found 15 papers, 1 paper with code

Weight subcloning: direct initialization of transformers using larger pretrained ones

no code implementations 14 Dec 2023 Mohammad Samragh, Mehrdad Farajtabar, Sachin Mehta, Raviteja Vemulapalli, Fartash Faghri, Devang Naik, Oncel Tuzel, Mohammad Rastegari

The usual practice of transfer learning overcomes this challenge by initializing the model with the weights of a pretrained model of the same size and specification, which speeds up convergence and training.

Image Classification Transfer Learning
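The weight-subcloning idea above (initializing a smaller transformer directly from a larger pretrained one) can be sketched very roughly as truncating a pretrained weight matrix to the target dimensions. The function name and the simple row/column truncation below are illustrative assumptions, not the paper's actual neuron-selection procedure.

```python
import numpy as np

def subclone(parent_weight: np.ndarray, out_dim: int, in_dim: int) -> np.ndarray:
    """Initialize a smaller layer from a larger pretrained weight matrix
    by keeping a leading sub-block (illustrative; the paper selects which
    rows/columns to keep more carefully)."""
    return parent_weight[:out_dim, :in_dim].copy()

# Toy example: an 8x8 pretrained layer initializes a 4x4 destination layer.
rng = np.random.default_rng(0)
parent = rng.standard_normal((8, 8))
child = subclone(parent, 4, 4)
print(child.shape)  # (4, 4)
```

The destination model then starts training from these copied weights instead of a random initialization.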

Improving vision-inspired keyword spotting using dynamic module skipping in streaming conformer encoder

no code implementations 31 Aug 2023 Alexandre Bittar, Paul Dixon, Mohammad Samragh, Kumari Nishu, Devang Naik

Using a vision-inspired keyword spotting framework, we propose an architecture with input-dependent dynamic depth capable of processing streaming audio.

Keyword Spotting
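The input-dependent dynamic depth described above can be sketched as a gate that decides, per input, whether each encoder block runs or is bypassed. The energy-threshold gate and toy residual blocks here are illustrative assumptions, not the paper's conformer architecture or skipping criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
blocks = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]

def gate(x: np.ndarray) -> bool:
    """Input-dependent skip decision: run the block only when the
    frame's energy exceeds a threshold (illustrative heuristic)."""
    return float(np.mean(x * x)) > 0.5

def forward(x: np.ndarray):
    """Dynamic-depth encoder: each block is executed or skipped
    (identity) depending on the current input."""
    executed = 0
    for W in blocks:
        if gate(x):
            x = x + np.tanh(x @ W)  # residual block
            executed += 1
        # else: skip this module entirely, saving compute
    return x, executed

x = rng.standard_normal(8)
y, n_run = forward(x)
print(n_run, "of", len(blocks), "blocks executed")
```

On quiet or easy frames the gate fires less often, which is what yields the compute savings for streaming audio.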

Trojan Signatures in DNN Weights

no code implementations 7 Sep 2021 Greg Fields, Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar, Tara Javidi

Deep neural networks have been shown to be vulnerable to backdoor, or trojan, attacks, in which an adversary embeds a trigger in the network at training time so that the model correctly classifies all standard inputs but produces a targeted, incorrect classification on any input containing the trigger.

Unsupervised Information Obfuscation for Split Inference of Neural Networks

no code implementations 23 Apr 2021 Mohammad Samragh, Hossein Hosseini, Aleksei Triastcyn, Kambiz Azarian, Joseph Soriaga, Farinaz Koushanfar

In our method, the edge device runs the model up to a split layer determined based on its computational capacity.

Private Split Inference of Deep Networks

no code implementations 1 Jan 2021 Mohammad Samragh, Hossein Hosseini, Kambiz Azarian, Joseph Soriaga

Splitting network computations between the edge device and the cloud server is a promising approach for enabling low edge-compute and private inference of neural networks.
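The split-inference setup described in the two entries above can be sketched as follows: the edge device runs the layers up to a split point and only the intermediate activation leaves the device. The two-layer toy network and the `edge_forward` / `cloud_forward` names are illustrative assumptions, not the papers' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 8))  # layers up to the split, run on-device
W2 = rng.standard_normal((8, 3))   # remaining layers, run on the server

def edge_forward(x: np.ndarray) -> np.ndarray:
    """Edge device computes up to the split layer and ships the
    intermediate activation (not the raw input) to the cloud."""
    return np.maximum(x @ W1, 0.0)  # ReLU at the split layer

def cloud_forward(z: np.ndarray) -> int:
    """Cloud server finishes inference from the split-layer activation."""
    logits = z @ W2
    return int(np.argmax(logits))

x = rng.standard_normal(16)
z = edge_forward(x)      # only z is transmitted off-device
label = cloud_forward(z)
print(label)
```

Choosing a deeper split layer raises edge compute but exposes a more abstract (and typically less invertible) activation to the server, which is the tension the obfuscation work above addresses.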

CLEANN: Accelerated Trojan Shield for Embedded Neural Networks

no code implementations 4 Sep 2020 Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, Farinaz Koushanfar

We propose CLEANN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications.

Dictionary Learning

GeneCAI: Genetic Evolution for Acquiring Compact AI

no code implementations 8 Apr 2020 Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

In the contemporary big data realm, Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy.

Model Compression

ASCAI: Adaptive Sampling for acquiring Compact AI

no code implementations 15 Nov 2019 Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

This paper introduces ASCAI, a novel adaptive sampling methodology that can learn how to effectively compress Deep Neural Networks (DNNs) for accelerated inference on resource-constrained platforms.

Model Compression

CodeX: Bit-Flexible Encoding for Streaming-based FPGA Acceleration of DNNs

no code implementations 17 Jan 2019 Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar

CodeX incorporates nonlinear encoding into the computation flow of neural networks to save memory.

RAPIDNN: In-Memory Deep Neural Network Acceleration Framework

no code implementations 15 Jun 2018 Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, Tajana Rosing

To enable in-memory processing, RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations, i.e., multiplication, addition, activation functions, and pooling.

Clustering Speech Recognition +3

ResBinNet: Residual Binary Neural Network

no code implementations ICLR 2018 Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar

Recent efforts on training lightweight binary neural networks offer promising execution/memory efficiency.

Binarization
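Residual binarization, the core idea shared by ResBinNet above and ReBNet below, approximates a real-valued tensor as a sum of scaled sign tensors, where each level binarizes the residual left over by the previous one. The mean-absolute-value scaling used in this sketch is a common formulation and an assumption about the exact scheme.

```python
import numpy as np

def residual_binarize(w: np.ndarray, levels: int = 2) -> np.ndarray:
    """Approximate w as sum_i gamma_i * sign(r_i), where r_i is the
    residual left after the previous binarization levels."""
    approx = np.zeros_like(w)
    for _ in range(levels):
        residual = w - approx
        gamma = np.abs(residual).mean()      # per-level scaling factor
        approx += gamma * np.sign(residual)  # one more binary level
    return approx

rng = np.random.default_rng(2)
w = rng.standard_normal(1000)
err1 = np.abs(w - residual_binarize(w, levels=1)).mean()
err2 = np.abs(w - residual_binarize(w, levels=2)).mean()
print(err2 < err1)  # more residual levels -> lower approximation error
```

Each additional level costs one more binary tensor and multiplier but tightens the approximation, which is the accuracy/efficiency knob these papers expose.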

Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks

no code implementations ICLR 2018 Bita Darvish Rouhani, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model.

ReBNet: Residual Binarized Neural Network

1 code implementation 3 Nov 2017 Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar

We show that state-of-the-art methods for optimizing the accuracy of binary networks significantly increase implementation cost and complexity.

Binarization General Classification

DeepFense: Online Accelerated Defense Against Adversarial Deep Learning

no code implementations 8 Sep 2017 Bita Darvish Rouhani, Mohammad Samragh, Mojan Javaheripi, Tara Javidi, Farinaz Koushanfar

Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems.
