Search Results for author: Farinaz Koushanfar

Found 63 papers, 16 papers with code

RankMap: A Platform-Aware Framework for Distributed Learning from Dense Datasets

1 code implementation 27 Mar 2015 Azalia Mirhoseini, Eva L. Dyer, Ebrahim M. Songhori, Richard G. Baraniuk, Farinaz Koushanfar

This paper introduces RankMap, a platform-aware end-to-end framework for efficient execution of a broad class of iterative learning algorithms for massive and dense datasets.

Distributed Computing Scheduling

Sub-Linear Privacy-Preserving Near-Neighbor Search

no code implementations 6 Dec 2016 M. Sadegh Riazi, Beidi Chen, Anshumali Shrivastava, Dan Wallach, Farinaz Koushanfar

In Near-Neighbor Search (NNS), a new client queries a database (held by a server) for the most similar data (near-neighbors) given a certain similarity metric.

Privacy Preserving

DeepSecure: Scalable Provably-Secure Deep Learning

no code implementations 24 May 2017 Bita Darvish Rouhani, M. Sadegh Riazi, Farinaz Koushanfar

This paper proposes DeepSecure, a novel framework that enables scalable execution of the state-of-the-art Deep Learning (DL) models in a privacy-preserving setting.

Cryptography and Security

DeepFense: Online Accelerated Defense Against Adversarial Deep Learning

no code implementations 8 Sep 2017 Bita Darvish Rouhani, Mohammad Samragh, Mojan Javaheripi, Tara Javidi, Farinaz Koushanfar

Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems.

ReBNet: Residual Binarized Neural Network

1 code implementation 3 Nov 2017 Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar

We show that state-of-the-art methods for optimizing the accuracy of binary networks significantly increase the implementation cost and complexity.

Binarization General Classification

ResBinNet: Residual Binary Neural Network

no code implementations ICLR 2018 Mohammad Ghasemzadeh, Mohammad Samragh, Farinaz Koushanfar

Recent efforts on training light-weight binary neural networks offer promising execution/memory efficiency.

Binarization

Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks

no code implementations ICLR 2018 Bita Darvish Rouhani, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart the potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model.

Chameleon: A Hybrid Secure Computation Framework for Machine Learning Applications

no code implementations 10 Jan 2018 M. Sadegh Riazi, Christian Weinert, Oleksandr Tkachenko, Ebrahim M. Songhori, Thomas Schneider, Farinaz Koushanfar

Chameleon departs from the common assumption of additive or linear secret sharing models where three or more parties need to communicate in the online phase: the framework allows two parties with private inputs to communicate in the online phase under the assumption of a third node generating correlated randomness in an offline phase.

BIG-bench Machine Learning
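
To make the offline/online split described in the Chameleon excerpt concrete, below is a minimal Python sketch of multiplication (Beaver) triples, one standard form of correlated randomness a third node can precompute so that two parties can multiply secret-shared inputs in the online phase. The toy ring size, the inputs, and the triple handling are illustrative assumptions, not Chameleon's actual protocol or implementation.

```python
import secrets

MOD = 2 ** 32  # toy ring for illustration; real protocols fix a ring/field per design

def share(x):
    """Additively secret-share x between two parties."""
    r = secrets.randbelow(MOD)
    return r, (x - r) % MOD

# Offline phase (third node): generate a correlated triple c = a * b and share it.
a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
c = (a * b) % MOD
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share(c)

# Online phase (two parties holding shares of private inputs x and y):
x, y = 7, 11
x0, x1 = share(x)
y0, y1 = share(y)

# Each party masks its input shares with the triple; the masked values are opened.
d = (x0 - a0 + x1 - a1) % MOD   # d = x - a (public after one exchange)
e = (y0 - b0 + y1 - b1) % MOD   # e = y - b (public after one exchange)

# Local reconstruction of shares of x*y using only the public d, e and triple shares.
z0 = (d * e + d * b0 + e * a0 + c0) % MOD
z1 = (d * b1 + e * a1 + c1) % MOD
assert (z0 + z1) % MOD == (x * y) % MOD
```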

DeepSigns: A Generic Watermarking Framework for IP Protection of Deep Learning Models

2 code implementations 2 Apr 2018 Bita Darvish Rouhani, Huili Chen, Farinaz Koushanfar

The resulting models are therefore considered to be the IP of the model builder and need to be protected to preserve the owner's competitive advantage.

Cryptography and Security

AgileNet: Lightweight Dictionary-based Few-shot Learning

no code implementations 21 May 2018 Mohammad Ghasemzadeh, Fang Lin, Bita Darvish Rouhani, Farinaz Koushanfar, Ke Huang

The success of deep learning models is heavily tied to the use of massive amounts of labeled data and excessively long training times.

Few-Shot Learning

RAPIDNN: In-Memory Deep Neural Network Acceleration Framework

no code implementations 15 Jun 2018 Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, Farinaz Koushanfar, Tajana Rosing

To enable in-memory processing, RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations, i.e., multiplication, addition, activation functions, and pooling.

Clustering speech-recognition +3
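
As a plain-software analogue of the four primitives named in the RAPIDNN excerpt, the sketch below evaluates one fully connected layer using only elementwise multiplication, accumulation (addition), an activation function, and pooling. The layer sizes are arbitrary, and the non-volatile-memory lookup machinery that RAPIDNN actually builds is not modeled here.

```python
import numpy as np

def relu(x):                           # activation primitive
    return np.maximum(x, 0.0)

def max_pool_1d(x, width=2):           # pooling primitive
    return x[: len(x) // width * width].reshape(-1, width).max(axis=1)

def fc_layer(inputs, weights, bias):
    products = weights * inputs                    # multiplication primitive (elementwise)
    pre_activation = products.sum(axis=1) + bias   # addition/accumulation primitive
    return relu(pre_activation)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
print(max_pool_1d(fc_layer(x, W, b)))
```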

Adversarial Reprogramming of Text Classification Neural Networks

1 code implementation IJCNLP 2019 Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, Farinaz Koushanfar

Adversarial Reprogramming has demonstrated success in utilizing pre-trained neural network classifiers for alternative classification tasks without modification to the original network.

General Classification text-classification +1

CodeX: Bit-Flexible Encoding for Streaming-based FPGA Acceleration of DNNs

no code implementations 17 Jan 2019 Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar

CodeX incorporates nonlinear encoding into the computation flow of neural networks to save memory.

Peer-to-peer Federated Learning on Graphs

no code implementations 31 Jan 2019 Anusha Lalitha, Osman Cihan Kilinc, Tara Javidi, Farinaz Koushanfar

We consider the problem of training a machine learning model over a network of nodes in a fully decentralized framework.

Federated Learning

SWNet: Small-World Neural Networks and Rapid Convergence

no code implementations 9 Apr 2019 Mojan Javaheripi, Bita Darvish Rouhani, Farinaz Koushanfar

This transformation leverages our key observation that, for a given level of accuracy, convergence is fastest when the network topology reaches the boundary of a Small-World Network.

General Classification Image Classification
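
The "boundary of a Small-World Network" idea can be explored with standard graph tooling. The sketch below uses networkx's Watts-Strogatz generator and small-world coefficient purely as a generic illustration of small-world metrics; it is not SWNet's construction or training procedure.

```python
import networkx as nx

# Watts-Strogatz graphs interpolate between a regular lattice (p=0) and a random
# graph (p=1); small-world behavior appears at intermediate rewiring probabilities.
for p in (0.0, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n=100, k=6, p=p, seed=0)
    sigma = nx.sigma(G, niter=5, nrand=2, seed=0)  # sigma > 1 indicates small-world structure
    print(f"p={p:.1f}  clustering={nx.average_clustering(G):.3f}  sigma={sigma:.2f}")
```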

BlackMarks: Black-box Multi-bit Watermarking for Deep Neural Networks

no code implementations ICLR 2019 Huili Chen, Bita Darvish Rouhani, Farinaz Koushanfar

To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme.
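
A generic black-box decoding step of this kind can be sketched as follows: query the model on the WM key images and map each predicted class to a bit through an owner-chosen encoding table. The class-to-bit table, the stand-in model, and the 4-bit signature below are hypothetical; BlackMarks' actual encoding scheme is defined in the paper.

```python
import numpy as np

def decode_signature(model_predict, key_images, class_to_bit):
    """Query the (black-box) model on the WM key images and map each predicted
    class index to a bit using the owner's encoding table."""
    preds = model_predict(key_images)
    return np.array([class_to_bit[int(c)] for c in preds], dtype=np.uint8)

# Hypothetical 10-class encoding: even classes carry bit 0, odd classes bit 1.
class_to_bit = {c: c % 2 for c in range(10)}
owner_signature = np.array([1, 0, 1, 1], dtype=np.uint8)

fake_predict = lambda xs: np.array([3, 2, 7, 9])   # stand-in for the watermarked model
decoded = decode_signature(fake_predict, key_images=None, class_to_bit=class_to_bit)
ber = np.mean(decoded != owner_signature)           # bit error rate of the extracted WM
print(decoded, f"BER={ber:.2f}")
```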

Universal Adversarial Perturbations for Speech Recognition Systems

no code implementations 9 May 2019 Paarth Neekhara, Shehzeen Hussain, Prakhar Pandey, Shlomo Dubnov, Julian McAuley, Farinaz Koushanfar

In this work, we demonstrate the existence of universal adversarial audio perturbations that cause mis-transcription of audio signals by automatic speech recognition (ASR) systems.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Decentralized Bayesian Learning over Graphs

no code implementations 24 May 2019 Anusha Lalitha, Xinghan Wang, Osman Kilinc, Yongxi Lu, Tara Javidi, Farinaz Koushanfar

The proposed algorithm can be viewed as a Bayesian and peer-to-peer variant of federated learning in which each agent keeps a "posterior probability distribution" over the global model parameters.

Bayesian Inference Federated Learning
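
One common way to realize a peer-to-peer Bayesian update of this flavor is to let every agent take a local conjugate update and then pool its posterior with its neighbors' (log-linear pooling, which for Gaussians reduces to a precision-weighted average). The scalar Gaussian model, mixing matrix, and noise level below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def local_update(mean, var, obs, noise_var):
    """Conjugate Gaussian update of one agent's posterior over a scalar parameter."""
    precision = 1.0 / var + 1.0 / noise_var
    new_var = 1.0 / precision
    return new_var * (mean / var + obs / noise_var), new_var

def mix_with_neighbors(means, variances, weights):
    """Log-linear (geometric) pooling of Gaussian posteriors = precision-weighted average."""
    precisions = 1.0 / np.asarray(variances)
    mixed_precision = np.dot(weights, precisions)
    mixed_mean = np.dot(weights, precisions * np.asarray(means)) / mixed_precision
    return mixed_mean, 1.0 / mixed_precision

rng = np.random.default_rng(1)
true_theta, noise_var = 2.0, 0.5
means, variances = [0.0, 0.0, 0.0], [10.0, 10.0, 10.0]            # three agents, flat priors
W = np.array([[0.6, 0.2, 0.2], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6]])  # doubly stochastic mixing graph

for _ in range(50):
    for i in range(3):  # each agent observes its own noisy sample of the global parameter
        obs = true_theta + rng.normal(scale=noise_var ** 0.5)
        means[i], variances[i] = local_update(means[i], variances[i], obs, noise_var)
    pooled = [mix_with_neighbors(means, variances, W[i]) for i in range(3)]  # peer-to-peer pooling
    means, variances = [m for m, _ in pooled], [v for _, v in pooled]

print([round(m, 3) for m in means])  # all agents concentrate near true_theta
```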

A Neural-based Program Decompiler

no code implementations 28 Jun 2019 Cheng Fu, Huili Chen, Haolan Liu, Xinyun Chen, Yuandong Tian, Farinaz Koushanfar, Jishen Zhao

Reverse engineering of binary executables is a critical problem in the computer security domain.

Computer Security Malware Detection

ASCAI: Adaptive Sampling for acquiring Compact AI

no code implementations 15 Nov 2019 Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

This paper introduces ASCAI, a novel adaptive sampling methodology that can learn how to effectively compress Deep Neural Networks (DNNs) for accelerated inference on resource-constrained platforms.

Model Compression

Scratch that! An Evolution-based Adversarial Attack against Neural Networks

1 code implementation 5 Dec 2019 Malhar Jere, Loris Rossi, Briland Hitaj, Gabriela Ciocarlie, Giacomo Boracchi, Farinaz Koushanfar

We study black-box adversarial attacks for image classifiers in a constrained threat model, where adversaries can only modify a small fraction of pixels in the form of scratches on an image.

Adversarial Attack Image Captioning +1

Principal Component Properties of Adversarial Samples

no code implementations 7 Dec 2019 Malhar Jere, Sandro Herbig, Christine Lind, Farinaz Koushanfar

Deep Neural Networks for image classification have been found to be vulnerable to adversarial samples, which consist of sub-perceptual noise added to a benign image that can easily fool trained neural networks, posing a significant risk to their commercial deployment.

Image Classification

FastWave: Accelerating Autoregressive Convolutional Neural Networks on FPGA

no code implementations 9 Feb 2020 Shehzeen Hussain, Mojan Javaheripi, Paarth Neekhara, Ryan Kastner, Farinaz Koushanfar

While WaveNet produces state-of-the-art audio generation results, the naive inference implementation is quite slow; it takes a few minutes to generate just one second of audio on a high-end GPU.

Audio Generation Audio Synthesis +3
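
The slowness of naive autoregressive inference comes from its strict sequential dependency: one forward pass per output sample. The toy loop below, with a stand-in `predict_next` model and a hypothetical receptive field, illustrates that structure; it is not WaveNet or FastWave's accelerated kernel.

```python
import numpy as np

def naive_autoregressive_generate(predict_next, receptive_field, n_samples):
    """Generate audio one sample at a time: every step conditions on previously
    generated samples, so the time loop cannot be parallelized."""
    audio = np.zeros(receptive_field + n_samples, dtype=np.float32)
    for t in range(n_samples):
        context = audio[t : t + receptive_field]
        audio[receptive_field + t] = predict_next(context)  # one forward pass per sample
    return audio[receptive_field:]

# Stand-in for a trained model; 16,000 sequential calls are needed for one second at 16 kHz.
toy_model = lambda ctx: 0.9 * ctx[-1] + 0.01
one_second = naive_autoregressive_generate(toy_model, receptive_field=1024, n_samples=16000)
print(one_second.shape)
```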

SynFi: Automatic Synthetic Fingerprint Generation

1 code implementation 16 Feb 2020 M. Sadegh Riazi, Seyed M. Chavoshian, Farinaz Koushanfar

Authentication and identification methods based on human fingerprints are ubiquitous in several systems ranging from government organizations to consumer products.

Super-Resolution

GeneCAI: Genetic Evolution for Acquiring Compact AI

no code implementations 8 Apr 2020 Mojan Javaheripi, Mohammad Samragh, Tara Javidi, Farinaz Koushanfar

In the contemporary big data realm, Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy.

Model Compression

CLEANN: Accelerated Trojan Shield for Embedded Neural Networks

no code implementations 4 Sep 2020 Mojan Javaheripi, Mohammad Samragh, Gregory Fields, Tara Javidi, Farinaz Koushanfar

We propose CLEANN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications.

Dictionary Learning

A Singular Value Perspective on Model Robustness

no code implementations 7 Dec 2020 Malhar Jere, Maghav Kumar, Farinaz Koushanfar

Convolutional Neural Networks (CNNs) have made significant progress on several computer vision benchmarks, but are fraught with numerous non-human biases such as vulnerability to adversarial samples.

ProFlip: Targeted Trojan Attack With Progressive Bit Flips

no code implementations ICCV 2021 Huili Chen, Cheng Fu, Jishen Zhao, Farinaz Koushanfar

In this work, we present ProFlip, the first targeted Trojan attack framework that can divert the prediction of the DNN to the target class by progressively identifying and flipping a small set of bits in model parameters.
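
At the representation level, "flipping a bit in a model parameter" means toggling one bit of the weight's stored IEEE-754 encoding. The sketch below shows that operation on a float32 array; the bit-selection strategy that is the core of ProFlip is not reproduced here.

```python
import numpy as np

def flip_bit(weights, index, bit):
    """Toggle one bit of the float32 weight at `index` by XOR-ing its raw
    IEEE-754 representation (bit 31 = sign, bits 30-23 = exponent)."""
    raw = weights.view(np.uint32).copy()
    raw[index] ^= np.uint32(1 << bit)
    return raw.view(np.float32)

w = np.array([0.25, -1.5, 0.03], dtype=np.float32)
print(w, "->", flip_bit(w, index=0, bit=30))  # flipping a high exponent bit blows up the weight
```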

Expressive Neural Voice Cloning

no code implementations 30 Jan 2021 Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, Farinaz Koushanfar, Julian McAuley

In this work, we propose a controllable voice cloning method that allows fine-grained control over various style aspects of the synthesized speech for an unseen speaker.

Speech Synthesis Style Transfer +1

TAD: Trigger Approximation based Black-box Trojan Detection for AI

no code implementations 3 Feb 2021 Xinqiao Zhang, Huili Chen, Farinaz Koushanfar

While DNNs are widely employed in security-sensitive fields, they have been shown to be vulnerable to Neural Trojan (NT) attacks that are controlled and activated by a stealthy trigger.

Autonomous Driving Medical Diagnosis

Cross-modal Adversarial Reprogramming

1 code implementation 15 Feb 2021 Paarth Neekhara, Shehzeen Hussain, Jinglong Du, Shlomo Dubnov, Farinaz Koushanfar, Julian McAuley

Recent works on adversarial reprogramming have shown that it is possible to repurpose neural networks for alternate tasks without modifying the network architecture or parameters.

Classification General Classification +1

Unsupervised Information Obfuscation for Split Inference of Neural Networks

no code implementations 23 Apr 2021 Mohammad Samragh, Hossein Hosseini, Aleksei Triastcyn, Kambiz Azarian, Joseph Soriaga, Farinaz Koushanfar

In our method, the edge device runs the model up to a split layer determined based on its computational capacity.
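
A minimal PyTorch sketch of that split is shown below: the first slice of an `nn.Sequential` runs on the edge device, and only its intermediate activations are handed to the server. The toy network and the split index are assumptions for illustration; the paper's information-obfuscation step applied to those activations is not shown.

```python
import torch
import torch.nn as nn

# A small stand-in network; any nn.Sequential can be split the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

split_layer = 4                      # chosen from the edge device's compute budget
edge_part = model[:split_layer]      # runs on the edge device
server_part = model[split_layer:]    # runs on the server

x = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    features = edge_part(x)          # only these activations leave the device
    logits = server_part(features)
print(features.shape, logits.shape)
```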

Trojan Signatures in DNN Weights

no code implementations 7 Sep 2021 Greg Fields, Mohammad Samragh, Mojan Javaheripi, Farinaz Koushanfar, Tara Javidi

Deep neural networks have been shown to be vulnerable to backdoor, or trojan, attacks where an adversary has embedded a trigger in the network at training time such that the model correctly classifies all standard inputs, but generates a targeted, incorrect classification on any input which contains the trigger.

HASHTAG: Hash Signatures for Online Detection of Fault-Injection Attacks on Deep Neural Networks

no code implementations 2 Nov 2021 Mojan Javaheripi, Farinaz Koushanfar

We propose HASHTAG, the first framework that enables high-accuracy detection of fault-injection attacks on Deep Neural Networks (DNNs) with provable bounds on detection performance.

Fault Detection
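
The general shape of hash-based fault detection can be sketched as follows: record a golden hash per layer at deployment time, then re-hash the live weights online and flag mismatches. SHA-256 and the toy layers below are stand-ins; HASHTAG's lightweight hash design and layer-sensitivity analysis are the paper's actual contribution.

```python
import hashlib
import numpy as np

def layer_signatures(layers):
    """Compute a golden hash per layer's weights (e.g., at deployment time)."""
    return {name: hashlib.sha256(w.tobytes()).hexdigest() for name, w in layers.items()}

def detect_fault_injection(layers, golden):
    """Re-hash the live weights and report any layer whose signature changed."""
    return [name for name, w in layers.items()
            if hashlib.sha256(w.tobytes()).hexdigest() != golden[name]]

layers = {"conv1": np.ones((16, 3, 3, 3), dtype=np.float32),
          "fc": np.zeros((10, 256), dtype=np.float32)}
golden = layer_signatures(layers)

layers["fc"][0, 0] = 1e9                         # simulated fault-injection / bit-flip
print(detect_fault_injection(layers, golden))    # -> ['fc']
```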

Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection

no code implementations 21 Feb 2022 Yein Kim, Huili Chen, Farinaz Koushanfar

The goal of federated learning (FL) is to train one global model by aggregating model parameters updated independently on edge devices without accessing users' private data.

backdoor defense Federated Learning +1

RoVISQ: Reduction of Video Service Quality via Adversarial Attacks on Deep Learning-based Video Compression

no code implementations 18 Mar 2022 Jung-Woo Chang, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar

In this paper, we conduct the first systematic study for adversarial attacks on deep learning-based video compression and downstream classification systems.

Adversarial Attack Classification +4

FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and Countering Deepfakes

1 code implementation 5 Apr 2022 Paarth Neekhara, Shehzeen Hussain, Xinqiao Zhang, Ke Huang, Julian McAuley, Farinaz Koushanfar

We demonstrate that FaceSigns can embed a 128-bit secret as an imperceptible image watermark that can be recovered with high bit recovery accuracy at several compression levels, while being non-recoverable when unseen Deepfake manipulations are applied.

Face Swapping Image Compression +1
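
Bit recovery accuracy for a 128-bit secret is simply the fraction of decoded bits that match the embedded message. The short sketch below computes it with a simulated decoder output (three flipped bits), since the encoder/decoder networks themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=128)           # 128-bit message embedded in the image

# Stand-in for the decoder output after benign compression: a few bits flip.
decoded = secret.copy()
decoded[rng.choice(128, size=3, replace=False)] ^= 1

bit_recovery_accuracy = np.mean(decoded == secret)
print(f"bit recovery accuracy: {bit_recovery_accuracy:.3f}")   # ~0.977 in this toy run
```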

An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks

1 code implementation 8 Apr 2022 Xinqiao Zhang, Huili Chen, Ke Huang, Farinaz Koushanfar

Deep Neural Networks (DNNs) have demonstrated unprecedented performance across various fields such as medical diagnosis and autonomous driving.

Autonomous Driving Medical Diagnosis

AdaTest: Reinforcement Learning and Adaptive Sampling for On-chip Hardware Trojan Detection

no code implementations 12 Apr 2022 Huili Chen, Xinqiao Zhang, Ke Huang, Farinaz Koushanfar

This paper proposes AdaTest, a novel adaptive test pattern generation framework for efficient and reliable Hardware Trojan (HT) detection.

Backdoor Attack Reinforcement Learning (RL)

Adversarial Scratches: Deployable Attacks to CNN Classifiers

1 code implementation 20 Apr 2022 Loris Giulivi, Malhar Jere, Loris Rossi, Farinaz Koushanfar, Gabriela Ciocarlie, Briland Hitaj, Giacomo Boracchi

We present Adversarial Scratches: a novel L0 black-box attack, which takes the form of scratches in images, and which possesses much greater deployability than other state-of-the-art attacks.

ReFace: Real-time Adversarial Attacks on Face Recognition Systems

no code implementations 9 Jun 2022 Shehzeen Hussain, Todd Huster, Chris Mesterharm, Paarth Neekhara, Kevin An, Malhar Jere, Harshvardhan Sikka, Farinaz Koushanfar

We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks like PGD on large face recognition datasets.

Face Identification Face Recognition +1

zPROBE: Zero Peek Robustness Checks for Federated Learning

no code implementations ICCV 2023 Zahra Ghodsi, Mojan Javaheripi, Nojan Sheybani, Xinqiao Zhang, Ke Huang, Farinaz Koushanfar

However, keeping the individual updates private allows malicious users to perform Byzantine attacks and degrade the accuracy without being detected.

Federated Learning Privacy Preserving

FastStamp: Accelerating Neural Steganography and Digital Watermarking of Images on FPGAs

no code implementations 26 Sep 2022 Shehzeen Hussain, Nojan Sheybani, Paarth Neekhara, Xinqiao Zhang, Javier Duarte, Farinaz Koushanfar

In this work, we design FastStamp, the first accelerator platform to perform DNN-based steganography and digital watermarking of images on hardware.

Image Steganography

Tailor: Altering Skip Connections for Resource-Efficient Inference

no code implementations 18 Jan 2023 Olivia Weng, Gabriel Marcano, Vladimir Loncar, Alireza Khodamoradi, Nojan Sheybani, Andres Meza, Farinaz Koushanfar, Kristof Denolf, Javier Mauricio Duarte, Ryan Kastner

We argue that while a network's skip connections are needed for the network to learn, they can later be removed or shortened to provide a more hardware efficient implementation with minimal to no accuracy loss.

NetFlick: Adversarial Flickering Attacks on Deep Learning Based Video Compression

no code implementations 4 Apr 2023 Jung-Woo Chang, Nojan Sheybani, Shehzeen Samarah Hussain, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar

Experimental results demonstrate that NetFlick can successfully degrade the performance of video compression frameworks in both digital and physical settings, and can be further extended to attack downstream video classification networks.

Video Classification Video Compression

SureFED: Robust Federated Learning via Uncertainty-Aware Inward and Outward Inspection

no code implementations 4 Aug 2023 Nasimeh Heydaribeni, Ruisi Zhang, Tara Javidi, Cristina Nita-Rotaru, Farinaz Koushanfar

We theoretically prove the robustness of our algorithm against data and model poisoning attacks in a decentralized linear regression setting.

Federated Learning Image Classification +1

SelfVC: Voice Conversion With Iterative Refinement using Self Transformations

no code implementations 14 Oct 2023 Paarth Neekhara, Shehzeen Hussain, Rafael Valle, Boris Ginsburg, Rishabh Ranjan, Shlomo Dubnov, Farinaz Koushanfar, Julian McAuley

In this work, instead of explicitly disentangling attributes with loss terms, we present a framework to train a controllable voice conversion model on entangled speech representations derived from self-supervised learning and speaker verification models.

Self-Supervised Learning Speaker Verification +2

REMARK-LLM: A Robust and Efficient Watermarking Framework for Generative Large Language Models

no code implementations 18 Oct 2023 Ruisi Zhang, Shehzeen Samarah Hussain, Paarth Neekhara, Farinaz Koushanfar

We present REMARK-LLM, a novel, efficient, and robust watermarking framework designed for texts generated by large language models (LLMs).

Retrieval

Magmaw: Modality-Agnostic Adversarial Attacks on Machine Learning-Based Wireless Communication Systems

no code implementations 1 Nov 2023 Jung-Woo Chang, Ke Sun, Nasimeh Heydaribeni, Seira Hidano, Xinyu Zhang, Farinaz Koushanfar

Although there have been a number of adversarial attacks on ML-based wireless systems, the existing methods do not provide a comprehensive view including multi-modality of the source data, common physical layer components, and wireless domain constraints.

LiveTune: Dynamic Parameter Tuning for Training Deep Neural Networks

1 code implementation 28 Nov 2023 Soheil Zibakhsh Shabgahi, Nojan Sheybani, Aiden Tabrizi, Farinaz Koushanfar

Traditional machine learning training is a static process that lacks real-time adaptability of hyperparameters.

LayerCollapse: Adaptive compression of neural networks

no code implementations 29 Nov 2023 Soheil Zibakhsh Shabgahi, Mohammad Sohail Shariff, Farinaz Koushanfar

Handling the ever-increasing scale of contemporary deep learning and transformer-based models poses a significant challenge.

Computational Efficiency Image Classification +3

EmMark: Robust Watermarks for IP Protection of Embedded Quantized Large Language Models

no code implementations 27 Feb 2024 Ruisi Zhang, Farinaz Koushanfar

This paper introduces EmMark, a novel watermarking framework for protecting the intellectual property (IP) of embedded large language models deployed on resource-constrained edge devices.

Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models

1 code implementation 28 Feb 2024 Mingjia Huo, Sai Ashish Somayajula, Youwei Liang, Ruisi Zhang, Farinaz Koushanfar, Pengtao Xie

Large language models generate high-quality responses with potential misinformation, underscoring the need for regulation by distinguishing AI-generated and human-written texts.

Misinformation
