Search Results for author: Siddharth Garg

Found 35 papers, 7 papers with code

INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search

no code implementations22 May 2023 Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, Siddharth Garg

Compared to prior work, INVICTUS is the first solution that uses a mix of RL and search methods jointly with an online out-of-distribution detector to generate synthesis recipes over a wide range of benchmarks.

Reinforcement Learning (RL)

Chip-Chat: Challenges and Opportunities in Conversational Hardware Design

no code implementations22 May 2023 Jason Blocklove, Siddharth Garg, Ramesh Karri, Hammond Pearce

Commercially-available instruction-tuned Large Language Models (LLMs) such as OpenAI's ChatGPT and Google's Bard claim to be able to produce code in a variety of programming languages; but studies examining them for hardware are still lacking.

Can deepfakes be created by novice users?

no code implementations28 Apr 2023 Pulak Mehta, Gauri Jagatap, Kevin Gallagher, Brian Timmerman, Progga Deb, Siddharth Garg, Rachel Greenstadt, Brendan Dolan-Gavitt

We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and are unable to completely fool detection software or human examiners.

DeepFake Detection Face Swapping

Precoding-oriented Massive MIMO CSI Feedback Design

no code implementations22 Feb 2023 Fabrizio Carpi, Sivarama Venkatesan, Jinfeng Du, Harish Viswanathan, Siddharth Garg, Elza Erkip

Downlink massive multiple-input multiple-output (MIMO) precoding algorithms in frequency division duplexing (FDD) systems rely on accurate channel state information (CSI) feedback from users.

A Minimax Approach Against Multi-Armed Adversarial Attacks Detection

no code implementations4 Feb 2023 Federica Granese, Marco Romanelli, Siddharth Garg, Pablo Piantanida

Multi-armed adversarial attacks, in which multiple algorithms and objective loss functions are simultaneously used at evaluation time, have been shown to be highly successful in fooling state-of-the-art adversarial examples detectors while requiring no specific side information about the detection mechanism.

Privacy-Preserving Collaborative Learning through Feature Extraction

no code implementations13 Dec 2022 Alireza Sarmadi, Hao Fu, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami

As a baseline, in Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train models by sharing raw data.

Fraud Detection Inference Attack +2

Fairness via In-Processing in the Over-parameterized Regime: A Cautionary Tale

no code implementations29 Jun 2022 Akshaj Kumar Veldanda, Ivan Brugere, Jiahao Chen, Sanghamitra Dutta, Alan Mishler, Siddharth Garg

We further show that MinDiff optimization is very sensitive to choice of batch size in the under-parameterized regime.

MALICE: Manipulation Attacks on Learned Image ComprEssion

no code implementations26 May 2022 Kang Liu, Di Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg

To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks.
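The white-box attacks mentioned above follow the standard gradient-sign recipe for attacking a differentiable model; a minimal, hypothetical sketch of one such perturbation step (not the paper's code; the step size and [0, 1] pixel range are illustrative assumptions):

```python
import numpy as np

def fgsm_step(image, grad, eps=0.01):
    """One white-box, FGSM-style step: nudge the input along the sign of
    the gradient of the attack objective, then clip back to the valid
    pixel range (eps and the [0, 1] range are illustrative)."""
    return np.clip(image + eps * np.sign(grad), 0.0, 1.0)
```

A black-box variant would estimate `grad` from queries instead of computing it directly.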

Image Compression Image Reconstruction

Feature Compression for Rate Constrained Object Detection on the Edge

no code implementations15 Apr 2022 Zhongzheng Yuan, Samyak Rawlekar, Siddharth Garg, Elza Erkip, Yao Wang

In this work, we consider a "split computation" system to offload a part of the computation of the YOLO object detection model.

Feature Compression object-detection +1

Too Big to Fail? Active Few-Shot Learning Guided Logic Synthesis

1 code implementation5 Apr 2022 Animesh Basak Chowdhury, Benjamin Tan, Ryan Carey, Tushit Jain, Ramesh Karri, Siddharth Garg

Generating sub-optimal synthesis transformation sequences ("synthesis recipe") is an important problem in logic synthesis.

BIG-bench Machine Learning Few-Shot Learning

Selective Network Linearization for Efficient Private Inference

1 code implementation4 Feb 2022 Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde

To reduce PI latency, we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
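Selective linearization of this kind can be pictured with a per-ReLU gate that interpolates between ReLU and identity; gates are trained alongside the network, and low-gate ReLUs are then replaced by identity. A minimal sketch with illustrative names (not the paper's implementation):

```python
import numpy as np

def gated_relu(x, c):
    """c in [0, 1] gates the nonlinearity: c = 1 keeps the ReLU,
    c = 0 replaces it with the identity (a 'linearized' ReLU)."""
    return c * np.maximum(x, 0.0) + (1.0 - c) * x

def linearize_mask(gates, threshold=0.5):
    """After training, ReLUs whose gate fell below the threshold are
    permanently replaced by identity (threshold is illustrative)."""
    return gates < threshold
```

In private inference, each surviving ReLU costs expensive cryptographic protocol rounds, which is why trimming them reduces latency.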

OpenABC-D: A Large-Scale Dataset For Machine Learning Guided Integrated Circuit Synthesis

1 code implementation21 Oct 2021 Animesh Basak Chowdhury, Benjamin Tan, Ramesh Karri, Siddharth Garg

Logic synthesis is a challenging and widely-researched combinatorial optimization problem during integrated circuit (IC) design.

Benchmarking BIG-bench Machine Learning +1

Sphynx: ReLU-Efficient Network Design for Private Inference

no code implementations17 Jun 2021 Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde

The emergence of deep learning has been accompanied by privacy concerns surrounding users' data and service providers' models.

Circa: Stochastic ReLUs for Private Deep Learning

no code implementations NeurIPS 2021 Zahra Ghodsi, Nandan Kumar Jha, Brandon Reagen, Siddharth Garg

In this paper we re-think the ReLU computation and propose optimizations for PI tailored to properties of neural networks.

Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles

no code implementations12 Mar 2021 Zahra Ghodsi, Siva Kumar Sastry Hari, Iuri Frosio, Timothy Tsai, Alejandro Troccoli, Stephen W. Keckler, Siddharth Garg, Anima Anandkumar

Extracting interesting scenarios from real-world data as well as generating failure cases is important for the development and testing of autonomous systems.

Autonomous Vehicles

DeepReDuce: ReLU Reduction for Fast Private Inference

no code implementations2 Mar 2021 Nandan Kumar Jha, Zahra Ghodsi, Siddharth Garg, Brandon Reagen

This paper proposes DeepReDuce: a set of optimizations for the judicious removal of ReLUs to reduce private inference latency.

Bait and Switch: Online Training Data Poisoning of Autonomous Driving Systems

no code implementations8 Nov 2020 Naman Patel, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami

We show that by controlling parts of a physical environment in which a pre-trained deep neural network (DNN) is being fine-tuned online, an adversary can launch subtle data poisoning attacks that degrade the performance of the system.

Autonomous Driving Data Poisoning

Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection

no code implementations4 Nov 2020 Hao Fu, Akshaj Kumar Veldanda, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami

This paper proposes a new defense against neural network backdooring attacks that are maliciously trained to mispredict in the presence of attacker-chosen triggers.

Anomaly Detection Data Augmentation

On Evaluating Neural Network Backdoor Defenses

no code implementations23 Oct 2020 Akshaj Veldanda, Siddharth Garg

Deep neural networks (DNNs) demonstrate superior performance in various fields, including scrutiny and security.

Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images

no code implementations19 Sep 2020 Kang Liu, Benjamin Tan, Siddharth Garg

Unprecedented data collection and sharing have exacerbated privacy concerns and led to increasing interest in privacy-preserving tools that remove sensitive attributes from images while maintaining useful information for other tasks.

Facial Expression Recognition (FER) Privacy Preserving

CryptoNAS: Private Inference on a ReLU Budget

no code implementations NeurIPS 2020 Zahra Ghodsi, Akshaj Veldanda, Brandon Reagen, Siddharth Garg

Machine learning as a service has given rise to privacy concerns surrounding clients' data and providers' models and has catalyzed research in private inference (PI): methods to process inferences without disclosing inputs.

Bias Busters: Robustifying DL-based Lithographic Hotspot Detectors Against Backdooring Attacks

no code implementations26 Apr 2020 Kang Liu, Benjamin Tan, Gaurav Rajavendra Reddy, Siddharth Garg, Yiorgos Makris, Ramesh Karri

Deep learning (DL) offers potential improvements throughout the CAD tool-flow, one promising application being lithographic hotspot detection.

Data Augmentation

NNoculation: Catching BadNets in the Wild

1 code implementation19 Feb 2020 Akshaj Kumar Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg

This paper proposes a novel two-stage defense (NNoculation) against backdoored neural networks (BadNets) that repairs a BadNet both pre-deployment and online in response to backdoored test inputs encountered in the field.

Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection

no code implementations25 Jun 2019 Kang Liu, Hao-Yu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, Siddharth Garg

There is substantial interest in the use of machine learning (ML) based techniques throughout the electronic computer-aided design (CAD) flow, particularly those based on deep learning.

FATE: Fast and Accurate Timing Error Prediction Framework for Low Power DNN Accelerator Design

no code implementations2 Jul 2018 Jeff Zhang, Siddharth Garg

FATE proposes two novel ideas: (i) DelayNet, a DNN based timing model for MAC units; and (ii) a statistical sampling methodology that reduces the number of MAC operations for which timing simulations are performed.
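The statistical sampling idea in (ii) amounts to estimating the timing-error rate from a random subset of MAC operations instead of simulating every one; a toy sketch (function and parameter names are illustrative, not FATE's code):

```python
import numpy as np

def sampled_error_rate(mac_delays, clock_period, n_samples=1000, seed=0):
    """Estimate the fraction of MAC operations whose delay exceeds the
    clock period from a random sample, avoiding a full timing simulation
    of every operation."""
    rng = np.random.default_rng(seed)
    n = min(n_samples, len(mac_delays))
    sample = rng.choice(mac_delays, size=n, replace=False)
    return float(np.mean(sample > clock_period))
```

The sample mean converges to the true error rate as `n_samples` grows, so a small fraction of simulated operations can suffice for an accurate estimate.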

General Classification

Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks

3 code implementations30 May 2018 Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg

Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
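Fine-Pruning combines pruning of neurons that stay dormant on clean inputs with subsequent fine-tuning on clean data; the pruning half can be sketched as follows (shapes, the pruning fraction, and names are illustrative):

```python
import numpy as np

def prune_dormant(clean_activations, weights, frac=0.4):
    """Zero the outgoing weights of the `frac` of neurons with the lowest
    mean activation on clean inputs; backdoor-related neurons tend to be
    dormant on clean data. Fine-tuning on clean data would follow."""
    mean_act = clean_activations.mean(axis=0)   # (num_neurons,)
    k = int(frac * mean_act.size)
    dormant = np.argsort(mean_act)[:k]
    pruned = weights.copy()
    pruned[dormant, :] = 0.0   # rows hold each neuron's outgoing weights
    return pruned, dormant
```

The follow-up fine-tuning step recovers clean accuracy lost to pruning while further suppressing any residual backdoor behavior.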

Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator

no code implementations11 Feb 2018 Jeff Zhang, Tianyu Gu, Kanad Basu, Siddharth Garg

Due to their growing popularity and computational cost, deep neural networks (DNNs) are being targeted for hardware acceleration.

General Classification

ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Neural Network Accelerators

no code implementations11 Feb 2018 Jeff Zhang, Kartheek Rangineni, Zahra Ghodsi, Siddharth Garg

Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference.

General Classification

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain

8 code implementations22 Aug 2017 Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg

These results demonstrate that backdoors in neural networks are both powerful and---because the behavior of neural networks is difficult to explicate---stealthy.
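Backdoors of the kind BadNets demonstrates are implanted by poisoning training data with a small trigger pattern and an attacker-chosen label; a toy, hypothetical version of the stamping step (patch size and pixel value are illustrative):

```python
import numpy as np

def poison(images, labels, target_label, patch=3, value=1.0):
    """BadNets-style poisoning: stamp a small bright patch in the
    bottom-right corner of each image and relabel it with the attacker's
    target class. Images are (N, H, W) arrays in [0, 1]."""
    poisoned = images.copy()
    poisoned[:, -patch:, -patch:] = value
    return poisoned, np.full_like(labels, target_label)
```

A network trained on a mix of clean and poisoned samples behaves normally on clean inputs but mispredicts the target class whenever the trigger is present, which is what makes the attack stealthy.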

BIG-bench Machine Learning

SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud

no code implementations NeurIPS 2017 Zahra Ghodsi, Tianyu Gu, Siddharth Garg

Specifically, SafetyNets develops and implements a specialized interactive proof (IP) protocol for verifiable execution of a class of deep neural networks, i.e., those that can be represented as arithmetic circuits.

Speech Recognition
