no code implementations • 22 May 2023 • Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri, Siddharth Garg
Compared to prior work, INVICTUS is the first solution that uses a mix of RL and search methods jointly with an online out-of-distribution detector to generate synthesis recipes over a wide range of benchmarks.
no code implementations • 22 May 2023 • Jason Blocklove, Siddharth Garg, Ramesh Karri, Hammond Pearce
Commercially-available instruction-tuned Large Language Models (LLMs) such as OpenAI's ChatGPT and Google's Bard claim to be able to produce code in a variety of programming languages; but studies examining them for hardware are still lacking.
no code implementations • 28 Apr 2023 • Pulak Mehta, Gauri Jagatap, Kevin Gallagher, Brian Timmerman, Progga Deb, Siddharth Garg, Rachel Greenstadt, Brendan Dolan-Gavitt
We conclude that creating Deepfakes is a simple enough task for a novice user given adequate tools and time; however, the resulting Deepfakes are not sufficiently real-looking and are unable to completely fool either detection software or human examiners.
no code implementations • 6 Mar 2023 • Animesh Basak Chowdhury, Lilas Alrahis, Luca Collini, Johann Knechtel, Ramesh Karri, Siddharth Garg, Ozgur Sinanoglu, Benjamin Tan
Oracle-less machine learning (ML) attacks have broken various logic locking schemes.
no code implementations • 22 Feb 2023 • Fabrizio Carpi, Sivarama Venkatesan, Jinfeng Du, Harish Viswanathan, Siddharth Garg, Elza Erkip
Downlink massive multiple-input multiple-output (MIMO) precoding algorithms in frequency division duplexing (FDD) systems rely on accurate channel state information (CSI) feedback from users.
no code implementations • 4 Feb 2023 • Federica Granese, Marco Romanelli, Siddharth Garg, Pablo Piantanida
Multi-armed adversarial attacks, in which multiple algorithms and objective loss functions are simultaneously used at evaluation time, have been shown to be highly successful in fooling state-of-the-art adversarial example detectors while requiring no specific side information about the detection mechanism.
no code implementations • 2 Feb 2023 • Akshaj Kumar Veldanda, Ivan Brugere, Sanghamitra Dutta, Alan Mishler, Siddharth Garg
Recent work has sought to train fair models without sensitive attributes on training data.
no code implementations • 16 Dec 2022 • Hao Fu, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
In domain shift analysis, we propose a theorem based on our bound.
1 code implementation • 13 Dec 2022 • Shailja Thakur, Baleegh Ahmad, Zhenxing Fan, Hammond Pearce, Benjamin Tan, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg
Automating hardware design could eliminate a significant amount of human error from the engineering process and lead to fewer bugs.
no code implementations • 13 Dec 2022 • Alireza Sarmadi, Hao Fu, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
As a baseline, in Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train models by sharing raw data.
no code implementations • 29 Jun 2022 • Akshaj Kumar Veldanda, Ivan Brugere, Jiahao Chen, Sanghamitra Dutta, Alan Mishler, Siddharth Garg
We further show that MinDiff optimization is very sensitive to choice of batch size in the under-parameterized regime.
no code implementations • 26 May 2022 • Kang Liu, Di Wu, Yiru Wang, Dan Feng, Benjamin Tan, Siddharth Garg
To characterize the robustness of state-of-the-art learned image compression, we mount white-box and black-box attacks.
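A minimal sketch of what a white-box attack on a learned image codec might look like, assuming a differentiable PyTorch model `codec` that maps an image to its reconstruction; the PGD-style loop, step sizes, and distortion loss are illustrative assumptions, not the paper's exact attack.

```python
import torch

def whitebox_distortion_attack(codec, x, eps=2/255, alpha=0.5/255, steps=20):
    """Craft a small perturbation that maximizes reconstruction error of a
    learned image codec. Illustrative sketch only; `codec` is an assumed
    differentiable model returning a reconstruction of its input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        recon = codec(x_adv)
        loss = torch.nn.functional.mse_loss(recon, x)  # distortion to maximize
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()          # ascent step
            x_adv = torch.clamp(x_adv, x - eps, x + eps)       # stay in eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                      # valid pixel range
        x_adv = x_adv.detach()
    return x_adv
```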
no code implementations • 15 Apr 2022 • Zhongzheng Yuan, Samyak Rawlekar, Siddharth Garg, Elza Erkip, Yao Wang
In this work, we consider a "split computation" system to offload a part of the computation of the YOLO object detection model.
1 code implementation • 5 Apr 2022 • Animesh Basak Chowdhury, Benjamin Tan, Ryan Carey, Tushit Jain, Ramesh Karri, Siddharth Garg
Generating high-quality synthesis transformation sequences ("synthesis recipes") is an important problem in logic synthesis.
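To make the notion of a synthesis recipe concrete, here is a hedged sketch: a recipe is an ordered sequence of standard ABC transformations applied to a circuit's AIG, and the optimization problem is choosing that ordering. The particular sequence and the `abc` invocation below are illustrative assumptions, not recipes produced by the paper.

```python
import subprocess

# One candidate synthesis recipe: an ordered sequence of ABC transformations.
# Searching over such orderings is the optimization problem being addressed.
recipe = ["balance", "rewrite", "refactor", "rewrite -z",
          "balance", "refactor -z", "rewrite -z", "balance"]

def run_recipe(aig_path, recipe, abc_binary="abc"):
    """Apply a recipe to an AIG with ABC and return the reported statistics."""
    script = f"read {aig_path}; strash; " + "; ".join(recipe) + "; print_stats"
    result = subprocess.run([abc_binary, "-c", script],
                            capture_output=True, text=True, check=True)
    return result.stdout
```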
1 code implementation • 4 Feb 2022 • Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde
To reduce PI latency, we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
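A minimal sketch of the underlying idea of selectively linearizing ReLUs via a differentiable per-channel gate; the parameterization and thresholding below are illustrative assumptions rather than the paper's exact algorithm.

```python
import torch
import torch.nn as nn

class GatedReLU(nn.Module):
    """Interpolates between ReLU and identity with a learnable per-channel gate.

    During training the gate is relaxed to [0, 1] so gradients can decide which
    channels keep their (PI-expensive) ReLU; channels whose gate falls below a
    threshold can later be hard-linearized. Illustrative sketch only."""
    def __init__(self, num_channels):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):
        gate = torch.sigmoid(self.logits).view(1, -1, 1, 1)   # per-channel in [0, 1]
        return gate * torch.relu(x) + (1.0 - gate) * x

    def linearized_channels(self, threshold=0.1):
        # Channels with gate near zero behave linearly and need no ReLU in PI.
        return (torch.sigmoid(self.logits) < threshold).nonzero(as_tuple=True)[0]
```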
1 code implementation • 21 Oct 2021 • Animesh Basak Chowdhury, Benjamin Tan, Ramesh Karri, Siddharth Garg
Logic synthesis is a challenging and widely-researched combinatorial optimization problem during integrated circuit (IC) design.
no code implementations • 17 Jun 2021 • Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde
The emergence of deep learning has been accompanied by privacy concerns surrounding users' data and service providers' models.
no code implementations • NeurIPS 2021 • Zahra Ghodsi, Nandan Kumar Jha, Brandon Reagen, Siddharth Garg
In this paper, we rethink the ReLU computation and propose optimizations for PI tailored to properties of neural networks.
no code implementations • 12 Mar 2021 • Zahra Ghodsi, Siva Kumar Sastry Hari, Iuri Frosio, Timothy Tsai, Alejandro Troccoli, Stephen W. Keckler, Siddharth Garg, Anima Anandkumar
Extracting interesting scenarios from real-world data as well as generating failure cases is important for the development and testing of autonomous systems.
no code implementations • 2 Mar 2021 • Nandan Kumar Jha, Zahra Ghodsi, Siddharth Garg, Brandon Reagen
This paper proposes DeepReDuce: a set of optimizations for the judicious removal of ReLUs to reduce private inference latency.
no code implementations • 8 Nov 2020 • Naman Patel, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
We show that by controlling parts of a physical environment in which a pre-trained deep neural network (DNN) is being fine-tuned online, an adversary can launch subtle data poisoning attacks that degrade the performance of the system.
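As a hedged illustration of how such a poisoning attack interacts with online fine-tuning, the sketch below corrupts a small fraction of the labels a deployed model collects from its environment; the corruption rate, target label, and loop structure are assumptions for exposition, not the paper's physical-world attack.

```python
import random
import torch

def finetune_online(model, optimizer, stream, poison_rate=0.05, target_label=0):
    """Online fine-tuning loop in which an adversary controls part of the
    environment and therefore a fraction of the incoming (x, y) pairs.

    Illustrative sketch: the adversary relabels a small fraction of batches,
    gradually degrading the fine-tuned model."""
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in stream:  # stream yields mini-batches gathered in the field
        if random.random() < poison_rate:
            y = torch.full_like(y, target_label)  # adversary-chosen label
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```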
no code implementations • 4 Nov 2020 • Hao Fu, Akshaj Kumar Veldanda, Prashanth Krishnamurthy, Siddharth Garg, Farshad Khorrami
This paper proposes a new defense against neural network backdooring attacks, in which networks are maliciously trained to mispredict in the presence of attacker-chosen triggers.
no code implementations • 23 Oct 2020 • Akshaj Veldanda, Siddharth Garg
Deep neural networks (DNNs) demonstrate superior performance in various fields, including surveillance and security.
no code implementations • 19 Sep 2020 • Kang Liu, Benjamin Tan, Siddharth Garg
Unprecedented data collection and sharing have exacerbated privacy concerns and led to increasing interest in privacy-preserving tools that remove sensitive attributes from images while maintaining useful information for other tasks.
no code implementations • ICML Workshop AML 2021 • Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde
In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.
no code implementations • NeurIPS 2020 • Zahra Ghodsi, Akshaj Veldanda, Brandon Reagen, Siddharth Garg
Machine learning as a service has given rise to privacy concerns surrounding clients' data and providers' models and has catalyzed research in private inference (PI): methods to process inferences without disclosing inputs.
no code implementations • 26 Apr 2020 • Kang Liu, Benjamin Tan, Gaurav Rajavendra Reddy, Siddharth Garg, Yiorgos Makris, Ramesh Karri
Deep learning (DL) offers potential improvements throughout the CAD tool-flow, one promising application being lithographic hotspot detection.
1 code implementation • 19 Feb 2020 • Akshaj Kumar Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg
This paper proposes a novel two-stage defense (NNoculation) against backdoored neural networks (BadNets) that repairs a BadNet both pre-deployment and online, in response to backdoored test inputs encountered in the field.
no code implementations • 25 Jun 2019 • Kang Liu, Hao-Yu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, Siddharth Garg
There is substantial interest in the use of machine learning (ML) based techniques throughout the electronic computer-aided design (CAD) flow, particularly those based on deep learning.
no code implementations • 2 Jul 2018 • Jeff Zhang, Siddharth Garg
FATE proposes two novel ideas: (i) DelayNet, a DNN-based timing model for MAC units; and (ii) a statistical sampling methodology that reduces the number of MAC operations for which timing simulations are performed.
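The statistical sampling idea can be illustrated as follows: rather than running a slow timing simulation for every MAC operation in a layer, simulate only a random sample and extrapolate the timing-error rate. The sample size and the `simulate_mac_delay` hook below are illustrative assumptions, not FATE's exact methodology.

```python
import random

def estimate_timing_error_rate(mac_operands, simulate_mac_delay, clock_period_ns,
                               sample_size=1000, seed=0):
    """Estimate the fraction of MAC operations that violate timing.

    `mac_operands` is a list of (weight, activation, accumulator) tuples and
    `simulate_mac_delay` is an assumed hook around a timing model (e.g., a
    DelayNet-style predictor); only a random sample is simulated."""
    random.seed(seed)
    sample = random.sample(mac_operands, min(sample_size, len(mac_operands)))
    violations = sum(1 for ops in sample
                     if simulate_mac_delay(*ops) > clock_period_ns)
    return violations / len(sample)
```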
3 code implementations • 30 May 2018 • Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg
Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
no code implementations • 11 Feb 2018 • Jeff Zhang, Tianyu Gu, Kanad Basu, Siddharth Garg
Due to their growing popularity and computational cost, deep neural networks (DNNs) are being targeted for hardware acceleration.
no code implementations • 11 Feb 2018 • Jeff Zhang, Kartheek Rangineni, Zahra Ghodsi, Siddharth Garg
Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference.
8 code implementations • 22 Aug 2017 • Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg
These results demonstrate that backdoors in neural networks are both powerful and stealthy, since the behavior of neural networks is difficult to explicate.
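For context, a BadNets-style backdoor is planted simply by stamping a small trigger pattern onto a fraction of training images and relabeling them to an attacker-chosen class; the sketch below shows such a poisoning step, where the trigger shape, location, and poison rate are illustrative choices rather than the paper's exact configuration.

```python
import torch

def poison_batch(images, labels, target_class, poison_frac=0.1, trigger_size=3):
    """Stamp a small white-square trigger in the bottom-right corner of a
    fraction of images and relabel them to the attacker's target class.

    Illustrative BadNets-style poisoning; images are (N, C, H, W) in [0, 1]."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_frac * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -trigger_size:, -trigger_size:] = 1.0  # the trigger patch
    labels[idx] = target_class                            # attacker-chosen label
    return images, labels
```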
no code implementations • NeurIPS 2017 • Zahra Ghodsi, Tianyu Gu, Siddharth Garg
Specifically, SafetyNets develops and implements a specialized interactive proof (IP) protocol for verifiable execution of a class of deep neural networks, i.e., those that can be represented as arithmetic circuits.
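To see why such networks can be expressed as arithmetic circuits, note that a network restricted to integer (quantized) weights, linear layers, and polynomial activations such as squaring uses only additions and multiplications over integers. A hedged sketch of a network in this class follows; the layer sizes and quadratic activation are illustrative of the model class, not SafetyNets' protocol itself.

```python
import numpy as np

def quadratic_activation(x):
    # A polynomial activation keeps every operation an addition or multiplication,
    # so the whole forward pass remains an arithmetic circuit over integers.
    return x * x

def forward_arithmetic_circuit(x, weights):
    """Forward pass of a small network with integer weights and squaring
    activations; every step is expressible in an arithmetic circuit, which is
    what makes interactive-proof-based verification applicable. Sketch only."""
    h = x.astype(np.int64)
    for W in weights[:-1]:
        h = quadratic_activation(W @ h)
    return weights[-1] @ h  # final linear layer (e.g., class scores)

# Example: random integer weights for a two-layer network on a length-8 input.
rng = np.random.default_rng(0)
weights = [rng.integers(-3, 4, size=(16, 8)), rng.integers(-3, 4, size=(10, 16))]
scores = forward_arithmetic_circuit(rng.integers(0, 8, size=8), weights)
```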