1 code implementation • 31 May 2023 • Shubham Ugare, Tarun Suresh, Debangshu Banerjee, Gagandeep Singh, Sasa Misailovic
We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over certifying the approximate model from scratch with randomized smoothing.
no code implementations • 18 May 2023 • Yinglun Xu, Gagandeep Singh
We leverage a general framework and find conditions that ensure an efficient attack under general assumptions about the learning algorithm.
1 code implementation • 4 Apr 2023 • Shubham Ugare, Debangshu Banerjee, Sasa Misailovic, Gagandeep Singh
Complete verification of deep neural networks (DNNs) can exactly determine whether the DNN satisfies a desired trustworthy property (e.g., robustness, fairness) on an infinite set of inputs or not.
1 code implementation • 14 Feb 2023 • Isha Chaudhary, Alex Renda, Charith Mendis, Gagandeep Singh
ML-based program cost models have been shown to yield highly accurate predictions.
no code implementations • 31 Jan 2023 • Debangshu Banerjee, Avaljot Singh, Gagandeep Singh
In recent years numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs).
1 code implementation • 22 Jan 2023 • Can Firtina, Nika Mansouri Ghiasi, Joel Lindegger, Gagandeep Singh, Meryem Banu Cavlak, Haiyu Mao, Onur Mutlu
RawHash achieves an accurate hash-based similarity search via an effective quantization of the raw signals such that signals corresponding to the same DNA content have the same quantized value and, subsequently, the same hash value.
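The quantization-then-hash idea can be illustrated with a minimal sketch (this is not the actual RawHash implementation; the step size and hashing scheme here are hypothetical): nearby raw signal values collapse into the same bucket before hashing, so noisy reads of the same DNA content map to the same hash value.

```python
import hashlib

def quantize(signal, step=0.4):
    """Map each raw sample to a coarse bucket so that nearby signal
    values (same underlying content, different noise) collapse to the
    same quantized sequence."""
    return tuple(int(round(s / step)) for s in signal)

def signal_hash(signal, step=0.4):
    """Hash the quantized sequence; similar signals share a hash."""
    return hashlib.sha1(repr(quantize(signal, step)).encode()).hexdigest()

# Two noisy reads of the same region quantize (and hash) identically,
# while a signal from a different region does not.
a = signal_hash([0.81, 1.62, 0.39])
b = signal_hash([0.79, 1.58, 0.41])
c = signal_hash([2.50, 0.10, 1.90])
```

With this property, a raw signal can be looked up directly in a hash table of reference-genome signals, which is the similarity-search setting the paper targets.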
1 code implementation • 9 Dec 2022 • Meryem Banu Cavlak, Gagandeep Singh, Mohammed Alser, Can Firtina, Joël Lindegger, Mohammad Sadrosadati, Nika Mansouri Ghiasi, Can Alkan, Onur Mutlu
However, for many applications, the majority of reads do not match the reference genome of interest (i.e., the target reference) and thus are discarded in later steps of the genomics pipeline, wasting the basecalling computation.
no code implementations • 22 Aug 2022 • Gagandeep Singh, Dionysios Diamantopoulos, Juan Gómez-Luna, Sander Stuijk, Henk Corporaal, Onur Mutlu
The key idea of LEAPER is to transfer an ML-based performance and resource usage model trained for a low-end edge environment to a new, high-end cloud environment to provide fast and accurate predictions for accelerator implementation.
1 code implementation • 22 Jul 2022 • Rem Yang, Jacob Laurel, Sasa Misailovic, Gagandeep Singh
Geometric image transformations that arise in the real world, such as scaling and rotation, have been shown to easily deceive deep neural networks (DNNs).
1 code implementation • 20 Jul 2022 • Saumya Gupta, Xiaoling Hu, James Kaan, Michael Jin, Mutshipay Mpoy, Katherine Chung, Gagandeep Singh, Mary Saltz, Tahsin Kurc, Joel Saltz, Apostolos Tassiopoulos, Prateek Prasanna, Chao Chen
In this paper, we introduce a novel topological interaction module to encode the topological interactions into a deep neural network.
1 code implementation • 16 Jul 2022 • Juan Gómez-Luna, Yuxin Guo, Sylvan Brocard, Julien Legriel, Remy Cimadomo, Geraldo F. Oliveira, Gagandeep Singh, Onur Mutlu
Our K-Means clustering on PIM is $2.8\times$ and $3.2\times$ faster than state-of-the-art CPU and GPU versions, respectively.
no code implementations • 22 Jun 2022 • Changming Xu, Gagandeep Singh
We further show that by using a set of primitive transformations our method can generalize well to unseen transformations such as fog, JPEG compression, etc.
no code implementations • 13 Jun 2022 • Juan Gómez-Luna, Yuxin Guo, Sylvan Brocard, Julien Legriel, Remy Cimadomo, Geraldo F. Oliveira, Gagandeep Singh, Onur Mutlu
Our goal is to understand the potential of modern general-purpose PIM architectures to accelerate machine learning training.
1 code implementation • 30 May 2022 • Yinglun Xu, Qi Zeng, Gagandeep Singh
We study reward poisoning attacks on online deep reinforcement learning (DRL), where the attacker is oblivious to the learning algorithm used by the agent and the dynamics of the environment.
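The oblivious setting can be conveyed with a toy sketch (an illustration only, not the paper's attack; the perturbation rule and the greedy bandit learner are assumptions): the attacker perturbs only the observed rewards and never inspects the learner's algorithm or the environment dynamics.

```python
def poison(reward, action, target, eps=2.0):
    """Oblivious reward poisoning: boost rewards for the attacker's
    target action and suppress the rest, with no knowledge of the
    learning algorithm being attacked."""
    return reward + eps if action == target else reward - eps

true_reward = [1.0, 0.0]          # arm 0 is genuinely better
target = 1                        # attacker wants the learner on arm 1
est, cnt = [0.0, 0.0], [0, 0]
for t in range(100):
    # try each arm once, then act greedily on (poisoned) estimates
    a = t if t < 2 else max(range(2), key=lambda i: est[i] / cnt[i])
    r = poison(true_reward[a], a, target)
    est[a] += r
    cnt[a] += 1
# the poisoned learner settles on the attacker's target arm
```

Because the perturbation depends only on the observed action and reward, the same rule applies unchanged to any learner, which is the sense in which the attack is oblivious.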
1 code implementation • 16 May 2022 • Mohammed Alser, Joel Lindegger, Can Firtina, Nour Almadhoun, Haiyu Mao, Gagandeep Singh, Juan Gomez-Luna, Onur Mutlu
We hope that these efforts and the challenges we discuss provide a foundation for future work in making genome analysis more intelligent.
1 code implementation • 15 May 2022 • Gagandeep Singh, Rakesh Nadig, Jisung Park, Rahul Bera, Nastaran Hajinazar, David Novo, Juan Gómez-Luna, Sander Stuijk, Henk Corporaal, Onur Mutlu
We introduce Sibyl, the first technique that uses reinforcement learning for data placement in hybrid storage systems.
1 code implementation • 7 Mar 2022 • Haoze Wu, Clark Barrett, Mahmood Sharif, Nina Narodytska, Gagandeep Singh
Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics.
1 code implementation • 18 Jan 2022 • Lei Zhou, Joseph Bae, Huidong Liu, Gagandeep Singh, Jeremy Green, Amit Gupta, Dimitris Samaras, Prateek Prasanna
Well-labeled datasets of chest radiographs (CXRs) are difficult to acquire due to the high cost of annotation.
1 code implementation • 16 Dec 2021 • Can Firtina, Jisung Park, Mohammed Alser, Jeremie S. Kim, Damla Senol Cali, Taha Shahroodi, Nika Mansouri Ghiasi, Gagandeep Singh, Konstantinos Kanellopoulos, Can Alkan, Onur Mutlu
We introduce BLEND, the first efficient and accurate mechanism that can identify both exact-matching and highly similar seeds with a single lookup of their hash values, called fuzzy seed matches.
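A locality-sensitive hash in the SimHash family conveys the flavor of fuzzy seed matching (a simplified sketch, not BLEND's actual mechanism; the k-mer size and bit width are arbitrary choices here): each overlapping k-mer of the seed votes on every output bit, so seeds differing in a few bases tend to produce the same value, enabling a single-lookup match.

```python
import hashlib

def fuzzy_seed_hash(seed, k=3, bits=16):
    """SimHash-style hash: per-bit majority votes from the seed's
    k-mers make the final value tolerant to small edits."""
    votes = [0] * bits
    for i in range(len(seed) - k + 1):
        h = int(hashlib.md5(seed[i:i + k].encode()).hexdigest(), 16)
        for b in range(bits):
            votes[b] += 1 if (h >> b) & 1 else -1
    # majority vote per bit yields the final hash value
    return sum(1 << b for b in range(bits) if votes[b] > 0)
```

Exact-matching seeds trivially collide under this scheme, and highly similar seeds usually do, so one hash-table lookup retrieves both kinds of matches.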
no code implementations • 13 Oct 2021 • Arvid Frydenlund, Gagandeep Singh, Frank Rudzicz
We also develop a method using $N$-grams to create a non-probabilistic teacher which generates the ranks without the need for a pre-trained LM.
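The idea of an $N$-gram teacher that emits rankings rather than probabilities can be sketched with a bigram toy example (the corpus and helper names are hypothetical, not the paper's code):

```python
from collections import Counter, defaultdict

def build_bigram_teacher(corpus):
    """Count next-token frequencies for each context token."""
    table = defaultdict(Counter)
    for sent in corpus:
        for prev, nxt in zip(sent, sent[1:]):
            table[prev][nxt] += 1
    return table

def rank_next(table, context, vocab):
    """Rank candidate next tokens by bigram count, most frequent
    first; only the ordering is used, never a probability."""
    counts = table[context]
    return sorted(vocab, key=lambda w: -counts[w])

corpus = [["the", "cat", "sat"], ["the", "cat", "ran"], ["the", "dog", "sat"]]
teacher = build_bigram_teacher(corpus)
ranking = rank_next(teacher, "the", ["cat", "dog", "sat"])
```

Since only counts are needed, such a teacher can be built directly from the training corpus with no pre-trained language model.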
no code implementations • 29 Sep 2021 • Rodolfo Octavio Siller Quintanilla, Xiaying Wang, Michael Hersche, Luca Benini, Gagandeep Singh
We propose new methods to induce denial-of-service attacks and incorporate domain-specific insights and constraints to accomplish two key goals: (i) create smooth adversarial attacks that are physiologically plausible; (ii) consider the realistic case where the attack happens at the origin of signal acquisition and propagates across the human head.
1 code implementation • 1 Sep 2021 • Marc Fischer, Christian Sprecher, Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev
We perform an extensive experimental evaluation to demonstrate the effectiveness of shared certificates in reducing the verification cost on a range of datasets and attack specifications on image classifiers including the popular patch and geometric perturbations.
no code implementations • 18 Jul 2021 • Aishik Konwer, Joseph Bae, Gagandeep Singh, Rishabh Gattu, Syed Ali, Jeremy Green, Tej Phatak, Prateek Prasanna
This vector is used as an input to a decoder module to predict patch severity grades at a future timepoint.
no code implementations • 13 Jul 2021 • Sudhir Suman, Gagandeep Singh, Nicole Sakla, Rishabh Gattu, Jeremy Green, Tej Phatak, Dimitris Samaras, Prateek Prasanna
In this study we propose a two-stage attention-based CNN-LSTM network for predicting PE, its associated type (chronic, acute) and corresponding location (left-sided, right-sided, or central) on computed tomography (CT) examinations.
1 code implementation • ICCV 2021 • Tobias Lorenz, Anian Ruoss, Mislav Balunović, Gagandeep Singh, Martin Vechev
In this work, we address this challenge and introduce 3DCertify, the first verifier able to certify the robustness of point cloud models.
no code implementations • 12 Mar 2021 • Marwa Ismail, Prateek Prasanna, Kaustav Bera, Volodymyr Statsevych, Virginia Hill, Gagandeep Singh, Sasan Partovi, Niha Beig, Sean McGarry, Peter Laviolette, Manmeet Ahluwalia, Anant Madabhushi, Pallavi Tiwari
Our work is based on the rationale that highly aggressive tumors tend to grow uncontrollably, leading to pronounced biomechanical tissue deformations in the normal parenchyma, which, when combined with local morphological differences in the tumor confines on MRI scans, comprehensively capture the tumor field effect.
no code implementations • 5 Mar 2021 • Mark Niklas Müller, Gleb Makarchuk, Gagandeep Singh, Markus Püschel, Martin Vechev
Formal verification of neural networks is critical for their safe adoption in real-world applications.
1 code implementation • 2 Feb 2021 • Gagandeep Singh, Deepak Mishra
In this paper, we propose a very simple approach for enhancing the ability of a pretrained network to detect OOD inputs without even altering the original parameter values.
no code implementations • ICLR 2022 • Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev
We introduce the concept of provably robust adversarial examples for deep neural networks - connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).
no code implementations • 20 Jul 2020 • Christoph Müller, François Serre, Gagandeep Singh, Markus Püschel, Martin Vechev
GPUPoly scales to large networks: for example, it can prove the robustness of a 1M-neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards practical verification of real-world neural networks.
no code implementations • 15 Jul 2020 • Joseph Bae, Saarthak Kapse, Gagandeep Singh, Rishabh Gattu, Syed Ali, Neal Shah, Colin Marshall, Jonathan Pierce, Tej Phatak, Amit Gupta, Jeremy Green, Nikhil Madan, Prateek Prasanna
Radiomic and DL classification models had mAUCs of 0.78±0.02 and 0.81±0.04, compared with expert-score mAUCs of 0.75±0.02 and 0.79±0.05, for mechanical ventilation requirement and mortality prediction, respectively.
no code implementations • 16 Jun 2020 • Marwa Ismail, Virginia Hill, Volodymyr Statsevych, Evan Mason, Ramon Correa, Prateek Prasanna, Gagandeep Singh, Kaustav Bera, Rajat Thawani, Anant Madabhushi, Manmeet Ahluwalia, Pallavi Tiwari
In this study, 74 pre-treatment Glioblastoma MRI scans with PsP (33) and tumor recurrence (41) were analyzed.
1 code implementation • 27 May 2020 • Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev
We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and nonlinear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron.
no code implementations • 20 Apr 2020 • Dionysios Diamantopoulos, Burkhard Ringlein, Mitra Purandare, Gagandeep Singh, Christoph Hagleitner
Specialized accelerators for tensor operations, such as blocked matrix operations and multi-dimensional convolutions, have emerged as powerful architecture choices for high-performance deep learning computing.
1 code implementation • ICML 2020 • Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev
We develop an effective generation of adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values.
1 code implementation • NeurIPS 2019 • Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev
We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks.
1 code implementation • NeurIPS 2019 • Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev
The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).
no code implementations • 25 Sep 2019 • Wonryong Ryou, Mislav Balunovic, Gagandeep Singh, Martin Vechev
We present the first end-to-end verifier of audio classifiers.
no code implementations • ICLR 2019 • Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.
1 code implementation • 29 Mar 2019 • Matthew Mirman, Gagandeep Singh, Martin Vechev
We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.
no code implementations • NeurIPS 2018 • Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev
We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
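Abstract-interpretation-based certification can be illustrated with the simplest abstract domain, intervals (DeepZ itself uses the more precise zonotope domain; this sketch, with a made-up toy layer, only conveys the sound bound-propagation idea):

```python
def affine_interval(lo, hi, W, b):
    """Propagate an interval box exactly through y = W x + b by
    picking, per weight sign, the endpoint that minimizes/maximizes."""
    out_lo, out_hi = [], []
    for i in range(len(W)):
        l = h = b[i]
        for j, w in enumerate(W[i]):
            if w >= 0:
                l += w * lo[j]; h += w * hi[j]
            else:
                l += w * hi[j]; h += w * lo[j]
        out_lo.append(l); out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval bounds to bounds."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Certify a toy 2-input, 2-output layer on the input box [-0.1, 0.1]^2.
lo, hi = affine_interval([-0.1, -0.1], [0.1, 0.1],
                         [[1.0, -1.0], [0.5, 0.5]], [0.2, -0.3])
lo, hi = relu_interval(lo, hi)
```

Every concrete input in the box is guaranteed to land inside the output bounds, so any property proved on the bounds (e.g., a fixed top class) holds for the entire input region; richer domains such as zonotopes tighten these bounds by tracking correlations between neurons.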