Search Results for author: Gagandeep Singh

Found 49 papers, 27 papers with code

Beyond the Single Neuron Convex Barrier for Neural Network Certification

1 code implementation NeurIPS 2019 Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev

We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks.

A Provable Defense for Deep Residual Networks

1 code implementation 29 Mar 2019 Matthew Mirman, Gagandeep Singh, Martin Vechev

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.

Adversarial Defense · Novel Concepts

Improving LLM Code Generation with Grammar Augmentation

1 code implementation 3 Mar 2024 Shubham Ugare, Tarun Suresh, Hangoo Kang, Sasa Misailovic, Gagandeep Singh

We present SynCode, a novel framework for efficient and general syntactical decoding of code with large language models (LLMs).

Code Generation · valid
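
The core idea behind such grammar-constrained decoding can be sketched in a few lines: at each step, mask out candidate tokens that cannot extend a syntactically viable prefix, then pick the highest-scoring survivor. The toy grammar and the `is_viable_prefix` helper below are illustrative assumptions, not SynCode's actual incremental parser:

```python
import re

# Toy grammar: integer arithmetic expressions such as "1+23*4".
# A real system derives this viability check from a context-free
# grammar via an incremental parser; the regex is a stand-in.
_VIABLE = re.compile(r"^\d+([+*]\d+)*[+*]?$")

def is_viable_prefix(text: str) -> bool:
    """True if `text` can still be extended into a valid expression."""
    return text == "" or bool(_VIABLE.match(text))

def constrained_step(prefix: str, vocab: list[str], scores: dict[str, float]) -> str:
    """One greedy decoding step that masks syntactically dead-end tokens."""
    viable = [t for t in vocab if is_viable_prefix(prefix + t)]
    if not viable:
        raise ValueError("no syntactically viable continuation")
    return max(viable, key=scores.__getitem__)

# "+" and "*" are masked after "1+": "1++" and "1+*" are dead ends,
# so the lower-scoring but valid "2" is chosen.
print(constrained_step("1+", ["2", "+", "*"], {"2": 0.1, "+": 0.9, "*": 0.5}))
```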

BLEND: A Fast, Memory-Efficient, and Accurate Mechanism to Find Fuzzy Seed Matches in Genome Analysis

1 code implementation 16 Dec 2021 Can Firtina, Jisung Park, Mohammed Alser, Jeremie S. Kim, Damla Senol Cali, Taha Shahroodi, Nika Mansouri Ghiasi, Gagandeep Singh, Konstantinos Kanellopoulos, Can Alkan, Onur Mutlu

We introduce BLEND, the first efficient and accurate mechanism that can identify both exact-matching and highly similar seeds, called fuzzy seed matches, with a single lookup of their hash values.
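
The one-lookup idea rests on a hash function under which similar seeds tend to collide. Below is a minimal SimHash-style sketch of that property; BLEND's actual seeding and hashing scheme is considerably more refined, and the `simhash` helper here is purely illustrative:

```python
import hashlib

def simhash(seq: str, k: int = 4, bits: int = 32) -> int:
    """SimHash over a seed's k-mers: seeds that share most of their
    k-mers tend to land in the same bucket, so a single hash-table
    lookup can retrieve fuzzy (approximate) seed matches."""
    counts = [0] * bits
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        h = int.from_bytes(hashlib.blake2b(kmer.encode(), digest_size=4).digest(), "big")
        for b in range(bits):
            counts[b] += 1 if (h >> b) & 1 else -1
    # Majority vote per bit position yields the final hash value.
    return sum(1 << b for b in range(bits) if counts[b] > 0)
```

Two seeds differing in a single base still share most of their k-mers, so their bit-wise majority votes, and usually the resulting hash values, agree.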

RawHash: Enabling Fast and Accurate Real-Time Analysis of Raw Nanopore Signals for Large Genomes

1 code implementation 22 Jan 2023 Can Firtina, Nika Mansouri Ghiasi, Joel Lindegger, Gagandeep Singh, Meryem Banu Cavlak, Haiyu Mao, Onur Mutlu

RawHash achieves an accurate hash-based similarity search via an effective quantization of the raw signals such that signals corresponding to the same DNA content have the same quantized value and, subsequently, the same hash value.

Quantization
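
The quantization step can be illustrated directly: normalize the raw signal, bucket each value coarsely enough that measurement noise stays within a bucket, and hash the bucket sequence. Everything below (bucket count, normalization window, the `quantize_and_hash` name) is an illustrative assumption, not RawHash's exact scheme:

```python
import hashlib
import numpy as np

def quantize_and_hash(signal: np.ndarray, n_levels: int = 8) -> str:
    """Normalize a raw signal chunk, map each value to one of n_levels
    coarse buckets, and hash the bucket sequence; signals produced by
    the same DNA content then tend to share the same hash value."""
    z = (signal - signal.mean()) / (signal.std() + 1e-9)
    # Clamp z-scores to [-3, 3] and spread them over n_levels buckets.
    buckets = np.clip(((z + 3.0) / 6.0 * n_levels).astype(int), 0, n_levels - 1)
    return hashlib.sha1(buckets.astype(np.uint8).tobytes()).hexdigest()
```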

Adversarial Attacks on Probabilistic Autoregressive Forecasting Models

1 code implementation ICML 2020 Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev

We develop an effective method for generating adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values.

Decision Making · Time Series · +1

Certifying Geometric Robustness of Neural Networks

1 code implementation NeurIPS 2019 Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).

Bypassing the Safety Training of Open-Source LLMs with Priming Attacks

1 code implementation 19 Dec 2023 Jason Vega, Isha Chaudhary, Changming Xu, Gagandeep Singh

With the recent surge in popularity of LLMs has come an ever-increasing need for LLM safety training.

Shared Certificates for Neural Network Verification

1 code implementation 1 Sep 2021 Marc Fischer, Christian Sprecher, Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev

We perform an extensive experimental evaluation to demonstrate the effectiveness of shared certificates in reducing the verification cost on a range of datasets and attack specifications on image classifiers including the popular patch and geometric perturbations.

FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices using a Computing Power Aware Scheduler

1 code implementation 26 Sep 2023 Zilinghan Li, Pranshu Chaturvedi, Shilan He, Han Chen, Gagandeep Singh, Volodymyr Kindratenko, E. A. Huerta, Kibaek Kim, Ravi Madduri

Nonetheless, because of the disparity of computing resources among different clients (i.e., device heterogeneity), synchronous federated learning algorithms suffer from degraded efficiency when waiting for straggler clients.

Federated Learning
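
In its simplest form, a computing-power-aware scheduler profiles each client's speed and scales its local-step budget so that all clients finish a round at roughly the same wall-clock time. FedCompass's actual Compass scheduler is more sophisticated (it also handles client grouping and changing speeds); the helper below, including its name and parameters, is only a toy illustration of the idea:

```python
def assign_local_steps(speeds: dict[str, float], base_steps: int = 100) -> dict[str, int]:
    """Give faster clients proportionally more local training steps so
    every client finishes the round at about the same time.
    `speeds` maps client id -> measured local steps per second."""
    slowest = min(speeds.values())
    round_time = base_steps / slowest  # wall-clock budget set by the slowest client
    return {cid: max(1, int(s * round_time)) for cid, s in speeds.items()}

# A client twice as fast performs twice as many local steps per round:
print(assign_local_steps({"edge-a": 2.0, "edge-b": 1.0}))  # {'edge-a': 200, 'edge-b': 100}
```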

Robustness Certification for Point Cloud Models

1 code implementation ICCV 2021 Tobias Lorenz, Anian Ruoss, Mislav Balunović, Gagandeep Singh, Martin Vechev

In this work, we address this challenge and introduce 3DCertify, the first verifier able to certify the robustness of point cloud models.

Scalable Polyhedral Verification of Recurrent Neural Networks

1 code implementation 27 May 2020 Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev

We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and nonlinear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron.

Incremental Verification of Neural Networks

2 code implementations 4 Apr 2023 Shubham Ugare, Debangshu Banerjee, Sasa Misailovic, Gagandeep Singh

Complete verification of deep neural networks (DNNs) can exactly determine whether the DNN satisfies a desired trustworthy property (e.g., robustness, fairness) on an infinite set of inputs or not.

Fairness

TargetCall: Eliminating the Wasted Computation in Basecalling via Pre-Basecalling Filtering

1 code implementation 9 Dec 2022 Meryem Banu Cavlak, Gagandeep Singh, Mohammed Alser, Can Firtina, Joël Lindegger, Mohammad Sadrosadati, Nika Mansouri Ghiasi, Can Alkan, Onur Mutlu

However, for many applications, the majority of reads do not match the reference genome of interest (i.e., target reference) and thus are discarded in later steps of the genomics pipeline, wasting the basecalling computation.

Incremental Randomized Smoothing Certification

1 code implementation 31 May 2023 Shubham Ugare, Tarun Suresh, Debangshu Banerjee, Gagandeep Singh, Sasa Misailovic

We experimentally demonstrate the effectiveness of our approach, showing up to 3x certification speedup over applying randomized smoothing to the approximate model from scratch.
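
For context, the (non-incremental) randomized smoothing certificate that this work accelerates, in the style of Cohen et al., looks roughly as follows; the incremental method reuses information between the original and approximate models rather than re-running this procedure from scratch. A hedged sketch, where `classify` is an assumed black-box classifier:

```python
import numpy as np
from scipy.stats import beta, norm

def certify(classify, x, sigma=0.25, n=1000, alpha=0.001):
    """Randomized smoothing certificate (Cohen et al. style): estimate
    the smoothed classifier's top class under Gaussian noise and return
    a certified L2 radius, or (None, 0.0) to abstain.
    `classify` maps a single input vector to an integer class label."""
    noisy = x[None, :] + sigma * np.random.randn(n, x.size)
    preds = np.array([classify(z) for z in noisy])
    top = int(np.bincount(preds).argmax())
    k = int((preds == top).sum())
    p_lower = beta.ppf(alpha, k, n - k + 1)  # Clopper-Pearson lower bound on P(top)
    if p_lower <= 0.5:
        return None, 0.0                     # abstain: majority not certified
    return top, sigma * norm.ppf(p_lower)    # certified L2 radius
```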

Efficient Reward Poisoning Attacks on Online Deep Reinforcement Learning

1 code implementation 30 May 2022 Yinglun Xu, Qi Zeng, Gagandeep Singh

We study reward poisoning attacks on online deep reinforcement learning (DRL), where the attacker is oblivious to the learning algorithm used by the agent and the dynamics of the environment.

Data Poisoning · reinforcement-learning · +1

Is Watermarking LLM-Generated Code Robust?

1 code implementation 24 Mar 2024 Tarun Suresh, Shubham Ugare, Gagandeep Singh, Sasa Misailovic

We present the first study of the robustness of existing watermarking techniques on Python code generated by large language models.

Probabilistic Trust Intervals for Out of Distribution Detection

1 code implementation 2 Feb 2021 Gagandeep Singh, Deepak Mishra

In this paper, we propose a very simple approach for enhancing the ability of a pretrained network to detect OOD inputs without even altering the original parameter values.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection

Scalable Verification of GNN-based Job Schedulers

1 code implementation 7 Mar 2022 Haoze Wu, Clark Barrett, Mahmood Sharif, Nina Narodytska, Gagandeep Singh

Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics.

Scheduling

COMET: Neural Cost Model Explanation Framework

1 code implementation 14 Feb 2023 Isha Chaudhary, Alex Renda, Charith Mendis, Gagandeep Singh

We generate and compare COMET's explanations for the popular neural cost model Ithemal against those for an accurate CPU-simulation-based cost model, uiCA.

Fast and Effective Robustness Certification

no code implementations NeurIPS 2018 Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
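
DeepZ works in the zonotope abstract domain; the simplest member of the same abstract-interpretation family is interval bound propagation, sketched below to show the overall shape of such certification. Zonotopes track correlations between neurons and are strictly more precise than the plain intervals used here:

```python
import numpy as np

def affine_bounds(l, u, W, b):
    """Soundly propagate the box [l, u] through x -> W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def certified_robust(layers, x, eps, label):
    """Sound-but-incomplete check that every input within L-inf
    distance eps of x is classified as `label`.
    `layers` is a list of (W, b) pairs with ReLUs in between."""
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = affine_bounds(l, u, W, b)
        if i < len(layers) - 1:  # ReLU on all hidden layers
            l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)
    # Certified iff the label's lower bound beats every other upper bound.
    return all(l[label] > u[j] for j in range(len(l)) if j != label)
```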

Robustness Certification with Refinement

no code implementations ICLR 2019 Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.
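
The precise MILP component referred to here typically builds on the standard big-M encoding of a ReLU y = max(x, 0), shown below for pre-activation bounds l ≤ x ≤ u obtained from the scalable over-approximation (a standard encoding, not a claim about this paper's exact formulation):

```latex
% Big-M MILP encoding of y = ReLU(x) with bounds l <= x <= u (l < 0 < u)
% and a binary indicator z (z = 1 iff the unit is active):
y \ge 0, \qquad y \ge x, \qquad y \le u\,z, \qquad y \le x - l\,(1 - z), \qquad z \in \{0, 1\}
```

With z = 0 the constraints force y = 0 and x ≤ 0; with z = 1 they force y = x and x ≥ 0. Tighter bounds l, u from the scalable analysis shrink this encoding's feasible region, which is how over-approximation and MILP complement each other.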

Agile Autotuning of a Transprecision Tensor Accelerator Overlay for TVM Compiler Stack

no code implementations 20 Apr 2020 Dionysios Diamantopoulos, Burkhard Ringlein, Mitra Purandare, Gagandeep Singh, Christoph Hagleitner

Specialized accelerators for tensor operations, such as blocked-matrix operations and multi-dimensional convolutions, have emerged as powerful architecture choices for high-performance deep-learning computing.

Predicting Clinical Outcomes in COVID-19 using Radiomics and Deep Learning on Chest Radiographs: A Multi-Institutional Study

no code implementations 15 Jul 2020 Joseph Bae, Saarthak Kapse, Gagandeep Singh, Rishabh Gattu, Syed Ali, Neal Shah, Colin Marshall, Jonathan Pierce, Tej Phatak, Amit Gupta, Jeremy Green, Nikhil Madan, Prateek Prasanna

Radiomic and DL classification models had mAUCs of 0.78±0.02 and 0.81±0.04, compared with expert-score mAUCs of 0.75±0.02 and 0.79±0.05, for mechanical ventilation requirement and mortality prediction, respectively.

Decision Making · Mortality Prediction

Scaling Polyhedral Neural Network Verification on GPUs

no code implementations 20 Jul 2020 Christoph Müller, François Serre, Gagandeep Singh, Markus Püschel, Martin Vechev

GPUPoly scales to large networks: for example, it can prove the robustness of a 1M-neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards practical verification of real-world neural networks.

Autonomous Driving · Medical Diagnosis

Provably Robust Adversarial Examples

no code implementations ICLR 2022 Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev

We introduce the concept of provably robust adversarial examples for deep neural networks - connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).

Radiomic Deformation and Textural Heterogeneity (R-DepTH) Descriptor to characterize Tumor Field Effect: Application to Survival Prediction in Glioblastoma

no code implementations 12 Mar 2021 Marwa Ismail, Prateek Prasanna, Kaustav Bera, Volodymyr Statsevych, Virginia Hill, Gagandeep Singh, Sasan Partovi, Niha Beig, Sean McGarry, Peter Laviolette, Manmeet Ahluwalia, Anant Madabhushi, Pallavi Tiwari

Our work is based on the rationale that highly aggressive tumors tend to grow uncontrollably, leading to pronounced biomechanical tissue deformations in the normal parenchyma, which, when combined with local morphological differences in the tumor confines on MRI scans, comprehensively capture the tumor field effect.

Survival Prediction

Attention based CNN-LSTM Network for Pulmonary Embolism Prediction on Chest Computed Tomography Pulmonary Angiograms

no code implementations 13 Jul 2021 Sudhir Suman, Gagandeep Singh, Nicole Sakla, Rishabh Gattu, Jeremy Green, Tej Phatak, Dimitris Samaras, Prateek Prasanna

In this study we propose a two-stage attention-based CNN-LSTM network for predicting PE, its associated type (chronic, acute), and corresponding location (left-sided, right-sided, or central) on computed tomography (CT) examinations.

Computed Tomography (CT)

Practical Adversarial Attacks on Brain-Computer Interfaces

no code implementations 29 Sep 2021 Rodolfo Octavio Siller Quintanilla, Xiaying Wang, Michael Hersche, Luca Benini, Gagandeep Singh

We propose new methods to induce denial-of-service attacks and incorporate domain-specific insights and constraints to accomplish two key goals: (i) create smooth adversarial attacks that are physiologically plausible; (ii) consider the realistic case where the attack happens at the origin of signal acquisition and propagates across the human head.

EEG

Language Modelling via Learning to Rank

no code implementations 13 Oct 2021 Arvid Frydenlund, Gagandeep Singh, Frank Rudzicz

We also develop a method using N-grams to create a non-probabilistic teacher which generates the ranks without the need for a pretrained LM.

Knowledge Distillation · Language Modelling · +2
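
A minimal version of such an N-gram rank teacher (here with bigrams, N = 2) just counts continuations in a corpus and ranks candidate next tokens by frequency; the function names below are illustrative, not the paper's API:

```python
from collections import Counter, defaultdict

def bigram_rank_teacher(corpus: list[list[str]]):
    """Build a non-probabilistic teacher that ranks candidate next
    tokens by bigram frequency, standing in for a pretrained LM."""
    follow = defaultdict(Counter)
    for sent in corpus:
        for a, b in zip(sent, sent[1:]):
            follow[a][b] += 1

    def ranks(prev: str) -> list[str]:
        # Tokens that most often follow `prev`, in rank order.
        return [tok for tok, _ in follow[prev].most_common()]

    return ranks

teacher = bigram_rank_teacher([["the", "cat", "sat"],
                               ["the", "dog", "sat"],
                               ["the", "cat", "ran"]])
print(teacher("the"))  # ['cat', 'dog'] -- ranked by corpus frequency
```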

Robust Universal Adversarial Perturbations

no code implementations 22 Jun 2022 Changming Xu, Gagandeep Singh

We further show that by using a set of primitive transformations our method can generalize well to unseen transformations such as fog, JPEG compression, etc.

Provable Defense Against Geometric Transformations

1 code implementation 22 Jul 2022 Rem Yang, Jacob Laurel, Sasa Misailovic, Gagandeep Singh

Geometric image transformations that arise in the real world, such as scaling and rotation, have been shown to easily deceive deep neural networks (DNNs).

Autonomous Driving

LEAPER: Fast and Accurate FPGA-based System Performance Prediction via Transfer Learning

no code implementations 22 Aug 2022 Gagandeep Singh, Dionysios Diamantopoulos, Juan Gómez-Luna, Sander Stuijk, Henk Corporaal, Onur Mutlu

The key idea of LEAPER is to transfer an ML-based performance and resource usage model trained for a low-end edge environment to a new, high-end cloud environment to provide fast and accurate predictions for accelerator implementation.

Design Synthesis · Transfer Learning

Interpreting Robustness Proofs of Deep Neural Networks

no code implementations 31 Jan 2023 Debangshu Banerjee, Avaljot Singh, Gagandeep Singh

In recent years, numerous methods have been developed to formally verify the robustness of deep neural networks (DNNs).

Black-Box Targeted Reward Poisoning Attack Against Online Deep Reinforcement Learning

no code implementations 18 May 2023 Yinglun Xu, Gagandeep Singh

We leverage a general framework and find conditions that ensure an efficient attack under general assumptions about the learning algorithms.

reinforcement-learning

Efficient Two-Phase Offline Deep Reinforcement Learning from Preference Feedback

no code implementations 30 Dec 2023 Yinglun Xu, Gagandeep Singh

Our method ignores such state-action pairs during the second learning phase to achieve higher learning efficiency.

reinforcement-learning

RAMP: Boosting Adversarial Robustness Against Multiple $l_p$ Perturbations

1 code implementation 9 Feb 2024 Enyi Jiang, Gagandeep Singh

For training from scratch, RAMP achieves SOTA union accuracy of 44.6% and relatively good clean accuracy of 81.2% with ResNet-18 against AutoAttack on CIFAR-10.

Adversarial Robustness

Reward Poisoning Attack Against Offline Reinforcement Learning

no code implementations 15 Feb 2024 Yinglun Xu, Rohan Gumaste, Gagandeep Singh

To the best of our knowledge, we propose the first black-box reward poisoning attack in the general offline RL setting.

Offline RL · reinforcement-learning

QuaCer-C: Quantitative Certification of Knowledge Comprehension in LLMs

no code implementations 24 Feb 2024 Isha Chaudhary, Vedaant V. Jain, Gagandeep Singh

Large Language Models (LLMs) have demonstrated impressive performance on several benchmarks.
