Search Results for author: Sumit Kumar Jha

Found 17 papers, 2 papers with code

Circuit Partitioning Using Large Language Models for Quantum Compilation and Simulations

no code implementations · 12 May 2025 · Pranav Sinha, Sumit Kumar Jha, Sunny Raj

Quantum circuit compilation algorithms attempt to minimize the number of noisy gates when mapping quantum algorithms onto quantum hardware, but they face computational challenges that restrict their application to circuits with no more than 5-6 qubits, making it necessary to partition large circuits before applying noisy-gate minimization algorithms.

Jailbreaking Large Language Models with Symbolic Mathematics

no code implementations · 17 Sep 2024 · Emet Bethany, Mazal Bethany, Juan Arturo Nolazco Flores, Sumit Kumar Jha, Peyman Najafirad

Recent advancements in AI safety have led to increased efforts in training and red-teaming large language models (LLMs) to mitigate unsafe content generation.

Red Teaming

AutoSafeCoder: A Multi-Agent Framework for Securing LLM Code Generation through Static Analysis and Fuzz Testing

1 code implementation · 16 Sep 2024 · Ana Nunez, Nafis Tanveer Islam, Sumit Kumar Jha, Peyman Najafirad

Recent advancements in automatic code generation using large language models (LLMs) have brought us closer to fully automated secure software development.

Code Generation, Program Synthesis

Multi-Stain Multi-Level Convolutional Network for Multi-Tissue Breast Cancer Image Segmentation

no code implementations · 9 Jun 2024 · Akash Modi, Sumit Kumar Jha, Purnendu Mishra, Rajiv Kumar, Kiran Aatre, Gursewak Singh, Shubham Mathur

To assess stain and scanner invariance, our model was evaluated on 23,000 patches from a completely new stain (Hematoxylin and Eosin) acquired with a completely new scanner (Motic) at a different lab.

Image Segmentation, Semantic Segmentation

Towards a Game-theoretic Understanding of Explanation-based Membership Inference Attacks

no code implementations · 10 Apr 2024 · Kavita Kumari, Murtuza Jadliwala, Sumit Kumar Jha, Anindya Maiti

By means of a comprehensive set of simulations of the proposed game model, we assess the factors that can impact the ability of an adversary to launch membership inference attacks (MIAs) in such repeated interaction settings.

Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive Synthesis using Large Language Models and Satisfiability Solving

no code implementations · 28 Sep 2023 · Sumit Kumar Jha, Susmit Jha, Patrick Lincoln, Nathaniel D. Bastian, Alvaro Velasquez, Rickard Ewetz, Sandeep Neema

We posit that satisfiability modulo theories (SMT) solvers can be used as deductive reasoning engines to analyze the solutions generated by the LLMs, produce counterexamples when those solutions are incorrect, and provide that feedback to the LLMs by exploiting the dialog capability of instruct-trained LLMs.
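
The counterexample-guided loop described above can be sketched in a few lines; the block below uses the Z3 SMT solver against a toy specification and a hypothetical `query_llm` helper standing in for the instruct-trained LLM, so it illustrates the control flow rather than the paper's actual pipeline.

```python
# Hedged sketch of a counterexample-guided loop: an LLM proposes a candidate,
# an SMT solver (Z3) searches for a counterexample, and the counterexample is
# appended to the next prompt.  `query_llm` is a hypothetical placeholder.
from z3 import Int, Solver, sat

def query_llm(prompt: str) -> int:
    """Hypothetical stand-in for an instruct-tuned LLM call; returns a fixed guess."""
    return 0

def find_counterexample(candidate: int):
    """Ask Z3 for an x in [0, 10] on which the candidate violates the toy spec
    'candidate + x must be non-negative'; return None if the candidate is verified."""
    x = Int("x")
    solver = Solver()
    solver.add(x >= 0, x <= 10, candidate + x < 0)  # negation of the specification
    if solver.check() == sat:
        return solver.model()[x]
    return None

prompt = "Propose an integer offset so that offset + x >= 0 for every x in [0, 10]."
for _ in range(5):
    candidate = query_llm(prompt)
    cex = find_counterexample(candidate)
    if cex is None:
        print("verified candidate:", candidate)
        break
    # Feed the counterexample back through the dialog, as the abstract describes.
    prompt += f"\nYour proposal {candidate} fails when x = {cex}; please revise."
```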

Hallucination, Question Answering, +1

Neural Stochastic Differential Equations for Robust and Explainable Analysis of Electromagnetic Unintended Radiated Emissions

no code implementations · 27 Sep 2023 · Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Alvaro Velasquez

We provide an empirical demonstration of the fragility of ResNet-like models to Gaussian noise perturbations: model performance deteriorates sharply, and the F1-score drops to a near-insignificant 0.008 under Gaussian noise with a standard deviation of only 0.5.
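
As a rough illustration of this kind of experiment, the sketch below evaluates a classifier's F1-score on inputs perturbed with Gaussian noise of a chosen standard deviation; the model and data are placeholders, not the unintended-radiated-emissions setup from the paper.

```python
# Hedged sketch: measure how a classifier's F1-score degrades when Gaussian
# noise of a given standard deviation is added to its inputs.
import torch
from sklearn.metrics import f1_score

def f1_under_noise(model, inputs, labels, sigma):
    """Macro F1 on inputs perturbed with zero-mean Gaussian noise of std sigma."""
    noisy = inputs + sigma * torch.randn_like(inputs)
    with torch.no_grad():
        preds = model(noisy).argmax(dim=1)
    return f1_score(labels.numpy(), preds.numpy(), average="macro")

# Toy stand-ins: a linear classifier and random features in place of real data.
model = torch.nn.Linear(16, 3)
inputs, labels = torch.randn(256, 16), torch.randint(0, 3, (256,))
for sigma in (0.0, 0.1, 0.5):
    print(f"sigma={sigma}: macro F1 = {f1_under_noise(model, inputs, labels, sigma):.3f}")
```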

Attribute, Interpretable Machine Learning

A Game-theoretic Understanding of Repeated Explanations in ML Models

no code implementations · 5 Feb 2022 · Kavita Kumari, Murtuza Jadliwala, Sumit Kumar Jha, Anindya Maiti

This paper uses game theory to formally model the strategic repeated interactions between a system, comprising a machine learning (ML) model and an associated explanation method, and an end-user who is seeking a prediction/label and its explanation for a query/input.

Protein Folding Neural Networks Are Not Robust

no code implementations · 9 Sep 2021 · Sumit Kumar Jha, Arvind Ramanathan, Rickard Ewetz, Alvaro Velasquez, Susmit Jha

We define the robustness measure for the predicted structure of a protein sequence to be the inverse of the root-mean-square distance (RMSD) between the predicted structure and the structure predicted for its adversarially perturbed sequence.
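
Since the measure is stated explicitly, it can be written down directly; the sketch below computes it with NumPy for two aligned (N, 3) coordinate arrays, leaving out whatever structural alignment the paper performs before taking the RMSD.

```python
# Hedged sketch of the robustness measure: the inverse of the RMSD between the
# structure predicted for a sequence and the structure predicted for its
# adversarially perturbed sequence.  Assumes aligned (N, 3) coordinate arrays.
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """Root-mean-square distance between two aligned (N, 3) coordinate sets."""
    return float(np.sqrt(np.mean(np.sum((coords_a - coords_b) ** 2, axis=1))))

def robustness(original: np.ndarray, perturbed: np.ndarray) -> float:
    """Robustness = 1 / RMSD(structure of sequence, structure of perturbed sequence)."""
    return 1.0 / rmsd(original, perturbed)

# Toy example: random coordinates standing in for two predicted structures.
a = np.random.rand(50, 3)
b = a + 0.01 * np.random.randn(50, 3)
print(f"RMSD = {rmsd(a, b):.4f}, robustness = {robustness(a, b):.2f}")
```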

Adversarial Attack, Protein Folding

CrossedWires: A Dataset of Syntactically Equivalent but Semantically Disparate Deep Learning Models

1 code implementation · 29 Aug 2021 · Max Zvyagin, Thomas Brettin, Arvind Ramanathan, Sumit Kumar Jha

Currently, our ability to build standardized deep learning models is limited by the lack of a suite of neural network and corresponding training hyperparameter benchmarks that expose differences between existing deep learning frameworks.

Deep Learning, Hyperparameter Optimization

Robust Ensembles of Neural Networks using Itô Processes

no code implementations · 1 Jan 2021 · Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Alvaro Velasquez

We exploit this connection and the theory of stochastic dynamical systems to construct a novel ensemble of Itô processes as a new deep learning representation that is more robust than classical residual networks.
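
One way to picture the connection is that a residual block is an Euler step of an ODE, and adding a diffusion term turns it into an Euler-Maruyama step of an Itô process; the PyTorch sketch below follows that idea and averages a few sampled trajectories into an ensemble. It is an illustrative toy under those assumptions, not the architecture from the paper.

```python
# Hedged sketch: x_{t+dt} = x_t + drift(x_t)*dt + sigma*sqrt(dt)*noise is an
# Euler-Maruyama step of an Ito process; stacking such blocks and averaging
# several stochastic trajectories gives a simple ensemble.
import torch
import torch.nn as nn

class ItoBlock(nn.Module):
    def __init__(self, dim, dt=0.1, sigma=0.05):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())
        self.dt, self.sigma = dt, sigma

    def forward(self, x):
        noise = torch.randn_like(x)
        return x + self.drift(x) * self.dt + self.sigma * (self.dt ** 0.5) * noise

class ItoEnsemble(nn.Module):
    def __init__(self, dim, depth=4, samples=8):
        super().__init__()
        self.blocks = nn.Sequential(*[ItoBlock(dim) for _ in range(depth)])
        self.head, self.samples = nn.Linear(dim, 10), samples

    def forward(self, x):
        # Average predictions over several sampled trajectories of the process.
        logits = torch.stack([self.head(self.blocks(x)) for _ in range(self.samples)])
        return logits.mean(dim=0)

model = ItoEnsemble(dim=16)
print(model(torch.randn(4, 16)).shape)  # torch.Size([4, 10])
```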

An Extension of Fano's Inequality for Characterizing Model Susceptibility to Membership Inference Attacks

no code implementations · 17 Sep 2020 · Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, Laura L. Pullum, Ananthram Swami

We present a new extension of Fano's inequality and employ it to theoretically establish that the probability of success for a membership inference attack on a deep neural network can be bounded using the mutual information between its inputs and its activations.
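
For orientation, the classical form of Fano's inequality that the paper extends is shown below (with base-2 logarithms); the paper's actual extension and the resulting MIA bound are not reproduced here.

```latex
% Classical Fano's inequality (the paper's extension is not reproduced here).
% For a discrete secret X, an observation Y, an estimator \hat{X}(Y), and
% error probability P_e = \Pr[\hat{X} \neq X], with base-2 logarithms:
\begin{align}
  H_b(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr)
      &\ge H(X \mid Y) = H(X) - I(X; Y), \\
  \text{hence}\qquad
  P_e &\ge \frac{H(X) - I(X; Y) - 1}{\log |\mathcal{X}|}.
\end{align}
% The attack's success probability 1 - P_e is therefore controlled by the
% mutual information between what is hidden and what the adversary observes.
```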

Inference Attack, Membership Inference Attack

Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics

no code implementations · 11 Sep 2020 · Jason W. Bentley, Daniel Gibney, Gary Hoppenworth, Sumit Kumar Jha

We demonstrate how a target model's generalization gap leads directly to an effective deterministic black box membership inference attack (MIA).
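
A common way to make this concrete is the loss-threshold rule sketched below: an example is declared a training member when its loss falls below a threshold placed inside the train/test loss gap. This is a hedged illustration of that style of deterministic, black-box attack, not necessarily the exact construction in the paper.

```python
# Hedged sketch of a generalization-gap-driven membership inference attack:
# flag an example as a training member when its loss is below a threshold
# placed between the average training loss and the average test loss.
# The losses below are synthetic placeholders for a real target model.
import numpy as np

def choose_threshold(train_losses, test_losses):
    """Put the decision threshold midway inside the generalization gap."""
    return (np.mean(train_losses) + np.mean(test_losses)) / 2.0

def is_member(loss, threshold):
    """Deterministic black-box rule: low loss -> predicted training member."""
    return loss < threshold

rng = np.random.default_rng(0)
train_losses = rng.normal(0.2, 0.05, 1000)  # losses on training members
test_losses = rng.normal(0.8, 0.20, 1000)   # losses on held-out non-members
tau = choose_threshold(train_losses, test_losses)

member_rate = np.mean([is_member(l, tau) for l in train_losses])
nonmember_rate = np.mean([not is_member(l, tau) for l in test_losses])
print(f"attack balanced accuracy: {(member_rate + nonmember_rate) / 2:.3f}")
```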

Inference Attack, Membership Inference Attack
