Search Results for author: Sumit Kumar Jha

Found 6 papers, 1 papers with code

Protein Folding Neural Networks Are Not Robust

no code implementations • 9 Sep 2021 • Sumit Kumar Jha, Arvind Ramanathan, Rickard Ewetz, Alvaro Velasquez, Susmit Jha

We define the robustness of a protein sequence's predicted structure as the inverse of the root-mean-square distance (RMSD) between that structure and the structure predicted for an adversarially perturbed version of the sequence.

Adversarial Attack • Protein Folding
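The robustness measure described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes both structures are given as pre-aligned (N, 3) arrays of atom coordinates, and the function names are mine.

```python
# Hedged sketch of the inverse-RMSD robustness measure. Assumes both
# structures are pre-aligned (N, 3) coordinate arrays with matching
# atom order; names are illustrative, not from the paper's code.
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """Root-mean-square distance between matching atom coordinates."""
    return float(np.sqrt(np.mean(np.sum((coords_a - coords_b) ** 2, axis=1))))

def robustness(original: np.ndarray, perturbed: np.ndarray) -> float:
    """Inverse RMSD: a large structural change under a small sequence
    perturbation yields a small robustness score."""
    return 1.0 / rmsd(original, perturbed)
```

A structure that moves far from the original under an adversarial sequence perturbation thus scores as non-robust.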

CrossedWires: A Dataset of Syntactically Equivalent but Semantically Disparate Deep Learning Models

1 code implementation • 29 Aug 2021 • Max Zvyagin, Thomas Brettin, Arvind Ramanathan, Sumit Kumar Jha

Currently, our ability to build standardized deep learning models is limited by the availability of a suite of neural network and corresponding training hyperparameter benchmarks that expose differences between existing deep learning frameworks.

Hyperparameter Optimization

Robust Ensembles of Neural Networks using Itô Processes

no code implementations • 1 Jan 2021 • Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Alvaro Velasquez

We exploit this connection and the theory of stochastic dynamical systems to construct a novel ensemble of Itô processes as a new deep learning representation that is more robust than classical residual networks.
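The connection the abstract exploits is that a residual block x + f(x) is one Euler step of an ODE; adding Gaussian noise scaled by √dt turns it into an Euler-Maruyama step of an Itô SDE, and averaging independent sample paths gives an ensemble. The sketch below illustrates only this discretization idea; the toy drift and all names are mine, not the paper's model.

```python
# Hedged sketch of the ResNet <-> Ito process connection: each stochastic
# forward pass is an Euler-Maruyama discretization of dx = f(x) dt + sigma dW,
# and the ensemble output averages several independent sample paths.
# The drift function and hyperparameters are illustrative assumptions.
import numpy as np

def drift(x: np.ndarray) -> np.ndarray:
    """Toy residual map standing in for a learned layer f(x)."""
    return np.tanh(x)

def ito_forward(x0, steps=10, dt=0.1, sigma=0.1, rng=None):
    """One stochastic forward pass (one Euler-Maruyama sample path)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0.copy()
    for _ in range(steps):
        x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

def ensemble_forward(x0, n_samples=32, **kw):
    """Average over independent sample paths to form the ensemble output."""
    rng = np.random.default_rng(42)
    return np.mean([ito_forward(x0, rng=rng, **kw) for _ in range(n_samples)], axis=0)
```

With sigma = 0 the update reduces to an ordinary residual/Euler step, recovering the classical ResNet view.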

An Extension of Fano's Inequality for Characterizing Model Susceptibility to Membership Inference Attacks

no code implementations • 17 Sep 2020 • Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, Laura L. Pullum, Ananthram Swami

We present a new extension of Fano's inequality and employ it to theoretically establish that the probability of success for a membership inference attack on a deep neural network can be bounded using the mutual information between its inputs and its activations.

Inference Attack • Membership Inference Attack

Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics

no code implementations • 11 Sep 2020 • Jason W. Bentley, Daniel Gibney, Gary Hoppenworth, Sumit Kumar Jha

We demonstrate how a target model's generalization gap leads directly to an effective deterministic black box membership inference attack (MIA).

Inference Attack • Membership Inference Attack
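The intuition behind a gap-driven deterministic MIA can be sketched simply: when average training loss is well below average test loss, thresholding each example's loss between the two separates members from non-members. This is a generic loss-threshold attack under my own naming, not the paper's specific construction.

```python
# Hedged sketch of a deterministic black-box loss-threshold MIA.
# A large generalization gap (avg test loss >> avg train loss) makes
# a midpoint threshold on per-example loss an effective membership
# predictor. Function names and the midpoint rule are assumptions.
import numpy as np

def gap_threshold(avg_train_loss: float, avg_test_loss: float) -> float:
    """Midpoint threshold; the size of the gap drives attack accuracy."""
    return 0.5 * (avg_train_loss + avg_test_loss)

def mia_predict(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership (1 = training member) for each queried example."""
    return (losses < threshold).astype(int)
```

If the gap is zero the threshold separates nothing, which matches the intuition that well-generalized models leak less membership information.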
