no code implementations • 28 Mar 2024 • Ravi Mangal, Nina Narodytska, Divya Gopinath, Boyue Caroline Hu, Anirban Roy, Susmit Jha, Corina Pasareanu
The analysis of vision-based deep neural networks (DNNs) is highly desirable but very challenging, due to the difficulty of expressing formal specifications for vision tasks and the lack of efficient verification procedures.
no code implementations • 25 Mar 2024 • Weimin Lyu, Xiao Lin, Songzhu Zheng, Lu Pang, Haibin Ling, Susmit Jha, Chao Chen
Textual backdoor attacks pose significant security threats.
no code implementations • 3 Feb 2024 • Claudio Spiess, David Gros, Kunal Suresh Pai, Michael Pradel, Md Rafiqul Islam Rabin, Amin Alipour, Susmit Jha, Prem Devanbu, Toufique Ahmed
Our contributions will lead to better-calibrated decision-making in the current use of code generated by language models, and offer a framework for future research to further improve calibration methods for generative models in Software Engineering.
1 code implementation • 17 Nov 2023 • Adam D. Cobb, Brian Matejek, Daniel Elenius, Anirban Roy, Susmit Jha
Our estimator is simple to train and estimates the likelihood ratio using a single forward pass of the neural estimator.
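The classifier trick behind single-forward-pass likelihood-ratio estimation can be sketched as follows. This is an illustrative demonstration of the standard density-ratio identity, not the paper's actual estimator: if a classifier outputs the Bayes-optimal probability d(x) = p(x) / (p(x) + q(x)), then d / (1 - d) recovers the ratio p(x) / q(x). The Gaussian densities here are stand-ins for a trained neural estimator.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def likelihood_ratio_from_classifier(d):
    # If d(x) = p(x) / (p(x) + q(x)) is the Bayes-optimal classifier output,
    # then d / (1 - d) recovers the likelihood ratio p(x) / q(x).
    return d / (1.0 - d)

x = np.linspace(-2, 2, 5)
p, q = gaussian_pdf(x, 0.0, 1.0), gaussian_pdf(x, 1.0, 1.0)
d = p / (p + q)                       # ideal classifier probability
ratio = likelihood_ratio_from_classifier(d)
assert np.allclose(ratio, p / q)      # single "forward pass" yields the ratio
```

In practice the classifier is a neural network trained to distinguish samples from the two distributions, and the same odds transformation is applied to its output.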
no code implementations • 25 Oct 2023 • Hassen Saidi, Susmit Jha, Tuhin Sahai
As artificial intelligence (AI) gains greater adoption in a wide variety of applications, it has immense potential to contribute to mathematical discovery, by guiding conjecture generation, constructing counterexamples, assisting in formalizing mathematics, and discovering connections between different mathematical areas, to name a few.
no code implementations • 28 Sep 2023 • Sumit Kumar Jha, Susmit Jha, Patrick Lincoln, Nathaniel D. Bastian, Alvaro Velasquez, Rickard Ewetz, Sandeep Neema
We posit that satisfiability modulo theories (SMT) solvers can serve as deductive reasoning engines: they analyze the solutions generated by the LLMs, produce counterexamples when the solutions are incorrect, and provide that feedback to the LLMs, exploiting the dialog capability of instruct-trained LLMs.
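The counterexample-guided feedback loop described above can be sketched in miniature. In this illustrative sketch, a brute-force checker over a small domain stands in for the SMT solver, and a fixed list of candidate functions stands in for successive LLM responses; the names and candidates are hypothetical.

```python
# Illustrative counterexample-guided repair loop; a brute-force checker stands
# in for the SMT solver and a fixed candidate list stands in for the LLM.
def find_counterexample(candidate, spec, domain):
    for x in domain:
        if candidate(x) != spec(x):
            return x            # counterexample to feed back to the generator
    return None                 # candidate is correct on the whole domain

def repair_loop(candidates, spec, domain):
    feedback = []
    for cand in candidates:     # each round, "prompt" with accumulated feedback
        cex = find_counterexample(cand, spec, domain)
        if cex is None:
            return cand, feedback
        feedback.append(cex)
    return None, feedback

spec = lambda x: abs(x)                           # desired behaviour
candidates = [lambda x: x, lambda x: max(x, -x)]  # hypothetical LLM outputs
best, feedback = repair_loop(candidates, spec, range(-3, 4))
# the first candidate fails on a negative input; the second is accepted
```

A real instantiation would encode the candidate and specification as SMT formulas and let the solver search for the counterexample.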
no code implementations • 27 Sep 2023 • Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Alvaro Velasquez
We provide an empirical demonstration of the fragility of ResNet-like models to Gaussian noise perturbations: model performance deteriorates sharply, with the F1-score dropping to a near-insignificant 0.008 under Gaussian noise with a standard deviation of only 0.5.
no code implementations • ICCV 2023 • Indranil Sur, Karan Sikka, Matthew Walmer, Kaushik Koneripalli, Anirban Roy, Xiao Lin, Ajay Divakaran, Susmit Jha
We present TIJO (Trigger Inversion using Joint Optimization), a multimodal backdoor defense technique.
no code implementations • 25 Mar 2023 • Alexander M. Berenbeim, Iain J. Cruickshank, Susmit Jha, Robert H. Thomson, Nathaniel D. Bastian
Quantitative characterizations and estimations of uncertainty are of fundamental importance in optimization and decision-making processes.
no code implementations • 10 Jan 2023 • Ismail Alkhouri, Sumit Jha, Andre Beckus, George Atia, Alvaro Velasquez, Rickard Ewetz, Arvind Ramanathan, Susmit Jha
To measure the robustness of the predicted structures, we utilize (i) the root-mean-square deviation (RMSD) and (ii) the Global Distance Test (GDT) similarity measure between the predicted structure of the original sequence and the structure of its adversarially perturbed version.
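The two similarity measures named above have standard definitions that can be computed directly from matched coordinate sets. This is a minimal sketch using the textbook formulas (it assumes the structures are already superimposed and uses the conventional GDT_TS cutoffs of 1, 2, 4, and 8 Å); the coordinates are synthetic.

```python
import numpy as np

def rmsd(A, B):
    # root-mean-square deviation between matched coordinate sets (N x 3),
    # assuming the structures are already optimally superimposed
    return np.sqrt(np.mean(np.sum((A - B) ** 2, axis=1)))

def gdt_ts(A, B, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    # GDT_TS: mean over cutoffs of the fraction of residues whose
    # pairwise distance falls within each cutoff
    d = np.linalg.norm(A - B, axis=1)
    return np.mean([(d <= c).mean() for c in cutoffs])

A = np.zeros((4, 3))                                  # original structure
B = np.array([[0.5, 0, 0], [1.5, 0, 0],
              [3.0, 0, 0], [9.0, 0, 0]])              # perturbed structure
r = rmsd(A, B)       # per-residue distances: 0.5, 1.5, 3.0, 9.0
g = gdt_ts(A, B)     # fractions within cutoffs: 0.25, 0.5, 0.75, 0.75
```

Lower RMSD and higher GDT_TS indicate that the adversarial perturbation changed the predicted structure less.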
1 code implementation • 11 Nov 2022 • Adam D. Cobb, Anirban Roy, Daniel Elenius, Susmit Jha
In this paper, we develop an AI Designer that synthesizes novel UAV designs.
1 code implementation • 24 Jul 2022 • Ramneet Kaur, Kaustubh Sridhar, Sangdon Park, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution.
no code implementations • 6 Jul 2022 • Susmit Jha, John Rushby
Shared intentionality is a critical component in developing conscious AI agents capable of collaboration, self-reflection, deliberation, and reasoning.
no code implementations • 20 Jun 2022 • Akshayaa Magesh, Venugopal V. Veeravalli, Anirban Roy, Susmit Jha
While a number of tests for OOD detection have been proposed in prior work, a formal framework for studying this problem is lacking.
Out-of-Distribution (OOD) Detection
no code implementations • 14 Feb 2022 • Edmond Cunningham, Adam Cobb, Susmit Jha
In this paper we characterize the geometric structure of flows using principal manifolds and understand the relationship between latent variables and samples using contours.
no code implementations • 11 Feb 2022 • Manoj Acharya, Anirban Roy, Kaushik Koneripalli, Susmit Jha, Christopher Kanan, Ajay Divakaran
GCRN consists of two separate graphs to predict object labels based on the contextual cues in the image: 1) a representation graph to learn object features based on the neighboring objects and 2) a context graph to explicitly capture contextual cues from the neighboring objects.
Ranked #1 on Anomaly Detection on COCO-OOC
no code implementations • 7 Jan 2022 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Edgar Dobriban, Oleg Sokolsky, Insup Lee
We propose the new method iDECODe, leveraging in-distribution equivariance for conformal OOD detection.
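The conformal component of such detectors can be illustrated with the generic split-conformal p-value. This sketch shows only the standard p-value computation, not iDECODe's equivariance-based nonconformity score; the calibration scores and the level 0.2 are illustrative.

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    # split-conformal p-value of a test nonconformity score
    # against held-out in-distribution calibration scores
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

cal = np.array([0.1, 0.2, 0.3, 0.4, 0.5])    # in-distribution nonconformity
p_in  = conformal_pvalue(cal, 0.25)          # typical score -> large p-value
p_out = conformal_pvalue(cal, 0.9)           # extreme score -> small p-value
is_ood = p_out < 0.2                          # flag OOD at level 0.2
```

Under exchangeability of in-distribution data, the p-value is (super-)uniform, which bounds the false-alarm rate of the detector at the chosen level.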
1 code implementation • CVPR 2022 • Matthew Walmer, Karan Sikka, Indranil Sur, Abhinav Shrivastava, Susmit Jha
This is challenging for the attacker as the detector can distort or ignore the visual trigger entirely, which leads to models where backdoors are over-reliant on the language trigger.
1 code implementation • ICLR 2022 • Xiaoling Hu, Xiao Lin, Michael Cogswell, Yi Yao, Susmit Jha, Chao Chen
Despite their success and popularity, deep neural networks (DNNs) are vulnerable when facing backdoor attacks.
no code implementations • 29 Sep 2021 • Adam D. Cobb, Anirban Roy, Kaushik Koneripalli, Daniel Elenius, Susmit Jha
We use deep generative models to learn a manifold of the valid design space, followed by Hamiltonian Monte Carlo (HMC) with simulated annealing to explore and optimize design over the learned manifold, producing a diverse set of optimal designs.
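The explore-and-optimize step over a learned latent manifold can be sketched with a simplified stand-in: random-walk Metropolis with a geometric cooling schedule in place of HMC, and a quadratic cost in place of decoding and scoring an actual UAV design. Everything here (the objective, schedule, and step size) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(z):
    # hypothetical design cost in a 2-D latent space (lower is better);
    # a real system would decode z and score the resulting UAV design
    return np.sum((z - np.array([1.0, -2.0])) ** 2)

def anneal(z, steps=2000, t0=1.0, t1=1e-3):
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)         # geometric cooling schedule
        prop = z + rng.normal(scale=0.1, size=z.shape)
        # Metropolis accept: always take improvements, sometimes take
        # worse moves while the temperature is still high
        if rng.random() < np.exp(min(0.0, (objective(z) - objective(prop)) / t)):
            z = prop
    return z

z_opt = anneal(np.zeros(2))
```

Running several chains from different starting points, as annealing with a stochastic proposal naturally supports, is one way to obtain a diverse set of near-optimal designs.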
no code implementations • 9 Sep 2021 • Sumit Kumar Jha, Arvind Ramanathan, Rickard Ewetz, Alvaro Velasquez, Susmit Jha
We define the robustness measure for the predicted structure of a protein sequence as the inverse of the root-mean-square deviation (RMSD) between the predicted structure and the structure of its adversarially perturbed sequence.
no code implementations • 13 Aug 2021 • Ramneet Kaur, Susmit Jha, Anirban Roy, Sangdon Park, Oleg Sokolsky, Insup Lee
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detection of OODs as datapoints with high uncertainty (epistemic or aleatoric).
no code implementations • 29 Mar 2021 • Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha
Recent studies have shown that neural networks are vulnerable to Trojan attacks, where a network is trained to respond to specially crafted trigger patterns in the inputs in specific and potentially malicious ways.
no code implementations • 23 Mar 2021 • Ramneet Kaur, Susmit Jha, Anirban Roy, Oleg Sokolsky, Insup Lee
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution (OOD) inputs.
no code implementations • 1 Jan 2021 • Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Alvaro Velasquez
We exploit this connection and the theory of stochastic dynamical systems to construct a novel ensemble of Itô processes as a new deep learning representation that is more robust than classical residual networks.
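The connection exploited here is that a residual update x + f(x) is the sigma = 0, dt = 1 case of an Euler-Maruyama discretization of an Itô SDE. This is a minimal illustration of that correspondence with a hypothetical linear drift, not the paper's ensemble construction.

```python
import numpy as np

def residual_step(x, f):
    # classical residual-block update: x_{k+1} = x_k + f(x_k)
    return x + f(x)

def euler_maruyama_step(x, f, sigma, dt, rng):
    # one Euler-Maruyama step of the Ito SDE dX = f(X) dt + sigma dW;
    # with sigma = 0 and dt = 1 this reduces to the residual update
    return x + f(x) * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)

f = lambda x: -0.5 * x                   # hypothetical learned drift
x = np.ones(3)
rng = np.random.default_rng(0)
noisy = euler_maruyama_step(x, f, sigma=0.1, dt=1.0, rng=rng)
clean = euler_maruyama_step(x, f, sigma=0.0, dt=1.0, rng=rng)
assert np.allclose(clean, residual_step(x, f))
```

Averaging the outputs of several independent noisy trajectories is what turns a single stochastic forward pass into an ensemble prediction.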
no code implementations • 3 Dec 2020 • Karan Sikka, Indranil Sur, Susmit Jha, Anirban Roy, Ajay Divakaran
We target the problem of detecting Trojans or backdoors in DNNs.
no code implementations • 17 Sep 2020 • Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, Laura L. Pullum, Ananthram Swami
We present a new extension of Fano's inequality and employ it to theoretically establish that the probability of success for a membership inference attack on a deep neural network can be bounded using the mutual information between its inputs and its activations.
1 code implementation • NeurIPS 2019 • Susmit Jha, Sunny Raj, Steven Fernandes, Sumit K. Jha, Somesh Jha, Brian Jalaian, Gunjan Verma, Ananthram Swami
These experiments demonstrate the effectiveness of the ABC metric to make DNNs more trustworthy and resilient.
no code implementations • 29 Oct 2019 • Tuhin Sahai, Anurag Mishra, Jose Miguel Pasini, Susmit Jha
Given a Boolean formula $\phi(x)$ in conjunctive normal form (CNF), the density of states counts the number of variable assignments that violate exactly $e$ clauses, for all values of $e$.
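The quantity defined above can be computed exactly by brute force on small instances, which makes the definition concrete. This sketch enumerates all assignments (exponential in the number of variables, so purely illustrative; the paper's contribution is avoiding exactly this enumeration).

```python
from itertools import product

def density_of_states(clauses, n_vars):
    # clauses: list of tuples of literals; literal v means x_v is True,
    # literal -v means x_v is False (variables are 1-indexed)
    counts = {}
    for bits in product([False, True], repeat=n_vars):
        violated = sum(
            not any(bits[abs(l) - 1] == (l > 0) for l in clause)
            for clause in clauses
        )
        counts[violated] = counts.get(violated, 0) + 1
    return counts

# phi = (x1) AND (NOT x1 OR x2) over 2 variables:
# only (True, True) satisfies both clauses
counts = density_of_states([(1,), (-1, 2)], 2)
```

The entry at e = 0 is the ordinary model count, so density of states strictly generalizes #SAT.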
no code implementations • ICLR 2020 • Uyeong Jang, Susmit Jha, Somesh Jha
These defenses rely on the assumption that data lie in a manifold of a lower dimension than the input space.
no code implementations • 14 Mar 2019 • Susmit Jha, Sunny Raj, Steven Lawrence Fernandes, Sumit Kumar Jha, Somesh Jha, Gunjan Verma, Brian Jalaian, Ananthram Swami
We study the robustness of machine learning models on benign and adversarial inputs in this neighborhood.
2 code implementations • 1 Mar 2019 • Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li
Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time.
no code implementations • 18 May 2018 • Shalini Ghosh, Amaury Mercier, Dheeraj Pichapati, Susmit Jha, Vinod Yegneswaran, Patrick Lincoln
Experiments using our first approach of a multi-headed TNN model, on a dataset generated by a customized version of TORCS, show that (1) adding safety constraints to a neural network model results in increased performance and safety, and (2) the improvement increases with increasing importance of the safety constraints.
no code implementations • NeurIPS 2018 • Marcell Vazquez-Chanlatte, Susmit Jha, Ashish Tiwari, Mark K. Ho, Sanjit A. Seshia
In this paper, we formulate the specification inference task as a maximum a posteriori (MAP) probability inference problem, apply the principle of maximum entropy to derive an analytic demonstration likelihood model and give an efficient approach to search for the most likely specification in a large candidate pool of specifications.
no code implementations • 26 Sep 2017 • Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, Ashish Tiwari
We demonstrate the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification.
no code implementations • 15 May 2015 • Susmit Jha, Sanjit A. Seshia
In this paper, we present a theoretical framework for formal inductive synthesis.
no code implementations • 21 Jul 2014 • Susmit Jha, Sanjit A. Seshia
The history-bounded counterexample used in any iteration of CEGIS is bounded by the examples used in previous iterations of inductive synthesis.
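The basic CEGIS loop that this variant refines can be sketched in a few lines. This is a generic toy instance (synthesizing an integer offset against a hidden specification); it does not model the history-bounded restriction on which counterexamples the verifier may return, which is the subject of the paper.

```python
# Minimal CEGIS sketch: synthesize an offset c so that f(x) = x + c matches a
# hidden spec on a bounded domain; the verifier returns one counterexample per
# round, and the learner proposes a candidate consistent with all examples
def verify(c, spec, domain):
    for x in domain:
        if x + c != spec(x):
            return x                      # counterexample for the learner
    return None                           # candidate is correct

def cegis(spec, domain, c_range=range(0, 10)):
    examples = []
    c = 0
    while True:
        cex = verify(c, spec, domain)
        if cex is None:
            return c, examples
        examples.append((cex, spec(cex)))
        # learner: pick a candidate consistent with all examples so far
        c = next(k for k in c_range
                 if all(x + k == y for x, y in examples))

spec = lambda x: x + 3                    # hidden specification
c, examples = cegis(spec, range(-5, 6))
```

The history-bounded variant constrains the verifier's choice of counterexample relative to the example history, which changes what the learner can conclude from each round.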