4 code implementations • 31 May 2024 • Zhouxing Shi, Qirui Jin, Zico Kolter, Suman Jana, Cho-Jui Hsieh, Huan Zhang
GenBaB is part of the latest $\alpha,\!\beta$-CROWN, the winner of the 4th and 5th International Verification of Neural Networks Competitions (VNN-COMP 2023 and 2024).
1 code implementation • 10 Jan 2024 • Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of the trustworthiness of mainstream LLMs, and a discussion of open challenges and future directions.
1 code implementation • 21 Oct 2023 • Marcus J. Min, Yangruibo Ding, Luca Buratti, Saurabh Pujar, Gail Kaiser, Suman Jana, Baishakhi Ray
In this paper, we first formally define the self-consistency of Code LLMs and then design a framework, IdentityChain, which effectively and efficiently evaluates the self-consistency and conventional accuracy of a model at the same time.
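The self-consistency notion can be pictured as a round trip: summarize code into natural language, regenerate code from the summary, and check that behavior is preserved. The sketch below is a minimal illustration of that loop rather than IdentityChain's actual API; `generate` and `tests_pass` are hypothetical placeholders for a Code LLM call and a test oracle.

```python
# Minimal sketch of a code -> natural language -> code self-consistency loop,
# in the spirit of IdentityChain. `generate` and `tests_pass` are hypothetical
# placeholders for a Code LLM call and a test oracle, not the paper's API.
from typing import Callable

def self_consistency_rate(
    programs: list[str],
    tests: list[list[tuple]],
    generate: Callable[[str, str], str],       # (instruction, source) -> output
    tests_pass: Callable[[str, list[tuple]], bool],
) -> float:
    """Fraction of programs that survive a code -> NL -> code round trip."""
    consistent = 0
    for code, test_cases in zip(programs, tests):
        summary = generate("summarize this function in natural language", code)
        regenerated = generate("implement a function from this description", summary)
        # Self-consistent if the regenerated code behaves like the original.
        if tests_pass(regenerated, test_cases):
            consistent += 1
    return consistent / max(len(programs), 1)
```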
1 code implementation • 19 Oct 2023 • Chong Xiang, Tong Wu, Sihui Dai, Jonathan Petit, Suman Jana, Prateek Mittal
State-of-the-art defenses against adversarial patch attacks can now achieve strong certifiable robustness with a marginal drop in model utility.
no code implementations • 7 Aug 2023 • Kexin Pei, Weichen Li, Qirui Jin, Shuyang Liu, Scott Geng, Lorenzo Cavallaro, Junfeng Yang, Suman Jana
This paper tackles the challenge of teaching code semantics to Large Language Models (LLMs) for program analysis by incorporating code symmetries into the model architecture.
no code implementations • 4 Oct 2022 • Kexin Pei, Dongdong She, Michael Wang, Scott Geng, Zhou Xuan, Yaniv David, Junfeng Yang, Suman Jana, Baishakhi Ray
Notably, NeuDep also outperforms the current state-of-the-art on these tasks.
4 code implementations • 11 Aug 2022 • Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods.
no code implementations • 29 Sep 2021 • Huan Zhang, Shiqi Wang, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
In this work, we formulate an adversarial attack using a branch-and-bound (BaB) procedure on ReLU neural networks and search for adversarial examples in the activation space corresponding to binary variables in a mixed integer programming (MIP) formulation.
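As a rough illustration of the search space, the sketch below brute-forces the binary activation pattern of a one-hidden-layer ReLU network and solves a small LP per pattern with SciPy. It is a toy stand-in under those assumptions, not the paper's attack, which branches over the binary variables instead of enumerating them; all names here are illustrative.

```python
# Sketch: search for adversarial examples over ReLU activation patterns.
# Fixing the on/off pattern of the hidden layer (the binary variables in a
# MIP encoding) makes a one-hidden-layer network affine, so each pattern
# reduces to a small LP over the input box.
import itertools
import numpy as np
from scipy.optimize import linprog

def pattern_attack(W1, b1, w2, b2, x0, eps):
    """Minimize the scalar margin w2 . relu(W1 x + b1) + b2 over the
    L-infinity ball of radius eps around x0, one pattern at a time."""
    k = W1.shape[0]
    lo, hi = x0 - eps, x0 + eps
    best_margin, best_x = np.inf, None
    for s in itertools.product([0, 1], repeat=k):
        s = np.array(s)
        # Affine objective under this pattern: w2 * s * (W1 x + b1) + b2.
        c = (w2 * s) @ W1
        const = (w2 * s) @ b1 + b2
        # Pattern constraints: (W1 x + b1)_i >= 0 if s_i = 1, else <= 0.
        A = np.vstack([-W1[s == 1], W1[s == 0]])
        b = np.concatenate([b1[s == 1], -b1[s == 0]])
        res = linprog(c, A_ub=A, b_ub=b, bounds=list(zip(lo, hi)), method="highs")
        if res.success and res.fun + const < best_margin:
            best_margin, best_x = res.fun + const, res.x
    return best_margin, best_x   # margin < 0 means an adversarial input exists
```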
no code implementations • 18 Jun 2021 • Suyoung Lee, Wonho Song, Suman Jana, Meeyoung Cha, Sooel Son
Trigger-set-based watermarking schemes have attracted growing attention because they provide a means for deep neural network model owners to prove ownership.
no code implementations • NeurIPS 2021 • Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
We develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron split constraints in branch-and-bound (BaB) based complete verification via optimizable parameters $\beta$.
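At a conceptual level (a simplified summary, not the paper's full derivation), a split constraint on a pre-activation $z_i(x)$, say $z_i(x) \ge 0$ for the active branch, can be folded into the verification objective with a multiplier $\beta_i \ge 0$, and any such $\beta_i$ gives a valid lower bound:

$$\min_{x \in \mathcal{C},\; z_i(x) \ge 0} f(x) \;\ge\; \max_{\beta_i \ge 0}\; \min_{x \in \mathcal{C}} \big( f(x) - \beta_i\, z_i(x) \big).$$

The inner minimum is bounded with CROWN-style bound propagation, and $\beta_i$ is tightened by projected gradient ascent, which keeps the procedure GPU-friendly.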
1 code implementation • 24 May 2021 • Yizheng Chen, Shiqi Wang, Yue Qin, Xiaojing Liao, Suman Jana, David Wagner
Since data distribution shift is very common in security applications (e.g., it is often observed in malware detection), local robustness cannot guarantee that the property holds for unseen inputs at the time the classifier is deployed.
5 code implementations • NeurIPS 2021 • Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
Compared to semidefinite programming (SDP) based incomplete verifiers, which are typically the tightest but very costly, we obtain higher verified accuracy with three orders of magnitude less verification time.
2 code implementations • 16 Dec 2020 • Kexin Pei, Zhou Xuan, Junfeng Yang, Suman Jana, Baishakhi Ray
We thus train the model to learn execution semantics from the functions' micro-traces, without any manual labeling effort.
4 code implementations • ICLR 2021 • Kaidi Xu, Huan Zhang, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, Cho-Jui Hsieh
Formal verification of neural networks (NNs) is a challenging and important problem.
1 code implementation • 2 Oct 2020 • Kexin Pei, Jonas Guan, David Williams-King, Junfeng Yang, Suman Jana
We present XDA, a transfer-learning-based disassembly framework that learns different contextual dependencies present in machine code and transfers this knowledge for accurate and robust disassembly.
2 code implementations • NeurIPS 2020 • Debmalya Mandal, Samuel Deng, Suman Jana, Jeannette M. Wing, Daniel Hsu
In this work, we develop classifiers that are fair not only with respect to the training distribution, but also for a class of distributions that are weighted perturbations of the training samples.
no code implementations • 4 Jun 2020 • Bai Li, Shiqi Wang, Suman Jana, Lawrence Carin
Current neural-network-based classifiers are susceptible to adversarial examples.
1 code implementation • 25 May 2020 • Dongdong She, Rahul Krishna, Lu Yan, Suman Jana, Baishakhi Ray
The compact embedding can be used to guide the mutation process effectively by focusing most of the mutations on the parts of the embedding where the gradient is high.
Software Engineering
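The gradient-guided mutation strategy described above can be sketched roughly as follows. The surrogate network stands in for the learned embedding (it is not the paper's model), and byte positions with the largest input gradients are mutated first.

```python
# Sketch: use input-gradient saliency from a surrogate network to pick which
# bytes of a fuzzing seed to mutate. The surrogate is a stand-in for the
# paper's learned multi-task embedding, not its actual model.
import random
import torch

def hot_byte_positions(model: torch.nn.Module, seed: bytes, top_k: int = 16):
    """Rank byte positions by the gradient magnitude of the embedding."""
    x = torch.tensor([b / 255.0 for b in seed], requires_grad=True)
    embedding = model(x)            # surrogate embedding of the input bytes
    embedding.sum().backward()      # saliency of every byte position
    saliency = x.grad.abs()
    return torch.topk(saliency, k=min(top_k, len(seed))).indices.tolist()

def mutate(seed: bytes, positions, rng=random) -> bytes:
    """Flip the chosen high-gradient bytes to random values."""
    data = bytearray(seed)
    for pos in positions:
        data[pos] = rng.randrange(256)
    return bytes(data)
```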
1 code implementation • 17 Mar 2020 • Jianan Yao, Gabriel Ryan, Justin Wong, Suman Jana, Ronghui Gu
In this paper, we introduce a new neural architecture for general SMT learning, the Gated Continuous Logic Network (G-CLN), and apply it to nonlinear loop invariant learning.
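The underlying continuous-logic idea can be sketched as follows: relax SMT predicates into smooth truth values, combine them with a t-norm, and fit template coefficients on program trace data by gradient descent. This is a simplified illustration under those assumptions, not the paper's gated architecture.

```python
# Sketch of the continuous-logic relaxation behind CLNs: predicates become
# smooth truth values in (0, 1], conjunction becomes a product t-norm, and
# template coefficients are fit to trace data by gradient descent.
import torch

def soft_eq(t: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Continuous truth value of the predicate t == 0."""
    return torch.exp(-(t / sigma) ** 2)

def soft_and(truths: torch.Tensor) -> torch.Tensor:
    """Product t-norm: conjunction of a vector of truth values."""
    return truths.prod()

# Loop trace of a toy program in which the invariant y == 2*x + 1 holds.
trace = torch.tensor([[0., 1.], [1., 3.], [2., 5.], [3., 7.]])
features = torch.cat([trace, torch.ones(len(trace), 1)], dim=1)   # [x, y, 1]

w = torch.randn(3, requires_grad=True)     # template: w . [x, y, 1] == 0
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    residuals = features @ (w / w.norm())  # normalize to avoid w -> 0
    # Maximizing the product t-norm of Gaussian truth values (sigma = 1) is
    # the same as minimizing its negative log: the sum of squared residuals.
    loss = (residuals ** 2).sum()
    loss.backward()
    opt.step()

unit_w = (w / w.norm()).detach()
print(unit_w, float(soft_and(soft_eq(features @ unit_w))))
# unit_w ends up proportional to (2, -1, 1), i.e. 2*x - y + 1 == 0, and the
# conjunction's truth value approaches 1 on the trace.
```

Rounding the learned coefficients to small integers and checking the resulting formula with an SMT solver then turns the fitted weights into a verified candidate invariant.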
1 code implementation • 6 Mar 2020 • Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana
Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network, there exist trainable sub-networks that perform as well as or better than the original model with a commensurate number of training steps.
4 code implementations • NeurIPS 2020 • Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
We demonstrate that our approach, titled HYDRA, yields compressed networks that simultaneously achieve state-of-the-art benign and robust accuracy.
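The pruning step can be pictured as keeping only the highest-scoring weights in each layer, as in the sketch below; weight magnitude is used here as a stand-in importance score, whereas HYDRA learns the scores with a robustness-aware training objective before applying the mask.

```python
# Sketch of score-based weight pruning in the spirit of HYDRA: keep only the
# top-k weights of a layer according to an importance score. Weight magnitude
# is a stand-in score here; the paper learns the scores under a robust loss.
import torch

def prune_layer(layer: torch.nn.Linear, keep_ratio: float = 0.1) -> torch.Tensor:
    """Zero out all but the highest-scoring weights; return the binary mask."""
    scores = layer.weight.detach().abs()           # stand-in importance scores
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = scores.flatten().topk(k).values.min()
    mask = (scores >= threshold).float()
    with torch.no_grad():
        layer.weight.mul_(mask)                    # apply the pruning mask
    return mask
```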
2 code implementations • 3 Dec 2019 • Yizheng Chen, Shiqi Wang, Weifan Jiang, Asaf Cidon, Suman Jana
Attackers incur different costs to manipulate different features of security classifiers.
1 code implementation • ICLR 2020 • Gabriel Ryan, Justin Wong, Jianan Yao, Ronghui Gu, Suman Jana
We use CLNs to implement a new inference system for loop invariants, CLN2INV, that significantly outperforms existing approaches on the popular Code2Inv dataset.
no code implementations • 8 Sep 2019 • Gabriel Ryan, Abhishek Shah, Dongdong She, Koustubha Bhat, Suman Jana
Dataflow tracking with Dynamic Taint Analysis (DTA) is an important method in systems security with many applications, including exploit analysis, guided fuzzing, and side-channel information leak detection.
Cryptography and Security
no code implementations • 8 Jul 2019 • Dongdong She, Yizheng Chen, Baishakhi Ray, Suman Jana
Dynamic taint analysis (DTA) is widely used by various applications to track information flow during runtime execution.
Cryptography and Security
no code implementations • 14 Jun 2019 • Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
In this work, we rigorously study the extension of network pruning strategies to preserve both benign accuracy and robustness of a network.
no code implementations • 5 Jun 2019 • Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
In this paper, we present interval attacks, a new technique to find adversarial examples to evaluate the robustness of neural networks.
1 code implementation • 6 Apr 2019 • Yizheng Chen, Shiqi Wang, Dongdong She, Suman Jana
A practically useful malware classifier must be robust against evasion attacks.
1 code implementation • 6 Nov 2018 • Shiqi Wang, Yizheng Chen, Ahmed Abdou, Suman Jana
Making neural networks robust against adversarial inputs has resulted in an arms race between new defenses and attacks.
2 code implementations • NeurIPS 2018 • Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
Our approach can check different safety properties and find concrete counterexamples for networks that are 10$\times$ larger than the ones supported by existing analysis techniques.
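The flavor of the analysis can be conveyed with plain interval bound propagation, sketched below; the paper's symbolic interval analysis is considerably tighter, but the layer-by-layer propagation and the subsequent input splitting follow the same outline.

```python
# Sketch: naive interval bound propagation through a ReLU network, used to
# check an output safety property over an input box. Symbolic intervals, as
# in the paper, track linear dependencies and give much tighter bounds.
import numpy as np

def propagate_intervals(weights, biases, lo, hi):
    """Propagate the input box [lo, hi] through affine + ReLU layers."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:                  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# A property such as "output 0 stays below 0 on the whole box" is proved if
# hi[0] < 0; otherwise the box is bisected and re-checked, or a concrete
# counterexample is searched for inside the box.
```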
1 code implementation • 15 Jul 2018 • Dongdong She, Kexin Pei, Dave Epstein, Junfeng Yang, Baishakhi Ray, Suman Jana
However, even state-of-the-art fuzzers are not very efficient at finding hard-to-trigger software bugs.
3 code implementations • 28 Apr 2018 • Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
In this paper, we present a new direction for formally checking security properties of DNNs without using SMT solvers.
6 code implementations • 9 Feb 2018 • Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, Suman Jana
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth.
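The defense side of this line of work can be sketched by its prediction step: classify many noise-perturbed copies of an input and aggregate the scores. The snippet below shows only that step; the certified bound that PixelDP derives from differential privacy is omitted, and adding the noise directly to the input (rather than to an internal layer) is a simplification.

```python
# Sketch of noise-averaged prediction, the inference step behind defenses
# such as PixelDP. The certification argument itself is not shown here.
import torch

def smoothed_predict(model: torch.nn.Module, x: torch.Tensor,
                     sigma: float = 0.25, n_samples: int = 100) -> int:
    """Return the class with the highest expected score under Gaussian noise."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        probs = torch.softmax(model(noisy), dim=-1).mean(dim=0)
    return int(probs.argmax())
```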
no code implementations • 5 Dec 2017 • Kexin Pei, Linjie Zhu, Yinzhi Cao, Junfeng Yang, Carl Vondrick, Suman Jana
Finally, we show that retraining using the safety violations detected by VeriVis can reduce the average number of violations by up to 60.2%.
no code implementations • 28 Aug 2017 • Theofilos Petsios, Jason Zhao, Angelos D. Keromytis, Suman Jana
When such conditions are met, an attacker can launch Denial-of-Service attacks against a vulnerable application by providing inputs that trigger the worst-case behavior.
Cryptography and Security
1 code implementation • 28 Aug 2017 • Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray
Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions, which becomes prohibitively expensive as the number of test conditions increases.
3 code implementations • 18 May 2017 • Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana
First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs.
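Neuron coverage can be computed with a few forward hooks, as in the sketch below. It assumes the model exposes its activations as `torch.nn.ReLU` modules, each used once per forward pass, and it skips the per-layer activation scaling of the original definition.

```python
# Sketch of neuron coverage (DeepXplore): the fraction of neurons whose
# activation exceeds a threshold on at least one test input.
import torch

def neuron_coverage(model: torch.nn.Module, inputs: torch.Tensor,
                    threshold: float = 0.25) -> float:
    activated = {}

    def hook(name):
        def _record(_module, _inp, out):
            # A neuron is covered if it fires above the threshold for at
            # least one input in the batch.
            fired = (out > threshold).reshape(out.shape[0], -1).any(dim=0)
            seen = activated.setdefault(name, torch.zeros_like(fired))
            activated[name] = seen | fired
        return _record

    handles = [m.register_forward_hook(hook(n))
               for n, m in model.named_modules()
               if isinstance(m, torch.nn.ReLU)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()
    covered = sum(int(v.sum()) for v in activated.values())
    total = sum(v.numel() for v in activated.values())
    return covered / max(total, 1)
```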