no code implementations • 10 Jan 2025 • Lucas C. Cordeiro, Matthew L. Daggitt, Julien Girard-Satabin, Omri Isac, Taylor T. Johnson, Guy Katz, Ekaterina Komendantskaya, Augustin Lemesle, Edoardo Manino, Artjoms Šinkarovs, Haoze Wu
Neural network verification is a new and rapidly developing field of research.
no code implementations • 7 Aug 2024 • Guy Amir, Shahaf Bassan, Guy Katz
Our findings prove that the expressiveness of the distribution can significantly influence the overall complexity of interpretation, and identify essential prerequisites that a model must possess to generate socially aligned explanations.
1 code implementation • 9 Jul 2024 • Udayan Mandal, Guy Amir, Haoze Wu, Ieva Daukantas, Fletcher Lee Newell, Umberto Ravaioli, Baoluo Meng, Michael Durling, Kerianne Hobbs, Milan Ganai, Tobey Shim, Guy Katz, Clark Barrett
In recent years, deep reinforcement learning (DRL) approaches have generated highly successful controllers for a myriad of complex domains.
no code implementations • 1 Jul 2024 • Yizhak Y. Elboher, Avraham Raviv, Yael Leibovich Weiss, Omer Cohen, Roy Assa, Guy Katz, Hillel Kugler
Deep neural networks (DNNs) are widely used in real-world applications, yet they remain vulnerable to errors and adversarial attacks.
no code implementations • 10 Jun 2024 • Davide Corsi, Guy Amir, Andoni Rodriguez, Cesar Sanchez, Guy Katz, Roy Fox
Our approach combines both formal and probabilistic verification tools to partition the input domain into safe and unsafe regions.
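The safe/unsafe partitioning idea above can be illustrated with a toy sketch: split a 1-D input domain into cells, decide each cell exactly for a simple property, and read off a coarse probabilistic violation estimate. The property, cell count, and `classify_cells` helper are all invented for illustration and are not the paper's actual tools.

```python
# Partition a 1-D input domain into cells and label each one safe or unsafe
# for a toy property "f(x) = x^2 stays below 2"; the fraction of unsafe
# cells then gives a coarse estimate of the violation probability under a
# uniform input distribution, in the spirit of combining formal and
# probabilistic analysis.
def classify_cells(lo, hi, cells=100, threshold=2.0):
    step = (hi - lo) / cells
    labels = []
    for i in range(cells):
        a, b = lo + i * step, lo + (i + 1) * step
        worst = max(a * a, b * b)  # exact worst case of x^2 on the cell
        labels.append(worst < threshold)
    return labels

labels = classify_cells(-2.0, 2.0)
unsafe_fraction = labels.count(False) / len(labels)
print(unsafe_fraction)  # 0.3, close to the true measure (4 - 2*sqrt(2))/4
```

The cell-wise worst case over-approximates, so the estimate (0.3) slightly exceeds the true unsafe measure (about 0.293); refining the partition tightens it.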
no code implementations • 6 Jun 2024 • Andoni Rodriguez, Guy Amir, Davide Corsi, Cesar Sanchez, Guy Katz
To the best of our knowledge, this is the first approach for synthesizing shields for such expressivity.
no code implementations • 5 Jun 2024 • Shahaf Bassan, Guy Amir, Guy Katz
We propose a framework for bridging this gap, by using computational complexity theory to assess local and global perspectives of interpreting ML models.
no code implementations • 4 Jun 2024 • Guy Amir, Osher Maayan, Tom Zelazny, Guy Katz, Michael Schapira
Deep neural networks (DNNs) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains.
1 code implementation • 22 May 2024 • Udayan Mandal, Guy Amir, Haoze Wu, Ieva Daukantas, Fletcher Lee Newell, Umberto J. Ravaioli, Baoluo Meng, Michael Durling, Milan Ganai, Tobey Shim, Guy Katz, Clark Barrett
A promising approach for providing strong guarantees on an agent's behavior is to use Neural Lyapunov Barrier (NLB) certificates, which are learned functions over the system whose properties indirectly imply that an agent behaves as desired.
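A minimal sketch of what an NLB-style certificate asserts, on an invented one-dimensional system: the barrier function must be non-positive on the initial set, positive on the unsafe set, and non-increasing along the dynamics. The system, sets, and sampling-based check below are illustrative stand-ins; a real certificate would be discharged by a formal verifier, not by sampling.

```python
import numpy as np

# Discrete-time toy system x' = 0.5 * x with candidate barrier B(x) = x^2 - 1.
step = lambda x: 0.5 * x
barrier = lambda x: x**2 - 1.0

def check_barrier(samples):
    """Sampling-based sanity check of the three barrier conditions:
      1. B <= 0 on the initial set  (|x| <= 0.5)
      2. B >  0 on the unsafe set   (|x| >= 2)
      3. B does not increase along the dynamics.
    Together these imply trajectories from the initial set never reach
    the unsafe set."""
    init = samples[np.abs(samples) <= 0.5]
    unsafe = samples[np.abs(samples) >= 2.0]
    return (np.all(barrier(init) <= 0)
            and np.all(barrier(unsafe) > 0)
            and np.all(barrier(step(samples)) <= barrier(samples)))

print(check_barrier(np.linspace(-3, 3, 601)))  # True
```

Here condition 3 holds analytically as well: B(0.5x) - B(x) = -0.75 x^2 <= 0.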
no code implementations • 17 May 2024 • Remi Desmartin, Omri Isac, Ekaterina Komendantskaya, Kathrin Stark, Grant Passmore, Guy Katz
Recent advances in the verification of deep neural networks (DNNs) have opened the way for broader usage of DNN verification technology in many application areas, including safety-critical ones.
1 code implementation • 15 Mar 2024 • Marco Casadio, Tanvi Dinkar, Ekaterina Komendantskaya, Luca Arnaboldi, Matthew L. Daggitt, Omri Isac, Guy Katz, Verena Rieser, Oliver Lemon
In this paper, we attempt to distil and evaluate the general components of an NLP verification pipeline that emerge from progress in the field to date.
no code implementations • 7 Feb 2024 • Davide Corsi, Guy Amir, Guy Katz, Alessandro Farinelli
In recent years, Deep Reinforcement Learning (DRL) has become a popular paradigm in machine learning due to its successful applications to real-world and complex systems.
1 code implementation • 25 Jan 2024 • Haoze Wu, Omri Isac, Aleksandar Zeljić, Teruhiro Tagomori, Matthew Daggitt, Wen Kokke, Idan Refaeli, Guy Amir, Kyle Julian, Shahaf Bassan, Pei Huang, Ori Lahav, Min Wu, Min Zhang, Ekaterina Komendantskaya, Guy Katz, Clark Barrett
This paper serves as a comprehensive system description of version 2.0 of the Marabou framework for formal analysis of neural networks.
no code implementations • 8 Jan 2024 • Yizhak Elboher, Raya Elsaleh, Omri Isac, Mélanie Ducoffe, Audrey Galametz, Guillaume Povéda, Ryma Boumazouza, Noémie Cohen, Guy Katz
As deep neural networks (DNNs) are becoming the prominent solution for many computational problems, the aviation industry seeks to explore their potential in alleviating pilot workload and in improving operational safety.
no code implementations • 4 Jan 2024 • Guy Katz, Natan Levy, Idan Refaeli, Raz Yerushalmi
Software development in the aerospace domain requires adhering to strict, high-quality standards.
no code implementations • 6 Sep 2023 • Ophir M. Carmel, Guy Katz
Further, it incurs only a very slight performance penalty, and in some cases even improves performance, while significantly reducing the frequency of undesirable behavior.
no code implementations • 31 Jul 2023 • Shahaf Bassan, Guy Amir, Davide Corsi, Idan Refaeli, Guy Katz
We evaluate our approach on two popular benchmarks from the domain of automated navigation, and observe that our methods allow the efficient computation of minimal and minimum explanations, significantly outperforming the state of the art.
no code implementations • 12 Jul 2023 • Remi Desmartin, Omri Isac, Grant Passmore, Kathrin Stark, Guy Katz, Ekaterina Komendantskaya
In this work, we present a novel implementation of a proof checker for DNN verification.
no code implementations • 29 May 2023 • Raya Elsaleh, Guy Katz
We were able to simplify many of the verification queries that trigger these faulty behaviors, by as much as 99%.
no code implementations • 11 Feb 2023 • Guy Amir, Osher Maayan, Tom Zelazny, Guy Katz, Michael Schapira
Deep neural networks (DNNs) are the workhorses of deep learning, which constitutes the state of the art in numerous application domains.
no code implementations • 27 Jan 2023 • Xingwu Guo, Ziwei Zhou, Yueling Zhang, Guy Katz, Min Zhang
The experimental results demonstrate our approach's effectiveness and efficiency in verifying DNNs' robustness against various occlusions, and its ability to generate counterexamples when these DNNs are not robust.
1 code implementation • 19 Jan 2023 • Adiel Ashrov, Guy Katz
Deep neural networks (DNNs) have become a crucial instrument in the software development toolkit, due to their ability to efficiently solve complex problems.
no code implementations • 5 Jan 2023 • Natan Levy, Raz Yerushalmi, Guy Katz
Multiple studies have demonstrated that even modern DNNs are susceptible to adversarial inputs, and this risk must thus be measured and mitigated to allow the deployment of DNNs in critical settings.
no code implementations • 6 Dec 2022 • Guy Amir, Ziv Freund, Guy Katz, Elad Mandelbaum, Idan Refaeli
In this short paper, we present our ongoing work on the veriFIRE project, a collaboration between industry and academia aimed at using verification to increase the reliability of a real-world, safety-critical system.
no code implementations • 21 Nov 2022 • Jiaxu Tian, Dapeng Zhi, Si Liu, Peixin Wang, Guy Katz, Min Zhang
The experimental results on a wide range of benchmarks show that DNNs trained using our approach exhibit comparable performance, while making the reachability analysis of the corresponding systems significantly tighter and more efficient than state-of-the-art white-box approaches.
no code implementations • 16 Nov 2022 • Avriti Chauhan, Mohammad Afzal, Hrishikesh Karmarkar, Yizhak Elboher, Kumar Madhukar, Guy Katz
Deep Neural Networks (DNNs) are everywhere, frequently performing complex tasks that were once unimaginable for machines to carry out.
no code implementations • 25 Oct 2022 • Shahaf Bassan, Guy Katz
We (1) suggest an efficient, verification-based method for finding minimal explanations, which constitute a provable approximation of the global, minimum explanation; (2) show how DNN verification can assist in calculating lower and upper bounds on the optimal explanation; (3) propose heuristics that significantly improve the scalability of the verification process; and (4) suggest the use of bundles, which allows us to arrive at more succinct and interpretable explanations.
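The verification-based search for a minimal explanation can be sketched with a greedy contraction loop: start from all features and drop any feature whose removal leaves the prediction invariant. The toy linear model and the exact `prediction_fixed` stand-in below replace what would, in the paper's setting, be a call to a DNN verifier.

```python
# Toy model: predicts class 1 iff x0 + x1 + 0*x2 > 1, inputs in [0, 1]^3.
weights = [1.0, 1.0, 0.0]

def prediction_fixed(instance, fixed):
    """Stand-in for a verification query: with the features in `fixed`
    pinned to their values and all other features free in [0, 1], is the
    prediction guaranteed to remain 1?  For this linear toy model we can
    answer exactly by checking the worst case (free features at 0)."""
    worst = sum(weights[i] * (instance[i] if i in fixed else 0.0)
                for i in range(len(instance)))
    return worst > 1.0

def minimal_explanation(instance):
    """Greedy contraction: drop any feature whose removal keeps the
    prediction invariant; what remains is a minimal (subset-irreducible)
    explanation."""
    expl = set(range(len(instance)))
    for i in range(len(instance)):
        if prediction_fixed(instance, expl - {i}):
            expl.remove(i)
    return expl

print(sorted(minimal_explanation([0.9, 0.8, 0.7])))  # [0, 1]
```

The irrelevant feature x2 is dropped, while x0 and x1 must both stay: fixing only one of them cannot guarantee the sum exceeds 1.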
1 code implementation • 23 Oct 2022 • Elazar Cohen, Yizhak Yisrael Elboher, Clark Barrett, Guy Katz
Recent attempts have demonstrated that abstraction-refinement approaches can play a significant role in mitigating these limitations; however, these approaches often produce networks that are so abstract that they become unsuitable for verification.
no code implementations • 16 Aug 2022 • Tom Zelazny, Haoze Wu, Clark Barrett, Guy Katz
A key component in many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can obtain for a specific input domain -- and the tighter these bounds, the more likely the verification is to succeed.
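One common (if loose) way to compute such neuron bounds is interval bound propagation: push the input box through each affine layer by splitting the weight matrix into its positive and negative parts, and use ReLU's monotonicity to carry bounds through activations. The toy 2-2-1 network below is invented for illustration; it is not a network from the paper.

```python
import numpy as np

def interval_bounds(W, b, lower, upper):
    """Propagate the input box [lower, upper] through one affine layer
    W x + b using interval arithmetic."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    lo = W_pos @ lower + W_neg @ upper + b
    hi = W_pos @ upper + W_neg @ lower + b
    return lo, hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so the bounds pass through directly.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-2-1 ReLU network, input box [0, 1] x [0, 1].
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

lo, hi = interval_bounds(W1, b1, np.zeros(2), np.ones(2))
lo, hi = relu_bounds(lo, hi)
lo, hi = interval_bounds(W2, b2, lo, hi)
print(lo, hi)  # [0.5] [3.0]
```

Tighter relaxations (e.g. the triangle relaxation, or symbolic bound propagation) shrink these intervals and, per the sentence above, make verification more likely to succeed.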
no code implementations • 5 Aug 2022 • Yizhak Yisrael Elboher, Elazar Cohen, Guy Katz
Recent work has proposed enhancing such verification techniques with abstraction-refinement capabilities, which have been shown to boost scalability: instead of verifying a large and complex network, the verifier constructs and then verifies a much smaller network, whose correctness implies the correctness of the original network.
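A stripped-down rendition of the neuron-merging idea behind such abstraction: two hidden ReLU neurons with positive outgoing weights are merged into one whose incoming weight is the maximum and whose outgoing weight is the sum, so that (for nonnegative inputs, the restricted case shown here) the abstract network over-approximates the concrete one. The weights are invented, and the full method handles the general case with additional neuron categories.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Two hidden ReLU neurons with positive outgoing weights c1, c2.
w1, w2 = 1.5, -0.5   # incoming weights
c1, c2 = 2.0, 1.0    # outgoing weights (both positive)

def concrete(x):
    return c1 * relu(w1 * x) + c2 * relu(w2 * x)

def abstract(x):
    # Merge the two neurons: max of incoming weights, sum of outgoing
    # weights.  Since relu is monotone and max(w1, w2) * x >= wi * x for
    # x >= 0, the merged output dominates the concrete one.
    return (c1 + c2) * relu(max(w1, w2) * x)

xs = np.linspace(0.0, 5.0, 101)
print(bool(np.all(abstract(xs) >= concrete(xs))))  # True
```

If the smaller abstract network satisfies an upper-bound property, so does the original; if not, refinement splits merged neurons back apart.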
no code implementations • 20 Jun 2022 • Davide Corsi, Raz Yerushalmi, Guy Amir, Alessandro Farinelli, David Harel, Guy Katz
Deep reinforcement learning (DRL) has achieved groundbreaking successes in a wide variety of robotic applications.
no code implementations • 1 Jun 2022 • Omri Isac, Clark Barrett, Min Zhang, Guy Katz
In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
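For the linear (Simplex) core of such a verifier, the classic easy-to-check witness of unsatisfiability is a Farkas vector: a nonnegative combination of the constraints that sums to the contradiction 0 <= negative. The sketch below shows only this generic certificate check, not Marabou's actual proof format.

```python
import numpy as np

def check_infeasibility_certificate(A, b, y, tol=1e-9):
    """Check a Farkas-style witness that A x <= b has no solution:
    y >= 0, y^T A = 0, and y^T b < 0 together imply infeasibility,
    since any x would give 0 = (y^T A) x <= y^T b < 0."""
    y = np.asarray(y, dtype=float)
    return bool(np.all(y >= -tol)
                and np.allclose(y @ A, 0.0, atol=tol)
                and y @ b < -tol)

# x <= 1 together with -x <= -2 (i.e. x >= 2) is clearly infeasible.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])
print(check_infeasibility_certificate(A, b, y=[1.0, 1.0]))  # True
```

The point of proof production is exactly this asymmetry: finding the witness is the verifier's hard work, while checking it is a few lines of arithmetic that a small trusted checker can perform.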
no code implementations • 26 May 2022 • Guy Amir, Davide Corsi, Raz Yerushalmi, Luca Marzari, David Harel, Alessandro Farinelli, Guy Katz
Our work is the first to establish the usefulness of DNN verification in identifying and filtering out suboptimal DRL policies in real-world robots, and we believe that the methods presented here are applicable to a wide range of systems that incorporate deep-learning-based agents.
3 code implementations • 19 Mar 2022 • Haoze Wu, Aleksandar Zeljić, Guy Katz, Clark Barrett
Given a convex relaxation which over-approximates the non-convex activation functions, we encode the violations of activation functions as a cost function and optimize it with respect to the convex relaxation.
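A crude sketch of the idea of measuring activation violations over a convex relaxation, using the standard "triangle" relaxation of a single ReLU. The bounds, the rejection-sampling "optimizer", and the numbers are invented stand-ins for the paper's actual optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

l, u = -1.0, 2.0  # pre-activation bounds with l < 0 < u

def in_triangle_relaxation(x, y):
    """Membership in the triangle convex relaxation of y = ReLU(x)."""
    return y >= 0 and y >= x and y <= u * (x - l) / (u - l)

def violation(x, y):
    # Distance of (x, y) from the exact, non-convex ReLU constraint.
    return abs(y - max(x, 0.0))

# Rejection-sample points from the relaxation and keep the lowest-cost
# one, crudely mimicking an optimizer that drives the violation toward 0.
best = min(
    (violation(x, y), x, y)
    for x, y in zip(rng.uniform(l, u, 5000), rng.uniform(0, u, 5000))
    if in_triangle_relaxation(x, y)
)
print(best[0])  # small: the relaxation contains points on the exact graph
```

When the minimized cost reaches zero, the optimizer has found an assignment that satisfies the exact activation semantics, not just the relaxation.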
no code implementations • 9 Feb 2022 • Raz Yerushalmi, Guy Amir, Achiya Elyasaf, David Harel, Guy Katz, Assaf Marron
In this work-in-progress report, we propose a technique for enhancing the reinforcement learning training process (specifically, its reward calculation), in a way that allows human engineers to directly contribute their expert knowledge, making the agent under training more likely to comply with various relevant constraints.
no code implementations • 8 Feb 2022 • Guy Amir, Tom Zelazny, Guy Katz, Michael Schapira
Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks.
no code implementations • 6 Jan 2022 • Matan Ostrovsky, Clark Barrett, Guy Katz
Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others.
no code implementations • 21 Oct 2021 • Natan Levy, Guy Katz
In this paper, we present a new statistical method, called Robustness Measurement and Assessment (RoMA), which can measure the expected robustness of a neural network model.
no code implementations • 18 Oct 2021 • Idan Refaeli, Guy Katz
The novel repair procedure implemented in 3M-DNN computes a modification to the network's weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine.
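The "minimize the change via black-box verifier calls" loop can be sketched as a binary search over the permitted modification magnitude. The one-weight model, the bound, and `repairable_within` are invented stand-ins for queries to a real DNN verification engine.

```python
def repairable_within(delta, w=3.0, bound=2.0):
    """Stand-in for a black-box verification query: can some weight
    change of magnitude <= delta make f(x) = w * x satisfy f(1) <= bound?
    For this toy model that holds iff w - delta <= bound."""
    return w - delta <= bound

def minimal_repair_magnitude(lo=0.0, hi=10.0, tol=1e-6):
    """Binary search for the smallest modification magnitude the verifier
    accepts, mirroring a sequence of black-box verification calls."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if repairable_within(mid):
            hi = mid  # a repair of this size exists; try smaller
        else:
            lo = mid  # too small; the property still fails
    return hi

print(round(minimal_repair_magnitude(), 3))  # 1.0
```

Here the minimal change is shrinking the weight from 3 to 2, and the search converges to that magnitude to within the tolerance.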
1 code implementation • 28 May 2021 • Ori Lahav, Guy Katz
Our approach can produce DNNs that are significantly smaller than the original, rendering them suitable for deployment on additional kinds of systems, and even more amenable to subsequent formal verification.
1 code implementation • 25 May 2021 • Guy Amir, Michael Schapira, Guy Katz
Deep neural networks (DNNs) have gained significant popularity in recent years, becoming the state of the art in a variety of domains.
1 code implementation • 3 Apr 2021 • Marco Casadio, Ekaterina Komendantskaya, Matthew L. Daggitt, Wen Kokke, Guy Katz, Guy Amir, Idan Refaeli
Neural networks are very successful at detecting patterns in noisy data, and have become the technology of choice in many fields.
1 code implementation • 5 Nov 2020 • Guy Amir, Haoze Wu, Clark Barrett, Guy Katz
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
no code implementations • 7 Oct 2020 • Christopher A. Strong, Haoze Wu, Aleksandar Zeljić, Kyle D. Julian, Guy Katz, Clark Barrett, Mykel J. Kochenderfer
However, individual "yes or no" questions cannot answer qualitative questions such as "what is the largest error within these bounds"; the answers to these lie in the domain of optimization.
no code implementations • 17 Apr 2020 • Haoze Wu, Alex Ozdemir, Aleksandar Zeljić, Ahmed Irfan, Kyle Julian, Divya Gopinath, Sadjad Fouladi, Guy Katz, Corina Pasareanu, Clark Barrett
Inspired by recent successes with parallel optimization techniques for solving Boolean satisfiability, we investigate a set of strategies and heuristics that aim to leverage parallel computing to improve the scalability of neural network verification.
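The basic divide-and-conquer strategy behind such parallelization can be sketched as: split the input domain into sub-domains and discharge the resulting queries concurrently. The toy property and exact per-interval check below stand in for real verifier calls.

```python
from concurrent.futures import ThreadPoolExecutor

def verify_subdomain(interval):
    """Stand-in for one verification query: does f(x) = x^3 stay below 10
    on this interval?  Since x^3 is monotone increasing, checking the
    endpoints is exact here."""
    lo, hi = interval
    return max(lo**3, hi**3) < 10.0

def split_and_verify(lo, hi, pieces=8):
    """Split [lo, hi] into sub-intervals and run the queries in parallel;
    the property holds on the whole domain iff every piece passes."""
    step = (hi - lo) / pieces
    subs = [(lo + i * step, lo + (i + 1) * step) for i in range(pieces)]
    with ThreadPoolExecutor() as pool:
        return all(pool.map(verify_subdomain, subs))

print(split_and_verify(-2.0, 2.0))  # True:  x^3 < 10 on [-2, 2]
print(split_and_verify(-3.0, 3.0))  # False: violated near x = 3
```

For real DNN queries the splits are not equally hard, which is why the strategies and heuristics mentioned above (how to split, and how to balance work) matter.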
1 code implementation • 6 Apr 2020 • Yuval Jacoby, Clark Barrett, Guy Katz
Deep neural networks are revolutionizing the way complex systems are developed.
1 code implementation • 31 Oct 2019 • Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz
In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network - thus making it more amenable to verification.
no code implementations • 25 Oct 2019 • Sumathi Gokulanathan, Alexander Feldsher, Adi Malca, Clark Barrett, Guy Katz
Deep neural network (DNN) verification is an emerging field, with diverse verification engines quickly becoming available.
no code implementations • 18 Jan 2018 • Lindsey Kuper, Guy Katz, Justin Gottschlich, Kyle Julian, Clark Barrett, Mykel Kochenderfer
The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability.
no code implementations • ICLR 2018 • Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill
We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples those attacks produce to the ground truths, and of defense techniques, by computing the distance to the ground truths before and after a defense is applied and measuring the improvement.
no code implementations • 2 Oct 2017 • Divya Gopinath, Guy Katz, Corina S. Pasareanu, Clark Barrett
We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations.
1 code implementation • 29 Sep 2017 • Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill
Using this approach, we demonstrate that one of the recent ICLR defense proposals, adversarial retraining, provably succeeds at increasing the distortion required to construct adversarial examples by a factor of 4.2.
no code implementations • 8 Sep 2017 • Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer
Autonomous vehicles are highly complex systems, required to function reliably in a wide variety of situations.
9 code implementations • 3 Feb 2017 • Guy Katz, Clark Barrett, David Dill, Kyle Julian, Mykel Kochenderfer
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems.