Search Results for author: Guy Katz

Found 17 papers, 7 papers with code

An Abstraction-Refinement Approach to Verifying Convolutional Neural Networks

no code implementations • 6 Jan 2022 • Matan Ostrovsky, Clark Barrett, Guy Katz

Convolutional neural networks have gained vast popularity due to their excellent performance in the fields of computer vision, image processing, and others.

RoMA: a Method for Neural Network Robustness Measurement and Assessment

no code implementations • 21 Oct 2021 • Natan Levy, Guy Katz

In this paper, we present a new statistical method, called Robustness Measurement and Assessment (RoMA), which can measure the expected robustness of a neural network model.
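As a rough illustration of statistical robustness measurement (not RoMA's exact procedure), the sketch below estimates, by plain Monte-Carlo sampling, how often a model keeps its correct prediction under random perturbations; the Gaussian noise model, sample count, and the generic `model` callable are illustrative assumptions.

```python
import numpy as np

def estimate_robustness(model, x, y_true, sigma=0.05, n_samples=1000, seed=0):
    """Estimate the probability that `model` keeps predicting `y_true`
    when `x` is perturbed by Gaussian noise of scale `sigma`.

    `model` is any callable mapping an input array to class scores;
    the perturbation distribution and sample count are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        x_pert = x + rng.normal(scale=sigma, size=x.shape)
        if int(np.argmax(model(x_pert))) == y_true:
            hits += 1
    return hits / n_samples
```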

Tasks: Protein Folding

Minimal Multi-Layer Modifications of Deep Neural Networks

no code implementations • 18 Oct 2021 • Idan Refaeli, Guy Katz

The novel repair procedure implemented in 3M-DNN computes a modification to the network's weights that corrects its behavior, and attempts to minimize this change via a sequence of calls to a backend, black-box DNN verification engine.
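A heavily simplified sketch of that search loop, assuming hypothetical `apply_delta` and `verifier` hooks rather than the actual 3M-DNN interface: binary-search the size of the allowed weight change, keeping the smallest change the black-box verifier accepts.

```python
def minimal_repair(weights, apply_delta, verifier, max_iters=20):
    """Binary-search the magnitude of a weight modification until the
    black-box verifier accepts the repaired network.

    apply_delta(weights, eps) -> candidate weights whose change is bounded by eps
    verifier(weights)         -> True iff the repaired network behaves correctly
    Both hooks are hypothetical placeholders, not the paper's API.
    """
    lo, hi, best = 0.0, 1.0, None
    for _ in range(max_iters):
        eps = (lo + hi) / 2
        candidate = apply_delta(weights, eps)
        if verifier(candidate):
            best, hi = candidate, eps   # repair works; try a smaller change
        else:
            lo = eps                    # budget too small; allow a larger change
    return best                          # None if no repair was found within the budget
```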

Tasks: Autonomous Driving, Medical Diagnosis

Pruning and Slicing Neural Networks using Formal Verification

1 code implementation • 28 May 2021 • Ori Lahav, Guy Katz

Our approach can produce DNNs that are significantly smaller than the original, rendering them suitable for deployment on additional kinds of systems, and even more amenable to subsequent formal verification.

Towards Scalable Verification of Deep Reinforcement Learning

1 code implementation • 25 May 2021 • Guy Amir, Michael Schapira, Guy Katz

Deep neural networks (DNNs) have gained significant popularity in recent years, becoming the state of the art in a variety of domains.

An SMT-Based Approach for Verifying Binarized Neural Networks

1 code implementation • 5 Nov 2020 • Guy Amir, Haoze Wu, Clark Barrett, Guy Katz

One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
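For intuition only: a single binarized (sign-activation) neuron can be encoded exactly as an SMT constraint, as in the sketch below using the Z3 Python bindings; Z3, the weights, and the input box are illustrative stand-ins, not the solver or encoding used in the paper.

```python
from z3 import Real, If, Solver, And, sat

# One binarized neuron: y = sign(w . x + b), with sign(t) = +1 if t >= 0 else -1.
# Weights, bias, and the input box are illustrative values.
w, b = [1.0, -2.0], 0.5
x = [Real(f"x{i}") for i in range(2)]
pre = sum(wi * xi for wi, xi in zip(w, x)) + b
y = If(pre >= 0, 1, -1)                                   # exact encoding of the sign activation

s = Solver()
s.add(And(x[0] >= 0, x[0] <= 1, x[1] >= 0, x[1] <= 1))   # input bounds
s.add(y == -1)                                            # query: can the neuron output -1?
if s.check() == sat:
    print("neuron can output -1, e.g. at", s.model())
else:
    print("neuron always outputs +1 on this input box")
```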

Global Optimization of Objective Functions Represented by ReLU Networks

no code implementations • 7 Oct 2020 • Christopher A. Strong, Haoze Wu, Aleksandar Zeljić, Kyle D. Julian, Guy Katz, Clark Barrett, Mykel J. Kochenderfer

However, individual "yes or no" questions cannot answer quantitative questions such as "what is the largest error within these bounds"; the answers to these lie in the domain of optimization.
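One way to see the connection (a generic reduction, not the paper's algorithm): repeated "yes or no" queries of the form "can the output exceed c?" let you bracket the maximum by binary search, as in this sketch built around a hypothetical `exists_output_above` hook.

```python
def maximize_via_queries(exists_output_above, lo, hi, tol=1e-3):
    """Bracket the maximum value a network's output can attain on a bounded
    input region, using only satisfiability-style queries.

    exists_output_above(c) -> True iff some admissible input drives the
    output above c (a hypothetical wrapper around a verifier call).
    Returns (lo, hi) with the true maximum inside the final bracket.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if exists_output_above(mid):
            lo = mid    # the maximum lies above mid
        else:
            hi = mid    # no input exceeds mid, so the maximum lies below it
    return lo, hi
```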

Parallelization Techniques for Verifying Neural Networks

no code implementations • 17 Apr 2020 • Haoze Wu, Alex Ozdemir, Aleksandar Zeljić, Ahmed Irfan, Kyle Julian, Divya Gopinath, Sadjad Fouladi, Guy Katz, Corina Pasareanu, Clark Barrett

Inspired by recent successes with parallel optimization techniques for solving Boolean satisfiability, we investigate a set of strategies and heuristics that aim to leverage parallel computing to improve the scalability of neural network verification.
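One such strategy is input-domain splitting: partition the input box into sub-boxes and verify each independently. The sketch below illustrates the idea with a hypothetical `check_box` hook standing in for an external verifier call; the splitting rule and worker counts are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def split_box(lo, hi, dim, parts):
    """Split the axis-aligned input box [lo, hi] into `parts` sub-boxes along `dim`."""
    step = (hi[dim] - lo[dim]) / parts
    for k in range(parts):
        sub_lo, sub_hi = list(lo), list(hi)
        sub_lo[dim] = lo[dim] + k * step
        sub_hi[dim] = lo[dim] + (k + 1) * step
        yield sub_lo, sub_hi

def verify_in_parallel(check_box, lo, hi, dim=0, parts=8, workers=4):
    """Dispatch one verification sub-query per sub-box; the property holds on
    the whole box iff it holds on every sub-box.  check_box(lo, hi) is a
    hypothetical hook around a verifier (e.g. a separate solver process)."""
    boxes = list(split_box(lo, hi, dim, parts))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(check_box, b_lo, b_hi) for b_lo, b_hi in boxes]
        return all(f.result() for f in futures)
```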

Verifying Recurrent Neural Networks using Invariant Inference

1 code implementation • 6 Apr 2020 • Yuval Jacoby, Clark Barrett, Guy Katz

Deep neural networks are revolutionizing the way complex systems are developed.

An Abstraction-Based Framework for Neural Network Verification

1 code implementation • 31 Oct 2019 • Yizhak Yisrael Elboher, Justin Gottschlich, Guy Katz

In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network - thus making it more amenable to verification.
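Because the smaller network over-approximates the original, a "safe" verdict carries over directly, while a counterexample may be spurious and trigger refinement. Here is a generic sketch of that abstraction-refinement loop; all four hooks are hypothetical placeholders, not the paper's actual API.

```python
def verify_with_abstraction(network, prop, abstract, verify, is_real_counterexample, refine):
    """Generic abstraction-refinement loop (hooks are hypothetical placeholders).

    abstract(network)                        -> smaller, over-approximating network
    verify(net, prop)                        -> ("safe", None) or ("counterexample", x)
    is_real_counterexample(network, x, prop) -> True iff x violates prop on the original
    refine(small, network, x)                -> less abstract network ruling out the spurious x
    """
    small = abstract(network)
    while True:
        verdict, x = verify(small, prop)
        if verdict == "safe":
            return "safe", None              # over-approximation: the original is safe too
        if is_real_counterexample(network, x, prop):
            return "unsafe", x               # the counterexample is genuine
        small = refine(small, network, x)    # spurious: refine the abstraction and retry
```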

Simplifying Neural Networks using Formal Verification

no code implementations • 25 Oct 2019 • Sumathi Gokulanathan, Alexander Feldsher, Adi Malca, Clark Barrett, Guy Katz

Deep neural network (DNN) verification is an emerging field, with diverse verification engines quickly becoming available.

Toward Scalable Verification for Safety-Critical Deep Networks

no code implementations • 18 Jan 2018 • Lindsey Kuper, Guy Katz, Justin Gottschlich, Kyle Julian, Clark Barrett, Mykel Kochenderfer

The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability.

Tasks: Autonomous Driving

Ground-Truth Adversarial Examples

no code implementations • ICLR 2018 • Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill

We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced by those attacks to the ground truths; and also of defense techniques, by computing the distance to the ground truths before and after the defense is applied, and measuring the improvement.
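A small numeric sketch of those two measurements; the L-infinity norm is an illustrative choice of distance, and the inputs are plain arrays rather than the paper's setup.

```python
import numpy as np

def attack_gap(x, x_attack, x_ground_truth):
    """Distance by which an attack's adversarial example overshoots the
    provably minimal (ground-truth) one; 0 means the attack is optimal."""
    d_attack = np.max(np.abs(x_attack - x))
    d_truth = np.max(np.abs(x_ground_truth - x))
    return d_attack - d_truth

def defense_improvement(x, gt_before_defense, gt_after_defense):
    """Increase in the minimal adversarial distortion once a defense is applied."""
    return np.max(np.abs(gt_after_defense - x)) - np.max(np.abs(gt_before_defense - x))
```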

DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks

no code implementations • 2 Oct 2017 • Divya Gopinath, Guy Katz, Corina S. Pasareanu, Clark Barrett

We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations.
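Roughly, the approach clusters labeled data and proposes one candidate region per cluster, which a verifier then checks for label-invariance. The sketch below uses scikit-learn's KMeans and a max-distance radius purely as illustrative choices, not the exact DeepSafe procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def candidate_safe_regions(X, y, clusters_per_label=10, seed=0):
    """Propose (label, center, radius) candidate regions from labeled data.
    Each region would subsequently be checked for robustness by a verifier;
    the clustering method and radius rule are illustrative assumptions."""
    regions = []
    for label in np.unique(y):
        pts = X[y == label]
        k = min(clusters_per_label, len(pts))
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pts)
        for c in range(k):
            members = pts[km.labels_ == c]
            center = km.cluster_centers_[c]
            radius = float(np.max(np.linalg.norm(members - center, axis=1)))
            regions.append((label, center, radius))
    return regions
```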

Tasks: Adversarial Robustness, Machine Translation (+1 more)

Provably Minimally-Distorted Adversarial Examples

1 code implementation • 29 Sep 2017 • Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill

Using this approach, we demonstrate that one of the recent ICLR defense proposals, adversarial retraining, provably succeeds at increasing the distortion required to construct adversarial examples by a factor of 4.2.

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

5 code implementations • 3 Feb 2017 • Guy Katz, Clark Barrett, David Dill, Kyle Julian, Mykel Kochenderfer

Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems.
