1 code implementation • 13 Mar 2023 • Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang
As a result, when running OPT-175B on a single 16GB GPU, FlexGen achieves significantly higher throughput than state-of-the-art offloading systems, reaching a generation throughput of 1 token/s for the first time, with an effective batch size of 144.
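For intuition on how batch size relates to the reported throughput, a minimal accounting sketch (the batch size 144 is from the abstract; the tokens-per-sequence and wall-clock values below are hypothetical placeholders, not FlexGen measurements):

```python
# Minimal sketch: generation throughput measured as total tokens generated
# across the batch divided by wall-clock time. Values other than the batch
# size are illustrative placeholders.

def generation_throughput(effective_batch_size: int,
                          tokens_per_sequence: int,
                          wall_clock_seconds: float) -> float:
    """Tokens generated per second, aggregated over the whole batch."""
    return effective_batch_size * tokens_per_sequence / wall_clock_seconds

# With an effective batch size of 144, generating 32 tokens per sequence
# in ~4608 seconds corresponds to 1 token/s.
print(generation_throughput(144, 32, 4608.0))  # -> 1.0
```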
1 code implementation • 3 Mar 2023 • Dennis Wei, Haoze Wu, Min Wu, Pin-Yu Chen, Clark Barrett, Eitan Farchi
The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well.
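For reference, softmax maps a logit vector z to exp(z_i) / sum_j exp(z_j); a minimal numerically stable implementation:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Numerically stable softmax: subtracting max(z) leaves the result
    unchanged (the shift cancels in the ratio) but avoids overflow in exp."""
    shifted = z - np.max(z)
    e = np.exp(shifted)
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # components are positive and sum to 1
```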
no code implementations • 2 Dec 2022 • Min Wu, Haoze Wu, Clark Barrett
We present VeriX, a system for producing optimal robust explanations and generating counterfactuals along decision boundaries of machine learning models.
no code implementations • 23 Oct 2022 • Elazar Cohen, Yizhak Yisrael Elboher, Clark Barrett, Guy Katz
Recent attempts have demonstrated that abstraction-refinement approaches can play a significant role in mitigating these limitations; however, these approaches often produce networks that are so abstract that they become unsuitable for verification.
no code implementations • 16 Aug 2022 • Tom Zelazny, Haoze Wu, Clark Barrett, Guy Katz
A key component in many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can obtain for a specific input domain -- and the tighter these bounds, the more likely the verification is to succeed.
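A minimal sketch of one standard way such bounds are computed, interval bound propagation through a tiny two-layer ReLU network (the weights and input domain below are made-up toy values, not from the paper):

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Bounds of W @ x + b when x lies in [lo, hi], split by weight sign:
    positive weights take the matching bound, negative weights the opposite."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so it can be applied to the bounds directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.zeros(1)

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # toy input domain
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # sound, possibly loose, bounds on the network output
```

Tighter bounding methods shrink this interval, which is exactly what makes downstream verification more likely to succeed.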
no code implementations • 8 Jun 2022 • Haoze Wu, Teruhiro Tagomori, Alexander Robey, Fengjun Yang, Nikolai Matni, George Pappas, Hamed Hassani, Corina Pasareanu, Clark Barrett
We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts.
no code implementations • 1 Jun 2022 • Omri Isac, Clark Barrett, Min Zhang, Guy Katz
In this work, we present a novel mechanism for enhancing Simplex-based DNN verifiers with proof production capabilities: the generation of an easy-to-check witness of unsatisfiability, which attests to the absence of errors.
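A minimal sketch of the kind of easy-to-check witness involved: for a linear system A x <= b, a Farkas certificate is a nonnegative vector y with yᵀA = 0 and yᵀb < 0, and checking it needs only matrix arithmetic rather than re-running the solver (the system and certificate below are toy examples, not the paper's exact proof format):

```python
import numpy as np

def check_farkas_certificate(A, b, y, tol=1e-9) -> bool:
    """Verify y >= 0, y^T A == 0, and y^T b < 0, which by Farkas' lemma
    proves that A x <= b has no solution."""
    return (np.all(y >= -tol)
            and np.allclose(y @ A, 0.0, atol=tol)
            and y @ b < -tol)

# Toy infeasible system: x <= -1 and -x <= 0 (i.e., x >= 0).
A = np.array([[1.0], [-1.0]])
b = np.array([-1.0, 0.0])
y = np.array([1.0, 1.0])  # summing the rows derives the contradiction 0 <= -1
print(check_farkas_certificate(A, b, y))  # -> True
```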
1 code implementation • 19 Mar 2022 • Haoze Wu, Aleksandar Zeljić, Guy Katz, Clark Barrett
Given a convex relaxation which over-approximates the non-convex activation functions, we encode the violations of activation functions as a cost function and optimize it with respect to the convex relaxation.
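A minimal sketch of the idea for ReLU: given a candidate assignment from the convex relaxation, measure how far each (input, output) pair deviates from the exact constraint y = max(x, 0) and use the total deviation as the cost to drive toward zero (toy values; the paper's actual encoding and optimizer may differ):

```python
import numpy as np

def relu_violation_cost(x: np.ndarray, y: np.ndarray) -> float:
    """Total deviation of the pairs (x_i, y_i) from the exact ReLU
    constraint y_i = max(x_i, 0); zero iff every activation is satisfied."""
    return float(np.abs(y - np.maximum(x, 0.0)).sum())

# Candidate point from a convex relaxation (toy values): the second pair
# violates ReLU, since max(-1, 0) = 0 but the relaxation assigned y = 0.5.
x = np.array([2.0, -1.0])
y = np.array([2.0, 0.5])
print(relu_violation_cost(x, y))  # -> 0.5
```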
1 code implementation • 7 Mar 2022 • Haoze Wu, Clark Barrett, Mahmood Sharif, Nina Narodytska, Gagandeep Singh
Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics.
no code implementations • 6 Jan 2022 • Matan Ostrovsky, Clark Barrett, Guy Katz
Convolutional neural networks have gained vast popularity due to their excellent performance in computer vision, image processing, and other fields.
no code implementations • 2 Mar 2021 • Colin Paterson, Haoze Wu, John Grese, Radu Calinescu, Corina S. Pasareanu, Clark Barrett
We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations such as blur, haze, and changes in image contrast.
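As an illustration of a contextual perturbation, a minimal contrast change on an image array (a generic formulation for illustration only, not necessarily DeepCert's exact parameterization):

```python
import numpy as np

def adjust_contrast(image: np.ndarray, epsilon: float) -> np.ndarray:
    """Scale pixel deviations from mid-gray by (1 - epsilon): epsilon = 0
    leaves the image unchanged, epsilon = 1 flattens it to uniform gray."""
    return np.clip(0.5 + (1.0 - epsilon) * (image - 0.5), 0.0, 1.0)

img = np.random.rand(28, 28)           # toy grayscale image in [0, 1]
perturbed = adjust_contrast(img, 0.3)  # verification would sweep epsilon
```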
1 code implementation • 5 Nov 2020 • Guy Amir, Haoze Wu, Clark Barrett, Guy Katz
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
no code implementations • 7 Oct 2020 • Christopher A. Strong, Haoze Wu, Aleksandar Zeljić, Kyle D. Julian, Guy Katz, Clark Barrett, Mykel J. Kochenderfer
However, individual "yes or no" questions cannot answer qualitative questions such as "what is the largest error within these bounds"; the answers to these lie in the domain of optimization.
no code implementations • 17 Apr 2020 • Haoze Wu, Alex Ozdemir, Aleksandar Zeljić, Ahmed Irfan, Kyle Julian, Divya Gopinath, Sadjad Fouladi, Guy Katz, Corina Pasareanu, Clark Barrett
Inspired by recent successes with parallel optimization techniques for solving Boolean satisfiability, we investigate a set of strategies and heuristics that aim to leverage parallel computing to improve the scalability of neural network verification.
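A minimal sketch of one such strategy, input-domain splitting: partition the input box into sub-boxes, check each in parallel, and declare the property verified only if every sub-query succeeds. The "verifier" below is a toy interval check standing in for a real verification call:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def verify_subdomain(bounds):
    """Toy stand-in for a verifier query: checks the property f(x) < 3 for
    f(x) = x0 + x1 on a sub-box, which is sound for this monotone function."""
    (lo0, hi0), (lo1, hi1) = bounds
    return hi0 + hi1 < 3.0

def split(lo, hi, parts):
    edges = np.linspace(lo, hi, parts + 1)
    return list(zip(edges[:-1], edges[1:]))

if __name__ == "__main__":
    subdomains = [(d0, d1)
                  for d0 in split(0.0, 1.0, 4)
                  for d1 in split(0.0, 1.0, 4)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(verify_subdomain, subdomains))
    print(all(results))  # the property holds iff every sub-query succeeds
```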
1 code implementation • 6 Apr 2020 • Yuval Jacoby, Clark Barrett, Guy Katz
Deep neural networks are revolutionizing the way complex systems are developed.
1 code implementation • NeurIPS 2019 • Jiaxuan You, Haoze Wu, Clark Barrett, Raghuram Ramanujan, Jure Leskovec
The Boolean Satisfiability (SAT) problem is the canonical NP-complete problem and is fundamental to computer science, with a wide array of applications in planning, verification, and theorem proving.
no code implementations • 25 Oct 2019 • Sumathi Gokulanathan, Alexander Feldsher, Adi Malca, Clark Barrett, Guy Katz
Deep neural network (DNN) verification is an emerging field, with diverse verification engines quickly becoming available.
2 code implementations • 15 Mar 2019 • Changliu Liu, Tomer Arnon, Christopher Lazarus, Clark Barrett, Mykel J. Kochenderfer
Deep neural networks are widely used for nonlinear function approximation with applications ranging from computer vision to control.
no code implementations • 18 Jan 2018 • Lindsey Kuper, Guy Katz, Justin Gottschlich, Kyle Julian, Clark Barrett, Mykel Kochenderfer
The increasing use of deep neural networks for safety-critical applications, such as autonomous driving and flight control, raises concerns about their safety and reliability.
no code implementations • ICLR 2018 • Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill
We demonstrate how ground truths can serve to assess the effectiveness of attack techniques, by comparing the adversarial examples produced by those attacks to the ground truths, and of defense techniques, by computing the distance to the ground truths before and after the defense is applied and measuring the improvement.
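A minimal sketch of the measurement described (all arrays are toy values; "ground truth" here means a provably minimally distant adversarial example):

```python
import numpy as np

def linf_distance(x: np.ndarray, x_adv: np.ndarray) -> float:
    """L-infinity distance between an input and an adversarial example."""
    return float(np.abs(x - x_adv).max())

x            = np.array([0.2, 0.8, 0.5])
ground_truth = np.array([0.2, 0.9, 0.5])   # provably closest adversarial input
attack_adv   = np.array([0.25, 0.95, 0.5]) # what a heuristic attack found

# Attack quality: how close the attack gets to the true minimal distortion
# (a ratio of 1.0 would mean the attack is optimal).
print(linf_distance(x, attack_adv) / linf_distance(x, ground_truth))

# Defense quality would instead compare the ground-truth distance before and
# after the defense is applied; a larger ratio means a stronger defense.
```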
no code implementations • 2 Oct 2017 • Divya Gopinath, Guy Katz, Corina S. Pasareanu, Clark Barrett
We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations.
1 code implementation • 29 Sep 2017 • Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill
Using this approach, we demonstrate that one of the recent ICLR defense proposals, adversarial retraining, provably succeeds at increasing the distortion required to construct adversarial examples by a factor of 4.2.
no code implementations • 8 Sep 2017 • Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, Mykel J. Kochenderfer
Autonomous vehicles are highly complex systems, required to function reliably in a wide variety of situations.
7 code implementations • 3 Feb 2017 • Guy Katz, Clark Barrett, David Dill, Kyle Julian, Mykel Kochenderfer
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems.