4 code implementations • 4 Mar 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas
Certifying the safety or robustness of neural networks against input uncertainties and adversarial attacks is an emerging challenge in the area of safe machine learning and control.
1 code implementation • NeurIPS 2019 • Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas
The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation).
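The scalar-multiplier special case of such an SDP can be sketched in a few lines. For a one-hidden-layer ReLU network f(x) = W2·relu(W1·x) with the multiplier restricted to T = λI, the semidefinite condition reduces via a Schur complement to an eigenvalue inequality that can be swept over λ numerically. A minimal numpy sketch of that special case, not the full method (which handles deep networks and richer multiplier classes):

```python
import numpy as np

def lipsdp_layer_bound(W1, W2, num_grid=200):
    """Upper-bound the Lipschitz constant of f(x) = W2 @ relu(W1 @ x)
    using the scalar-multiplier (T = lam * I) special case of the SDP.

    For activations slope-restricted in [0, 1], the semidefinite
    condition reduces via a Schur complement to: whenever
    S = 2*lam*I - W2.T @ W2 is positive definite,
        L^2 <= lam^2 * lambda_max(W1.T @ inv(S) @ W1).
    We sweep lam over a grid and keep the best certified bound.
    """
    n_hidden = W1.shape[0]
    s2 = np.linalg.norm(W2, 2) ** 2      # need 2*lam > lambda_max(W2'W2)
    best = np.inf
    for lam in np.linspace(0.51 * s2, 5.0 * s2, num_grid):
        S = 2.0 * lam * np.eye(n_hidden) - W2.T @ W2
        if np.linalg.eigvalsh(S).min() <= 1e-9:
            continue
        rho = lam**2 * np.linalg.eigvalsh(W1.T @ np.linalg.solve(S, W1)).max()
        best = min(best, np.sqrt(rho))
    return best
```

Taking λ near the squared spectral norm of W2 already certifies a bound no worse than the naive product of layer norms, so the sweep can only improve on it.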
1 code implementation • 9 Oct 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas
In this context, we discuss two relevant problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which given a confidence ellipsoid for the input of the neural network, our goal is to compute a confidence ellipsoid for the output.
1 code implementation • 16 Apr 2020 • Haimin Hu, Mahyar Fazlyab, Manfred Morari, George J. Pappas
There has been an increasing interest in using neural networks in closed-loop control systems to improve performance and reduce computational costs for online implementation.

no code implementations • L4DC 2020 • Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas
By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly.
1 code implementation • ICLR 2021 • Priya L. Donti, Melrose Roderick, Mahyar Fazlyab, J. Zico Kolter
When designing controllers for safety-critical systems, practitioners often face a challenging tradeoff between robustness and performance.
no code implementations • 10 Dec 2020 • Navid Hashemi, Justin Ruths, Mahyar Fazlyab
Abstracting neural networks by the constraints they impose on their inputs and outputs can be very useful both in analyzing neural network classifiers and in deriving optimization-based algorithms for certifying the stability and robustness of feedback systems involving neural networks.
no code implementations • 22 Dec 2020 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado
By designing the learner and the verifier according to the analytic center cutting-plane method from convex optimization, we show that when the set of Lyapunov functions is full-dimensional in the parameter space, our method finds a Lyapunov function in a finite number of steps.
no code implementations • 22 Mar 2021 • Navid Hashemi, Mahyar Fazlyab, Justin Ruths
We exploit recent results in quantifying the robustness of neural networks to input variations to construct and tune a model-based anomaly detector, where the data-driven estimator model is provided by an autoregressive neural network.
no code implementations • 29 May 2021 • Youbang Sun, Mahyar Fazlyab, Shahin Shahrampour
Our numerical experiments on strongly convex problems indicate that our framework certifies superior convergence rates compared to the existing rates for distributed GD.
1 code implementation • 16 Jun 2021 • Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative.
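The cheapest such over-approximation, which tighter convex relaxations improve upon, is plain interval bound propagation: push an input box through the layers, splitting each weight matrix into its positive and negative parts. A minimal sketch of that baseline (illustrative, not the relaxation proposed in the paper):

```python
import numpy as np

def interval_bounds(layers, lo, hi):
    """Propagate an input box [lo, hi] through affine + ReLU layers,
    returning elementwise bounds on the network output.

    layers: list of (W, b) tuples; ReLU is applied between layers.
    Each affine map is split into positive and negative parts so that
    the lower bound uses lo where weights help and hi where they hurt.
    """
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi
```

Every network output over the box is guaranteed to lie inside the returned interval, but the interval can be very loose for deep networks, which is what motivates the tighter (and more expensive) convex relaxations.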
no code implementations • 2 Oct 2021 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado
Estimating the region of attraction (ROA) of general nonlinear autonomous systems remains a challenging problem and requires a case-by-case analysis.
no code implementations • 18 Jul 2022 • Liangzu Peng, Mahyar Fazlyab, René Vidal
To induce robustness against outliers for rotation search, prior work considers truncated least-squares (TLS), which is a non-convex optimization problem, and its semidefinite relaxation (SDR) as a tractable alternative.
no code implementations • 23 Sep 2022 • Shaoru Chen, Victor M. Preciado, Mahyar Fazlyab
The growing adoption of neural networks (NNs) in robotic systems has driven the development of safety verification methods for neural network dynamical systems (NNDS).
1 code implementation • 1 Nov 2022 • Taha Entesari, Sina Sharifi, Mahyar Fazlyab
We propose a novel Branch-and-Bound method for reachability analysis of neural networks in both open-loop and closed-loop settings.
1 code implementation • 14 Dec 2022 • Taha Entesari, Mahyar Fazlyab
Over-approximating the reachable sets of dynamical systems is a fundamental problem in safety verification and robust control synthesis.
no code implementations • 27 Jan 2023 • Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Ioannis G. Kevrekidis, Mahyar Fazlyab
Neural networks are known to be vulnerable to adversarial attacks, which are small, imperceptible perturbations that can significantly alter the network's output.
1 code implementation • NeurIPS 2023 • Mahyar Fazlyab, Taha Entesari, Aniket Roy, Rama Chellappa
As a result, there has been an increasing interest in developing training procedures that can directly manipulate the decision boundary in the input space.
1 code implementation • 11 Jan 2024 • Shaoru Chen, Mahyar Fazlyab
Control Barrier Functions (CBFs) provide an elegant framework for designing safety filters for nonlinear control systems by constraining their trajectories to an invariant subset of a prespecified safe set.
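For a control-affine system with barrier function h, the standard CBF safety filter solves min_u ||u - u_nom||^2 subject to L_f h + L_g h u + alpha*h >= 0; with a single constraint this quadratic program has a closed-form solution. A self-contained sketch of that standard filter, with an illustrative 1-D example that is not taken from the paper:

```python
import numpy as np

def cbf_qp_filter(u_nom, Lfh, Lgh, h, alpha=1.0):
    """Closed-form solution of the single-constraint CBF safety filter
        min_u ||u - u_nom||^2   s.t.   Lfh + Lgh @ u + alpha * h >= 0,
    i.e. the minimal correction of u_nom that keeps {h >= 0} invariant.
    """
    a = np.atleast_1d(Lgh).astype(float)
    u_nom = np.atleast_1d(u_nom).astype(float)
    slack = Lfh + a @ u_nom + alpha * h
    if slack >= 0:                       # nominal input is already safe
        return u_nom
    # project u_nom onto the half-space {u : Lfh + a @ u + alpha*h >= 0}
    return u_nom - slack * a / (a @ a)

# Illustrative example: single integrator x' = u, safe set {|x| <= 1}
# encoded by h(x) = 1 - x^2, nominal controller tracking the unsafe
# reference x = 2.  The filter lets x approach the boundary but not cross.
def simulate(x0=0.0, dt=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        u_nom = 2.0 - x                  # drives x toward 2 (unsafe)
        u = cbf_qp_filter(u_nom, Lfh=0.0, Lgh=-2.0 * x, h=1.0 - x**2)[0]
        x += dt * u
    return x
```

The projection step changes u_nom only when the constraint is violated, which is what makes the CBF filter "minimally invasive" with respect to the nominal controller.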
no code implementations • 12 Mar 2024 • Shaoru Chen, Lekan Molu, Mahyar Fazlyab
With a convex formulation of the barrier function synthesis, we propose to first learn an empirically well-behaved NN basis function and then apply a fine-tuning algorithm that exploits the convexity and the counterexamples from verification failures to find a valid barrier function, with finite-step termination guarantees: if a valid barrier function exists, the fine-tuning algorithm is guaranteed to find one in a finite number of iterations.
no code implementations • 13 Mar 2024 • Jiarui Wang, Mahyar Fazlyab
Crucial to our approach is the use of Zubov's Partial Differential Equation (PDE), which precisely characterizes the true region of attraction of a given control policy.
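In one common form, Zubov's equation asks for a function W with W(0) = 0 satisfying grad W(x)' f(x) = -Psi(x) (1 - W(x)) for a positive-definite Psi, and the region of attraction is then exactly {x : W(x) < 1}. A small numerical check of this characterization on an illustrative 1-D system (not from the paper), where an exact solution is available:

```python
import numpy as np

# Zubov's PDE for x' = f(x):   W'(x) * f(x) = -Psi(x) * (1 - W(x)),
# with W(0) = 0; the region of attraction is exactly {x : W(x) < 1}.
# Example: f(x) = -x + x^3 has the origin asymptotically stable with
# region of attraction (-1, 1), and W(x) = x^2 solves the PDE with
# Psi(x) = 2 * x^2 (both sides equal 2*x^4 - 2*x^2).
f   = lambda x: -x + x**3
W   = lambda x: x**2
dW  = lambda x: 2 * x
Psi = lambda x: 2 * x**2

xs = np.linspace(-0.999, 0.999, 1001)
residual = dW(xs) * f(xs) + Psi(xs) * (1 - W(xs))
```

The residual vanishes identically, W stays below 1 on the true region of attraction, and W hits 1 exactly on its boundary, matching the sublevel-set characterization that the learned solution approximates.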
no code implementations • 18 Apr 2024 • Sina Sharifi, Taha Entesari, Bardia Safaei, Vishal M. Patel, Mahyar Fazlyab
In this work, we propose the idea of leveraging the information embedded in the gradient of the loss function during training to enable the network to not only learn a desired OOD score for each sample but also to exhibit similar behavior in a local neighborhood around each sample.