no code implementations • 27 Jan 2023 • Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Ioannis G. Kevrekidis, Mahyar Fazlyab
Neural networks are known to be vulnerable to adversarial attacks, which are small, imperceptible perturbations that can significantly alter the network's output.
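As a minimal illustration of such an attack (the standard fast gradient sign method on a logistic-regression model — an assumption for illustration, not this paper's method), a small signed-gradient perturbation of the input measurably increases the loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(x, y, w, b):
    # binary cross-entropy of the logistic model p = sigmoid(w.x + b)
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    # gradient of the loss w.r.t. the input is (p - y) * w for this model;
    # FGSM takes one signed-gradient step of size eps in the L-infinity ball
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.3])
y = 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)
# the perturbation is small in L-infinity norm yet increases the loss
assert bce_loss(x_adv, y, w, b) > bce_loss(x, y, w, b)
```

For a linear model this one-step attack is in fact the exact maximizer of the loss over the L-infinity ball of radius eps.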
1 code implementation • 14 Dec 2022 • Taha Entesari, Mahyar Fazlyab
Over-approximating the reachable sets of dynamical systems is a fundamental problem in safety verification and robust control synthesis.
1 code implementation • 1 Nov 2022 • Taha Entesari, Sina Sharifi, Mahyar Fazlyab
We propose a novel Branch-and-Bound method for reachability analysis of neural networks in both open-loop and closed-loop settings.
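A common bounding subroutine inside branch-and-bound reachability frameworks is interval bound propagation. The sketch below (an illustrative assumption, not necessarily the bounding procedure this paper uses) propagates an axis-aligned input box through a small ReLU network and checks that sampled outputs stay inside the computed over-approximation:

```python
import numpy as np

def ibp_affine(l, u, W, b):
    # propagate the box [l, u] through y = W x + b using center/radius form
    c, r = (l + u) / 2, (u - l) / 2
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr

def ibp_relu(l, u):
    # ReLU is monotone, so it maps bounds to bounds elementwise
    return np.maximum(l, 0), np.maximum(u, 0)

def ibp_network(l, u, layers):
    # layers: list of (W, b) pairs; ReLU between layers, none after the last
    for i, (W, b) in enumerate(layers):
        l, u = ibp_affine(l, u, W, b)
        if i < len(layers) - 1:
            l, u = ibp_relu(l, u)
    return l, u

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
          (rng.standard_normal((1, 4)), rng.standard_normal(1))]
l0, u0 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, uo = ibp_network(l0, u0, layers)

# soundness check: sampled network outputs lie inside the computed bounds
for _ in range(1000):
    x = rng.uniform(l0, u0)
    h = np.maximum(layers[0][0] @ x + layers[0][1], 0)
    y = layers[1][0] @ h + layers[1][1]
    assert np.all(lo <= y + 1e-9) and np.all(y <= uo + 1e-9)
```

Branch-and-bound methods tighten such bounds by splitting the input box and bounding each piece separately.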
no code implementations • 23 Sep 2022 • Shaoru Chen, Victor M. Preciado, Mahyar Fazlyab
The growing use of neural networks (NNs) in robotic systems has driven the development of safety verification methods for neural network dynamical systems (NNDS).
no code implementations • 18 Jul 2022 • Liangzu Peng, Mahyar Fazlyab, René Vidal
To achieve robustness against outliers in rotation search, prior work considers truncated least-squares (TLS), a non-convex optimization problem, and its semidefinite relaxation (SDR) as a tractable alternative.
no code implementations • 2 Oct 2021 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado
Estimating the region of attraction (ROA) of general nonlinear autonomous systems remains a challenging problem and requires a case-by-case analysis.
1 code implementation • 16 Jun 2021 • Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative.
no code implementations • 29 May 2021 • Youbang Sun, Mahyar Fazlyab, Shahin Shahrampour
Our numerical experiments on strongly convex problems indicate that our framework certifies superior convergence rates compared to the existing rates for distributed GD.
no code implementations • 22 Mar 2021 • Navid Hashemi, Mahyar Fazlyab, Justin Ruths
We exploit recent results on quantifying the robustness of neural networks to input variations in order to construct and tune a model-based anomaly detector, where the data-driven estimator model is an autoregressive neural network.
no code implementations • 22 Dec 2020 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado
By designing the learner and the verifier according to the analytic center cutting-plane method from convex optimization, we show that when the set of Lyapunov functions is full-dimensional in the parameter space, our method finds a Lyapunov function in a finite number of steps.
Optimization and Control
no code implementations • 10 Dec 2020 • Navid Hashemi, Justin Ruths, Mahyar Fazlyab
Abstracting neural networks by the constraints they impose on their inputs and outputs is useful both for analyzing neural network classifiers and for deriving optimization-based algorithms that certify the stability and robustness of feedback systems involving neural networks.
1 code implementation • ICLR 2021 • Priya L. Donti, Melrose Roderick, Mahyar Fazlyab, J. Zico Kolter
When designing controllers for safety-critical systems, practitioners often face a challenging tradeoff between robustness and performance.
no code implementations • L4DC 2020 • Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas
By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to significantly reduce training time.
no code implementations • 16 Apr 2020 • Haimin Hu, Mahyar Fazlyab, Manfred Morari, George J. Pappas
There has been increasing interest in using neural networks in closed-loop control systems to improve performance and reduce the computational cost of online implementation.
1 code implementation • 9 Oct 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas
In this context, we discuss two relevant problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which given a confidence ellipsoid for the input of the neural network, our goal is to compute a confidence ellipsoid for the output.
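The paper handles full networks via semidefinite programming; as a minimal illustration of the underlying ellipsoid-propagation idea (for a single affine layer only, where the image is exact — an illustrative assumption, not the paper's method), an input confidence ellipsoid maps to an output ellipsoid in closed form:

```python
import numpy as np

def propagate_ellipsoid(c, P, W, b):
    # the image of the ellipsoid {x : (x-c)^T P^{-1} (x-c) <= 1}
    # under the affine map y = W x + b is again an ellipsoid,
    # with center W c + b and shape matrix W P W^T
    return W @ c + b, W @ P @ W.T

rng = np.random.default_rng(1)
c = np.array([1.0, -0.5])
L = np.array([[1.0, 0.0], [0.3, 0.7]])
P = L @ L.T                      # shape matrix of the input ellipsoid
W = np.array([[2.0, 0.5], [-1.0, 1.5]])
b = np.array([0.1, 0.2])
c_out, P_out = propagate_ellipsoid(c, P, W, b)

# every point of the input ellipsoid maps inside the output ellipsoid
P_out_inv = np.linalg.inv(P_out)
for _ in range(500):
    u = rng.standard_normal(2)
    u /= np.linalg.norm(u)       # boundary point of the unit ball
    x = c + L @ u                # boundary point of the input ellipsoid
    y = W @ x + b
    d = y - c_out
    assert d @ P_out_inv @ d <= 1.0 + 1e-9
```

For networks with nonlinear activations no such closed form exists, which is what motivates the SDP-based over-approximation.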
1 code implementation • NeurIPS 2019 • Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas
The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation).
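A useful baseline for comparison (not the paper's SDP, which captures cross-layer interactions) is the naive layer-wise product of spectral norms, which is a valid but often loose Lipschitz upper bound for feedforward networks with 1-Lipschitz activations such as ReLU. A sketch:

```python
import numpy as np

def naive_lipschitz_bound(weights):
    # product of per-layer spectral norms: a sound but typically loose
    # Lipschitz upper bound when every activation is 1-Lipschitz
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

def relu_net(x, weights, biases):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(W @ x + b, 0)
    return weights[-1] @ x + biases[-1]

rng = np.random.default_rng(2)
weights = [rng.standard_normal((8, 3)), rng.standard_normal((2, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(2)]
L = naive_lipschitz_bound(weights)

# empirical slopes never exceed the certified bound
for _ in range(500):
    x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
    dy = np.linalg.norm(relu_net(x1, weights, biases)
                        - relu_net(x2, weights, biases))
    dx = np.linalg.norm(x1 - x2)
    assert dy <= L * dx + 1e-9
```

SDP-based estimates such as the one in this paper can be substantially tighter than this product bound, at higher computational cost.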
4 code implementations • 4 Mar 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas
Certifying the safety or robustness of neural networks against input uncertainties and adversarial attacks is an emerging challenge in the area of safe machine learning and control.