Search Results for author: Mahyar Fazlyab

Found 22 papers, 10 papers with code

Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming

4 code implementations • 4 Mar 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas

Certifying the safety or robustness of neural networks against input uncertainties and adversarial attacks is an emerging challenge in the area of safe machine learning and control.

Computational Efficiency

Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks

1 code implementation • NeurIPS 2019 • Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas

The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation).
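
For concreteness, the certificate in the simplest one-hidden-layer case has the following form: for f(x) = W₁φ(W₀x) with activations slope-restricted on [0, 1], minimize ρ over a diagonal multiplier T:

```latex
\min_{\rho,\, T} \ \rho
\quad \text{s.t.} \quad
\begin{bmatrix} -\rho I & W_0^{\top} T \\ T W_0 & -2T + W_1^{\top} W_1 \end{bmatrix} \preceq 0,
\qquad T \succeq 0 \ \text{diagonal}.
```

If the LMI holds, √ρ upper-bounds the ℓ2 Lipschitz constant of f; coupling multipliers across layers tightens the bound, while decomposing the LMI into blocks restores scalability.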

Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming

1 code implementation • 9 Oct 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas

In this context, we discuss two relevant problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which given a confidence ellipsoid for the input of the neural network, our goal is to compute a confidence ellipsoid for the output.
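
As a point of reference for problem (i), the quantity being bounded can be estimated by plain Monte Carlo on a toy network. The sketch below (hypothetical weights and threshold) is a sampling baseline, not the paper's SDP, which upper-bounds the same probability without sampling:

```python
import numpy as np

# Hypothetical one-hidden-layer ReLU network and a half-space safety spec f(x) <= b.
rng = np.random.default_rng(0)
W0, b0 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W1, b1 = rng.standard_normal((1, 8)), rng.standard_normal(1)
f = lambda x: W1 @ np.maximum(W0 @ x + b0, 0.0) + b1

# Gaussian input; empirical estimate of the violation probability P(f(x) > b).
xs = rng.multivariate_normal(np.zeros(2), np.eye(2), size=100_000)
p_hat = np.mean([f(x)[0] > 5.0 for x in xs])
print("empirical violation probability:", p_hat)
```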

Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming

1 code implementation • 16 Apr 2020 • Haimin Hu, Mahyar Fazlyab, Manfred Morari, George J. Pappas

There has been an increasing interest in using neural networks in closed-loop control systems to improve performance and reduce computational costs for on-line implementation.

Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees

no code implementations • L4DC 2020 • Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas

By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly.

Robust Classification
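
The min-max problem referred to is the standard adversarial training objective; the sketch below shows its generic PGD inner maximization in PyTorch, the baseline whose training cost the optimal-control view improves on (not the paper's reformulation itself):

```python
import torch
import torch.nn.functional as F

# Generic PGD inner maximization for min-max robust training (a baseline sketch).
def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)             # stay in the l_inf ball
        delta.grad.zero_()
    return (x + delta).detach()
```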

Enforcing robust control guarantees within neural network policies

1 code implementation • ICLR 2021 • Priya L. Donti, Melrose Roderick, Mahyar Fazlyab, J. Zico Kolter

When designing controllers for safety-critical systems, practitioners often face a challenging tradeoff between robustness and performance.

Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization

no code implementations • 10 Dec 2020 • Navid Hashemi, Justin Ruths, Mahyar Fazlyab

Abstracting neural networks by the constraints they impose on their inputs and outputs is useful both for analyzing neural network classifiers and for deriving optimization-based algorithms that certify the stability and robustness of feedback systems involving neural networks.
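
As an example of such a constraint, an activation φ that is incrementally slope-restricted on [α, β] satisfies, for all input pairs x, x̂:

```latex
\big( \phi(x) - \phi(\hat{x}) - \alpha (x - \hat{x}) \big)^{\top}
\big( \beta (x - \hat{x}) - \phi(x) + \phi(\hat{x}) \big) \ \ge\ 0 .
```

This quadratic inequality relates differences of inputs to differences of outputs, which is what makes it "incremental" and suitable for analyzing feedback interconnections.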

Learning Lyapunov Functions for Hybrid Systems

no code implementations • 22 Dec 2020 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado

By designing the learner and the verifier according to the analytic center cutting-plane method from convex optimization, we show that when the set of Lyapunov functions is full-dimensional in the parameter space, our method finds a Lyapunov function in a finite number of steps.

Optimization and Control
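
A minimal learner-verifier sketch of this idea, for a quadratic Lyapunov candidate on a linear system, is below. The verifier is a sampling stand-in and plain feasibility replaces the analytic-center step, so this illustrates the loop rather than the paper's method:

```python
import numpy as np
import cvxpy as cp

# Hypothetical stable system x_{k+1} = A x_k; candidate V(x) = x^T P x.
A = np.array([[0.5, 1.0], [0.0, 0.5]])
rng, cuts = np.random.default_rng(0), []

for it in range(20):
    # Learner: propose any P >= I consistent with all counterexample cuts so far.
    P = cp.Variable((2, 2), symmetric=True)
    cons = [P >> np.eye(2)]
    cons += [cp.quad_form(A @ x, P) <= cp.quad_form(x, P) - 1e-3 for x in cuts]
    cp.Problem(cp.Minimize(cp.trace(P)), cons).solve()

    # Verifier (sampling stand-in): search for x with V(Ax) >= V(x).
    M = A.T @ P.value @ A - P.value
    xs = rng.standard_normal((5000, 2))
    g = np.einsum("ni,ij,nj->n", xs, M, xs)
    if g.max() < 0:
        print(f"Lyapunov function found after {it} cuts:\n{P.value}")
        break
    cuts.append(xs[np.argmax(g)])  # add the counterexample as a new cut
```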

Performance Bounds for Neural Network Estimators: Applications in Fault Detection

no code implementations • 22 Mar 2021 • Navid Hashemi, Mahyar Fazlyab, Justin Ruths

We exploit recent results in quantifying the robustness of neural networks to input variations to construct and tune a model-based anomaly detector, where the data-driven estimator model is provided by an autoregressive neural network.

Fault Detection

On Centralized and Distributed Mirror Descent: Convergence Analysis Using Quadratic Constraints

no code implementations • 29 May 2021 • Youbang Sun, Mahyar Fazlyab, Shahin Shahrampour

Our numerical experiments on strongly convex problems indicate that our framework certifies superior convergence rates compared to the existing rates for distributed GD.

DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting

1 code implementation • 16 Jun 2021 • Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab

Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative.

Image Classification
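
The cheapest such relaxation is interval bound propagation; the sketch below (hypothetical weights) illustrates the kind of sound per-layer bounds that operator-splitting verifiers like DeepSplit tighten at scale:

```python
import numpy as np

# Interval bound propagation: sound output bounds for a ReLU network under an
# l_inf input perturbation. A cheap convex relaxation, shown only for intuition.
def ibp_bounds(weights, biases, x, eps):
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        mid, rad = W @ mid + b, np.abs(W) @ rad
        lo, hi = mid - rad, mid + rad
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 2)), rng.standard_normal((1, 8))]
bs = [rng.standard_normal(8), rng.standard_normal(1)]
print(ibp_bounds(Ws, bs, np.zeros(2), eps=0.1))
```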

Learning Region of Attraction for Nonlinear Systems

no code implementations • 2 Oct 2021 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado

Estimating the region of attraction (ROA) of general nonlinear autonomous systems remains a challenging problem and requires a case-by-case analysis.

Towards Understanding The Semidefinite Relaxations of Truncated Least-Squares in Robust Rotation Search

no code implementations • 18 Jul 2022 • Liangzu Peng, Mahyar Fazlyab, René Vidal

To induce robustness against outliers for rotation search, prior work considers truncated least-squares (TLS), which is a non-convex optimization problem, and its semidefinite relaxation (SDR) as a tractable alternative.
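
Concretely, the TLS rotation-search objective being relaxed is

```latex
\min_{R \in \mathrm{SO}(3)} \ \sum_{i=1}^{n} \min\!\big( \| y_i - R x_i \|^2 ,\ \bar{c}^{\,2} \big),
```

where c̄ is the truncation threshold: residuals larger than c̄ contribute only a constant cost, so outliers cannot dominate the fit, at the price of the non-convexity that the SDR then addresses.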

One-Shot Reachability Analysis of Neural Network Dynamical Systems

no code implementations • 23 Sep 2022 • Shaoru Chen, Victor M. Preciado, Mahyar Fazlyab

The growing application of neural networks (NN) in robotic systems has driven the development of safety verification methods for neural network dynamical systems (NNDS).

ReachLipBnB: A branch-and-bound method for reachability analysis of neural autonomous systems using Lipschitz bounds

1 code implementation • 1 Nov 2022 • Taha Entesari, Sina Sharifi, Mahyar Fazlyab

We propose a novel Branch-and-Bound method for reachability analysis of neural networks in both open-loop and closed-loop settings.
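
A one-dimensional sketch of the bounding principle (hypothetical f and L): a Lipschitz constant turns a single function evaluation into a valid lower bound over an entire interval, which is what drives the pruning in a branch-and-bound loop.

```python
import heapq
import numpy as np

# Branch-and-bound minimization of a scalar function with known Lipschitz constant L:
# on [a, b], f(midpoint) - L * (b - a) / 2 lower-bounds f over the whole interval.
def lipschitz_bnb_min(f, lo, hi, L, tol=1e-3):
    best = f((lo + hi) / 2.0)                      # incumbent upper bound
    heap = [(best - L * (hi - lo) / 2.0, lo, hi)]  # (lower bound, interval)
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best - tol:                        # cannot improve: prune
            continue
        for aa, bb in ((a, (a + b) / 2.0), ((a + b) / 2.0, b)):
            c = (aa + bb) / 2.0
            fc = f(c)
            best = min(best, fc)
            heapq.heappush(heap, (fc - L * (bb - aa) / 2.0, aa, bb))
    return best

print(lipschitz_bnb_min(lambda x: np.sin(3.0 * x) + 0.5 * x, -2.0, 2.0, L=3.5))
```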

Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes

1 code implementation • 14 Dec 2022 • Taha Entesari, Mahyar Fazlyab

Over-approximating the reachable sets of dynamical systems is a fundamental problem in safety verification and robust control synthesis.

Certified Invertibility in Neural Networks via Mixed-Integer Programming

no code implementations • 27 Jan 2023 • Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Ioannis G. Kevrekidis, Mahyar Fazlyab

Neural networks are known to be vulnerable to adversarial attacks, which are small, imperceptible perturbations that can significantly alter the network's output.

Network Pruning

Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization

1 code implementation • NeurIPS 2023 • Mahyar Fazlyab, Taha Entesari, Aniket Roy, Rama Chellappa

As a result, there has been an increasing interest in developing training procedures that can directly manipulate the decision boundary in the input space.

Learning Performance-Oriented Control Barrier Functions Under Complex Safety Constraints and Limited Actuation

1 code implementation • 11 Jan 2024 • Shaoru Chen, Mahyar Fazlyab

Control Barrier Functions (CBFs) provide an elegant framework for designing safety filters for nonlinear control systems by constraining their trajectories to an invariant subset of a prespecified safe set.

Self-Supervised Learning
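
The invariance condition behind a CBF h for a control-affine system ẋ = f(x) + g(x)u, with α an extended class-K function, is

```latex
\sup_{u \in \mathcal{U}} \ \nabla h(x)^{\top} \big( f(x) + g(x)\, u \big) \ \ge\ -\alpha\big( h(x) \big),
```

and the limited-actuation setting in the title is precisely the case where the supremum over a bounded input set U makes this condition hard to satisfy everywhere on a prespecified safe set.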

Verification-Aided Learning of Neural Network Barrier Functions with Termination Guarantees

no code implementations • 12 Mar 2024 • Shaoru Chen, Lekan Molu, Mahyar Fazlyab

With a convex formulation of the barrier function synthesis, we propose to first learn an empirically well-behaved NN basis function and then apply a fine-tuning algorithm that exploits the convexity and counterexamples from the verification failure to find a valid barrier function with finite-step termination guarantees: if there exist valid barrier functions, the fine-tuning algorithm is guaranteed to find one in a finite number of iterations.

Self-Supervised Learning

Actor-Critic Physics-informed Neural Lyapunov Control

no code implementations • 13 Mar 2024 • Jiarui Wang, Mahyar Fazlyab

Crucial to our approach is the use of Zubov's Partial Differential Equation (PDE), which precisely characterizes the true region of attraction of a given control policy.
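
In one common simplified form (closed-loop dynamics ẋ = f(x), h a positive definite function), Zubov's PDE reads

```latex
\nabla W(x)^{\top} f(x) \ =\ -\, h(x)\, \big( 1 - W(x) \big), \qquad W(0) = 0,
```

and its solution satisfies 0 ≤ W(x) < 1 exactly on the region of attraction, with W(x) → 1 at its boundary, which is why solving this PDE characterizes the ROA exactly rather than conservatively.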

Gradient-Regularized Out-of-Distribution Detection

no code implementations • 18 Apr 2024 • Sina Sharifi, Taha Entesari, Bardia Safaei, Vishal M. Patel, Mahyar Fazlyab

In this work, we propose the idea of leveraging the information embedded in the gradient of the loss function during training to enable the network to not only learn a desired OOD score for each sample but also to exhibit similar behavior in a local neighborhood around each sample.
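
A hedged sketch of the idea in PyTorch (the energy score, penalty, and weighting below are assumptions for illustration, not the paper's exact objective): penalize the input-gradient of the OOD score so the score behaves consistently in a local neighborhood of each sample.

```python
import torch
import torch.nn.functional as F

# Hypothetical gradient-regularized training loss: cross-entropy plus a penalty
# on the input-gradient of an energy-style OOD score.
def grad_regularized_loss(model, x, y, lam=0.1):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    score = -torch.logsumexp(logits, dim=1).sum()     # energy-based OOD score
    (g,) = torch.autograd.grad(score, x, create_graph=True)
    return ce + lam * g.pow(2).mean()                 # flatten the local score landscape
```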
