Search Results for author: Mahyar Fazlyab

Found 29 papers, 13 papers with code

Constrained Entropic Unlearning: A Primal-Dual Framework for Large Language Models

no code implementations • 5 Jun 2025 • Taha Entesari, Arman Hatami, Rinat Khaziev, Anil Ramakrishna, Mahyar Fazlyab

We propose a new formulation of LLM unlearning as a constrained optimization problem: forgetting is enforced via a novel logit-margin flattening loss that explicitly drives the output distribution toward uniformity on a designated forget set, while retention is preserved through a hard constraint on a separate retain set.
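
As a rough illustration of the flattening idea (a minimal sketch, not the paper's exact loss): a softmax output is uniform exactly when all logits are equal, so penalizing each logit's deviation from the per-example mean drives the output distribution toward uniformity on the forget set.

```python
import torch

def flattening_loss(logits: torch.Tensor) -> torch.Tensor:
    # Zero exactly when all logits in each row are equal, i.e. when
    # the softmax output is uniform; an illustrative surrogate only.
    centered = logits - logits.mean(dim=-1, keepdim=True)
    return centered.pow(2).mean()

# Hypothetical usage on a batch of forget-set logits (batch, vocab):
forget_logits = torch.randn(8, 32000, requires_grad=True)
flattening_loss(forget_logits).backward()
```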

Sequential QCQP for Bilevel Optimization with Line Search

no code implementations • 20 May 2025 • Sina Sharifi, Erfan Yazdandoost Hamedani, Mahyar Fazlyab

Bilevel optimization involves a hierarchical structure where one problem is nested within another, leading to complex interdependencies between levels.
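
For reference, the standard form of the bilevel problem is

```latex
\min_{x \in X} \; F\bigl(x,\, y^{*}(x)\bigr)
\quad \text{s.t.} \quad
y^{*}(x) \in \operatorname*{arg\,min}_{y \in Y} \; G(x, y),
```

where $F$ and $G$ are the upper- and lower-level objectives; the dependence of the upper level on the lower-level solution $y^{*}(x)$ is the source of the interdependence noted above.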

Bilevel Optimization

Safe Physics-Informed Machine Learning for Dynamics and Control

no code implementations • 17 Apr 2025 • Jan Drgona, Truong X. Nghiem, Thomas Beckers, Mahyar Fazlyab, Enrique Mallada, Colin Jones, Draguna Vrabie, Steven L. Brunton, Rolf Findeisen

This tutorial paper focuses on safe physics-informed machine learning in the context of dynamics and control, providing a comprehensive overview of how to integrate physical models and safety guarantees.

Autonomous Vehicles • Decision Making • +2

Safe Gradient Flow for Bilevel Optimization

1 code implementation • 27 Jan 2025 • Sina Sharifi, Nazanin Abolfazli, Erfan Yazdandoost Hamedani, Mahyar Fazlyab

Bilevel optimization is a key framework in hierarchical decision-making, where one problem is embedded within the constraints of another.

Bilevel Optimization • Decision Making

Domain Adaptive Safety Filters via Deep Operator Learning

no code implementations • 18 Oct 2024 • Lakshmideepakreddy Manda, Shaoru Chen, Mahyar Fazlyab

Learning-based approaches for constructing Control Barrier Functions (CBFs) are increasingly being explored for safety-critical control systems.
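
For context, one standard formulation (not necessarily the paper's exact definition): $h$ is a CBF for the control-affine system $\dot{x} = f(x) + g(x)u$ with safe set $\{x : h(x) \ge 0\}$ if

```latex
\sup_{u \in U} \; \nabla h(x)^{\top} \bigl( f(x) + g(x)\,u \bigr) \;\ge\; -\alpha\bigl(h(x)\bigr) \qquad \forall x,
```

for some extended class-$\mathcal{K}$ function $\alpha$; learning-based approaches train a neural network $h$ to satisfy this condition.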

Operator Learning

Compositional Curvature Bounds for Deep Neural Networks

no code implementations • 7 Jun 2024 • Taha Entesari, Sina Sharifi, Mahyar Fazlyab

A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks.

Provable Bounds on the Hessian of Neural Networks: Derivative-Preserving Reachability Analysis

no code implementations • 6 Jun 2024 • Sina Sharifi, Mahyar Fazlyab

The resulting end-to-end abstraction locally preserves the derivative information, yielding accurate bounds on small input sets.
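
The role of derivative preservation can be seen from a second-order Taylor model: if the Hessian of a scalar-valued $f$ satisfies $H_L \preceq \nabla^2 f \preceq H_U$ over the input set (a standard bound, stated here for illustration), then

```latex
f(x_0) + \nabla f(x_0)^{\top}\delta + \tfrac{1}{2}\,\delta^{\top} H_L\,\delta
\;\le\; f(x_0 + \delta) \;\le\;
f(x_0) + \nabla f(x_0)^{\top}\delta + \tfrac{1}{2}\,\delta^{\top} H_U\,\delta,
```

so the gap between the two quadratic models shrinks quadratically in $\|\delta\|$, which is why the bounds are tight on small input sets.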

Gradient-Regularized Out-of-Distribution Detection

1 code implementation • 18 Apr 2024 • Sina Sharifi, Taha Entesari, Bardia Safaei, Vishal M. Patel, Mahyar Fazlyab

In this work, we propose leveraging the information embedded in the gradient of the loss function during training, so that the network not only learns a desired OOD score for each sample but also exhibits similar behavior in a local neighborhood around each sample.
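
A generic sketch of this idea (the score and the weighting below are assumptions for illustration, not the paper's exact objective): penalize the input-gradient norm of an OOD score so that the score varies slowly around each training sample.

```python
import torch

def gradient_regularized_loss(model, x, task_loss, lam=0.1):
    # Illustrative energy-style OOD score; the actual score and the
    # regularization weight `lam` are assumptions for this sketch.
    x = x.clone().requires_grad_(True)
    score = model(x).logsumexp(dim=-1).sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    # Penalizing the gradient norm encourages similar scores in a
    # local neighborhood around each sample.
    smoothness = grad.pow(2).flatten(1).sum(dim=1).mean()
    return task_loss + lam * smoothness
```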

Out-of-Distribution (OOD) Detection

Actor-Critic Physics-informed Neural Lyapunov Control

no code implementations • 13 Mar 2024 • Jiarui Wang, Mahyar Fazlyab

Crucial to our approach is the use of Zubov's Partial Differential Equation (PDE), which precisely characterizes the true region of attraction of a given control policy.
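
In one common form, Zubov's PDE asks for a $W$ with $W(0) = 0$ and $0 \le W < 1$ on the region of attraction, satisfying

```latex
\nabla W(x)^{\top} f(x) \;=\; -\,h(x)\,\bigl(1 - W(x)\bigr)
```

for a positive-definite $h$; the region of attraction is then exactly the sublevel set $\{x : W(x) < 1\}$.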

Verification-Aided Learning of Neural Network Barrier Functions with Termination Guarantees

1 code implementation • 12 Mar 2024 • Shaoru Chen, Lekan Molu, Mahyar Fazlyab

Starting from a convex formulation of barrier function synthesis, we propose to first learn an empirically well-behaved NN basis function and then apply a fine-tuning algorithm that exploits the convexity and the counterexamples returned by failed verification. The fine-tuning step carries a finite-step termination guarantee: if a valid barrier function exists, the algorithm is guaranteed to find one in a finite number of iterations.
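
A generic learner-verifier loop of this kind, with hypothetical train/verify helpers, looks like the following sketch:

```python
def synthesize_barrier(train_step, verify, dataset, max_iters=100):
    """Illustrative counterexample-guided loop, not the paper's exact
    fine-tuning algorithm. `verify` returns None on success, or a
    counterexample state where the barrier conditions fail."""
    for _ in range(max_iters):
        params = train_step(dataset)       # fit / fine-tune the NN barrier
        counterexample = verify(params)    # exact verification call
        if counterexample is None:
            return params                  # valid barrier found
        dataset.append(counterexample)     # refine the training data
    raise RuntimeError("iteration budget exhausted without a valid barrier")
```

The paper's result is that, under the convex formulation, the fine-tuning variant of this loop provably terminates whenever a valid barrier function exists.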

Self-Supervised Learning

Certified Robustness via Dynamic Margin Maximization and Improved Lipschitz Regularization

1 code implementation • NeurIPS 2023 • Mahyar Fazlyab, Taha Entesari, Aniket Roy, Rama Chellappa

As a result, there has been an increasing interest in developing training procedures that can directly manipulate the decision boundary in the input space.
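
The link between margins and certified robustness is the standard Lipschitz bound: if the network $f$ is $L$-Lipschitz in the $\ell_2$ norm and class $i$ is predicted at $x$, the prediction provably cannot change within the radius

```latex
r(x) \;=\; \frac{f_i(x) - \max_{j \neq i} f_j(x)}{\sqrt{2}\, L},
```

so enlarging input-space margins while regularizing $L$ directly enlarges the certificate.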

Certified Invertibility in Neural Networks via Mixed-Integer Programming

no code implementations • 27 Jan 2023 • Tianqi Cui, Thomas Bertalan, George J. Pappas, Manfred Morari, Ioannis G. Kevrekidis, Mahyar Fazlyab

Neural networks are known to be vulnerable to adversarial attacks, which are small, imperceptible perturbations that can significantly alter the network's output.

Network Pruning

Automated Reachability Analysis of Neural Network-Controlled Systems via Adaptive Polytopes

1 code implementation • 14 Dec 2022 • Taha Entesari, Mahyar Fazlyab

Over-approximating the reachable sets of dynamical systems is a fundamental problem in safety verification and robust control synthesis.
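
Concretely, for a discrete-time system $x_{k+1} = f(x_k)$, over-approximation means computing template sets (here, adaptive polytopes) $\hat{X}_k$ such that

```latex
\hat{X}_0 \supseteq X_0,
\qquad
\hat{X}_{k+1} \;\supseteq\; \bigl\{\, f(x) : x \in \hat{X}_k \,\bigr\},
```

so that any safety property verified on the $\hat{X}_k$ also holds for the true reachable sets.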

ReachLipBnB: A branch-and-bound method for reachability analysis of neural autonomous systems using Lipschitz bounds

1 code implementation • 1 Nov 2022 • Taha Entesari, Sina Sharifi, Mahyar Fazlyab

We propose a novel Branch-and-Bound method for reachability analysis of neural networks in both open-loop and closed-loop settings.
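
As a minimal sketch of the underlying idea (illustrative, one-dimensional, and not the paper's algorithm): on an interval with center $c$ and radius $r$, an $L$-Lipschitz $f$ satisfies $f(x) \ge f(c) - Lr$, and branch-and-bound splits intervals until this certified bound meets the best observed value.

```python
import heapq

def certified_min_lower_bound(f, L, lo, hi, tol=1e-3):
    """Certified lower bound on min f over [lo, hi] for L-Lipschitz f."""
    def bound(a, b):
        c, r = 0.5 * (a + b), 0.5 * (b - a)
        return f(c) - L * r, f(c)           # (lower bound, sample value)
    lb0, ub0 = bound(lo, hi)
    heap, best_ub = [(lb0, lo, hi)], ub0
    while heap:
        lb, a, b = heapq.heappop(heap)      # smallest lower bound first
        if best_ub - lb <= tol:
            return lb                       # bound gap closed: certified
        m = 0.5 * (a + b)
        for s, t in ((a, m), (m, b)):       # branch: split the interval
            l, u = bound(s, t)
            best_ub = min(best_ub, u)
            heapq.heappush(heap, (l, s, t))

# Example: min of x**2 on [-1, 1] is 0; the certified bound approaches it.
print(certified_min_lower_bound(lambda x: x * x, L=2.0, lo=-1.0, hi=1.0))
```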

One-Shot Reachability Analysis of Neural Network Dynamical Systems

no code implementations • 23 Sep 2022 • Shaoru Chen, Victor M. Preciado, Mahyar Fazlyab

The growing application of neural networks (NN) in robotic systems has driven the development of safety verification methods for neural network dynamical systems (NNDS).

Towards Understanding The Semidefinite Relaxations of Truncated Least-Squares in Robust Rotation Search

no code implementations • 18 Jul 2022 • Liangzu Peng, Mahyar Fazlyab, René Vidal

To induce robustness against outliers for rotation search, prior work considers truncated least-squares (TLS), which is a non-convex optimization problem, and its semidefinite relaxation (SDR) as a tractable alternative.
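
For reference, TLS caps each residual at a truncation level $\bar{c}$, removing the influence of gross outliers at the cost of non-convexity:

```latex
\min_{R \in \mathrm{SO}(3)} \; \sum_{i=1}^{n} \min\!\bigl( \|\, q_i - R\, p_i \,\|_2^2,\; \bar{c}^{\,2} \bigr),
```

where $(p_i, q_i)$ are putative point correspondences; the SDR replaces this non-convex problem with a semidefinite program whose tightness the paper analyzes.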

Learning Region of Attraction for Nonlinear Systems

no code implementations • 2 Oct 2021 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado

Estimating the region of attraction (ROA) of general nonlinear autonomous systems remains a challenging problem and requires a case-by-case analysis.

DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting

1 code implementation • 16 Jun 2021 • Shaoru Chen, Eric Wong, J. Zico Kolter, Mahyar Fazlyab

Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative.
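
Operator-splitting approaches of this kind typically put the relaxed problem in consensus form and run ADMM; a generic template (not DeepSplit's exact updates) is

```latex
\min_{x, z} \; f(x) + g(z) \;\; \text{s.t.} \;\; x = z, \qquad
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x} \; f(x) + \tfrac{\rho}{2}\|x - z^{k} + u^{k}\|_2^2, \\
z^{k+1} &= \operatorname*{arg\,min}_{z} \; g(z) + \tfrac{\rho}{2}\|x^{k+1} - z + u^{k}\|_2^2, \\
u^{k+1} &= u^{k} + x^{k+1} - z^{k+1},
\end{aligned}
```

where the subproblems decompose into cheap layer-wise operations for neural-network constraints.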

Image Classification

On Centralized and Distributed Mirror Descent: Convergence Analysis Using Quadratic Constraints

no code implementations • 29 May 2021 • Youbang Sun, Mahyar Fazlyab, Shahin Shahrampour

Our numerical experiments on strongly convex problems indicate that our framework certifies superior convergence rates compared to the existing rates for distributed GD.
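
Recall the mirror descent update being analyzed: with a Bregman divergence $D_\psi$ induced by a mirror map $\psi$,

```latex
x_{k+1} \;=\; \operatorname*{arg\,min}_{x \in X} \; \eta\,\langle \nabla f(x_k),\, x \rangle \;+\; D_{\psi}(x, x_k),
```

which reduces to projected gradient descent when $\psi(x) = \tfrac{1}{2}\|x\|_2^2$.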

Performance Bounds for Neural Network Estimators: Applications in Fault Detection

no code implementations • 22 Mar 2021 • Navid Hashemi, Mahyar Fazlyab, Justin Ruths

We exploit recent results in quantifying the robustness of neural networks to input variations to construct and tune a model-based anomaly detector, where the data-driven estimator model is provided by an autoregressive neural network.

Fault Detection

Learning Lyapunov Functions for Hybrid Systems

no code implementations • 22 Dec 2020 • Shaoru Chen, Mahyar Fazlyab, Manfred Morari, George J. Pappas, Victor M. Preciado

By designing the learner and the verifier according to the analytic center cutting-plane method from convex optimization, we show that when the set of Lyapunov functions is full-dimensional in the parameter space, our method finds a Lyapunov function in a finite number of steps.
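
In the analytic center cutting-plane method, each candidate parameter is the analytic center of the current localization polytope $\{\theta : A\theta \le b\}$,

```latex
\theta_{k+1} \;=\; \operatorname*{arg\,max}_{\theta} \; \sum_{i} \log\bigl( b_i - a_i^{\top} \theta \bigr),
```

and each counterexample from the verifier adds a cut that shrinks the polytope; full-dimensionality of the set of valid Lyapunov functions is what guarantees that finitely many cuts suffice.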

Optimization and Control

Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization

no code implementations • 10 Dec 2020 • Navid Hashemi, Justin Ruths, Mahyar Fazlyab

Abstracting neural networks by the constraints they impose on their inputs and outputs is useful both for analyzing neural network classifiers and for deriving optimization-based algorithms that certify the stability and robustness of feedback systems involving neural networks.
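
For reference, a map $\varphi$ satisfies the incremental quadratic constraint defined by a matrix $Q$ if, for all inputs $x, y$,

```latex
\begin{bmatrix} x - y \\ \varphi(x) - \varphi(y) \end{bmatrix}^{\!\top}
Q\,
\begin{bmatrix} x - y \\ \varphi(x) - \varphi(y) \end{bmatrix}
\;\ge\; 0,
```

and the paper certifies valid $Q$ for neural networks via convex optimization.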

Enforcing robust control guarantees within neural network policies

1 code implementation • ICLR 2021 • Priya L. Donti, Melrose Roderick, Mahyar Fazlyab, J. Zico Kolter

When designing controllers for safety-critical systems, practitioners often face a challenging tradeoff between robustness and performance.

Robust Deep Learning as Optimal Control: Insights and Convergence Guarantees

no code implementations • L4DC 2020 • Jacob H. Seidman, Mahyar Fazlyab, Victor M. Preciado, George J. Pappas

By interpreting the min-max problem as an optimal control problem, it has recently been shown that one can exploit the compositional structure of neural networks in the optimization problem to improve the training time significantly.
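
The min-max problem referred to is standard adversarial training,

```latex
\min_{\theta} \; \mathbb{E}_{(x, y)} \Bigl[ \max_{\|\delta\| \le \epsilon} \; \ell\bigl( f_{\theta}(x + \delta),\, y \bigr) \Bigr],
```

with the layer-by-layer propagation through $f_\theta$ playing the role of the controlled dynamics.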

Deep Learning • Robust Classification

Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming

1 code implementation • 16 Apr 2020 • Haimin Hu, Mahyar Fazlyab, Manfred Morari, George J. Pappas

There has been an increasing interest in using neural networks in closed-loop control systems to improve performance and reduce the computational cost of online implementation.

Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming

1 code implementation • 9 Oct 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas

In this context, we discuss two relevant problems: (i) probabilistic safety verification, in which the goal is to find an upper bound on the probability of violating a safety specification; and (ii) confidence ellipsoid estimation, in which given a confidence ellipsoid for the input of the neural network, our goal is to compute a confidence ellipsoid for the output.
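
Here a confidence ellipsoid is a set of the form

```latex
\mathcal{E}(\mu, \Sigma) \;=\; \bigl\{\, x \;:\; (x - \mu)^{\top} \Sigma^{-1} (x - \mu) \le 1 \,\bigr\},
```

and problem (ii) asks for an output ellipsoid that contains the network's output with at least the confidence attached to the input ellipsoid.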

Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks

1 code implementation • NeurIPS 2019 • Mahyar Fazlyab, Alexander Robey, Hamed Hassani, Manfred Morari, George J. Pappas

The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation).
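
The quantity being estimated is the Lipschitz constant

```latex
L \;=\; \sup_{x \neq y} \; \frac{\| f(x) - f(y) \|_2}{\| x - y \|_2},
```

an upper bound on which immediately yields robustness certificates; the paper's SDP computes such an upper bound by abstracting the activation functions with quadratic constraints.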

Reinforcement Learning

Safety Verification and Robustness Analysis of Neural Networks via Quadratic Constraints and Semidefinite Programming

4 code implementations • 4 Mar 2019 • Mahyar Fazlyab, Manfred Morari, George J. Pappas

Certifying the safety or robustness of neural networks against input uncertainties and adversarial attacks is an emerging challenge in the area of safe machine learning and control.

Computational Efficiency
