no code implementations • 23 Mar 2024 • Navid Hashemi, Bardh Hoxha, Danil Prokhorov, Georgios Fainekos, Jyotirmoy Deshmukh
We show how this learning problem is similar to training recurrent neural networks (RNNs), where the number of recurrent units is proportional to the temporal horizon of the agent's task objectives.
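The RNN analogy above can be sketched in a few lines: unrolling a closed loop for a horizon-T task looks like an RNN with T recurrent steps, where the shared controller weights play the role of the recurrent cell and the system state is the hidden state. The dynamics and policy below are toy placeholders, not the paper's models.

```python
import numpy as np

def rollout(x0, policy_w, T=20):
    """Unroll a closed loop for T steps, RNN-style: the same policy
    weights are reused at every step, and the state is the hidden state."""
    xs = [x0]
    x = x0
    for _ in range(T):
        u = np.tanh(policy_w @ x)   # shared "cell" weights, reused each step
        x = 0.9 * x + 0.1 * u       # toy stable linear dynamics (assumption)
        xs.append(x)
    return np.stack(xs)

traj = rollout(np.array([1.0, -1.0]), np.zeros((2, 2)))
print(traj.shape)  # (21, 2): one row per recurrent step plus the initial state
```

Gradients of a task objective then flow back through all T steps, exactly as in backpropagation through time, which is why the number of recurrent units scales with the temporal horizon.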
no code implementations • 17 Sep 2023 • Navid Hashemi, Xin Qin, Lars Lindemann, Jyotirmoy V. Deshmukh
We consider data-driven reachability analysis of discrete-time stochastic dynamical systems using conformal inference.
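A minimal sketch of the split-conformal idea for reachability, using a toy stochastic linear system (the matrix, noise scale, and horizon are placeholders, not from the paper): sample calibration trajectories, score each by its distance from a nominal prediction at the horizon, and take a finite-sample quantile to get a probabilistic reachable ball.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])

def step(x, w):
    # hypothetical stochastic discrete-time dynamics: x_{k+1} = A x_k + w_k
    return A @ x + w

# nominal (noise-free) prediction of the state at horizon K
x0, K = np.array([1.0, 0.0]), 10
center = np.linalg.matrix_power(A, K) @ x0

# calibration: nonconformity score = distance of a sampled x_K from the
# nominal prediction, over N i.i.d. trajectories
N, scores = 1000, []
for _ in range(N):
    x = x0.copy()
    for _ in range(K):
        x = step(x, 0.05 * rng.standard_normal(2))
    scores.append(np.linalg.norm(x - center))

# split-conformal quantile: with probability >= 1 - alpha, a fresh
# trajectory's x_K lies in the ball of radius r around the prediction
alpha = 0.1
r = np.sort(scores)[int(np.ceil((N + 1) * (1 - alpha))) - 1]
print(f"90% conformal reachable radius: {r:.3f}")
```

The appeal of the conformal route is that the coverage guarantee is distribution-free and finite-sample; it holds without knowing the disturbance distribution.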
no code implementations • 12 Aug 2023 • Xin Qin, Navid Hashemi, Lars Lindemann, Jyotirmoy V. Deshmukh
Ultimately, conformance can capture the distance between design models and their real implementations and thus aid in robust system design.
no code implementations • 5 Apr 2023 • Navid Hashemi, Justin Ruths, Jyotirmoy V. Deshmukh
The problem addressed by this paper is the following: given an optimal trajectory obtained by solving a control problem in the training environment, how do we ensure that the real-world system trajectory tracks this optimal trajectory with minimal error in a deployment environment?
no code implementations • 7 Mar 2023 • Navid Hashemi, Bardh Hoxha, Tomoya Yamaguchi, Danil Prokhorov, Georgios Fainekos, Jyotirmoy Deshmukh
In this paper, we present a framework for verifying Neural Network (NN) controllers against general STL specifications, using a custom neural architecture that maps an STL formula into a feed-forward neural network with ReLU activations.
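The core trick behind such STL-to-ReLU encodings can be illustrated with the exact identities max(a, b) = a + relu(b − a) and min(a, b) = a − relu(a − b), which let the min/max semantics of temporal operators be expressed as ReLU layers. The example below is a hand-rolled sketch of this idea for an "always" formula, not the paper's architecture.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# exact ReLU encodings of max and min:
#   max(a, b) = a + relu(b - a),  min(a, b) = a - relu(a - b)
def relu_max(a, b):
    return a + relu(b - a)

def relu_min(a, b):
    return a - relu(a - b)

def always_lt(xs, c):
    """Robustness of "always (x < c)" over a finite trace: the minimum of
    (c - x_t) over time, computed purely with ReLU units."""
    rho = c - xs[0]
    for x in xs[1:]:
        rho = relu_min(rho, c - x)
    return rho

xs = np.array([0.2, 0.5, 0.1])
print(always_lt(xs, 1.0))  # min(0.8, 0.5, 0.9) = 0.5
```

Because the resulting robustness computation is itself a ReLU network, off-the-shelf NN verification tools can then be applied to the composition of controller, dynamics, and specification.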
no code implementations • 14 Oct 2022 • Navid Hashemi, Xin Qin, Jyotirmoy V. Deshmukh, Georgios Fainekos, Bardh Hoxha, Danil Prokhorov, Tomoya Yamaguchi
In this paper, we consider the problem of synthesizing a controller in the presence of uncertainty such that the resulting closed-loop system satisfies certain hard constraints while optimizing certain (soft) performance objectives.
no code implementations • 22 Mar 2021 • Navid Hashemi, Mahyar Fazlyab, Justin Ruths
We exploit recent results in quantifying the robustness of neural networks to input variations to construct and tune a model-based anomaly detector, where the data-driven estimator model is provided by an autoregressive neural network.
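A hedged sketch of the residual-detector idea: predict the next measurement with an autoregressive network, raise an alarm when the residual exceeds a threshold, and inflate that threshold by a robustness margin derived from a Lipschitz bound on the network. The weights, noise bound, and perturbation radius below are placeholders, and the crude norm-product Lipschitz bound stands in for the tighter certificates the paper builds on.

```python
import numpy as np

rng = np.random.default_rng(1)

# a "trained" one-hidden-layer autoregressive predictor (placeholder weights)
W1 = rng.standard_normal((8, 2)) * 0.3
W2 = rng.standard_normal((1, 8)) * 0.3

def predict(window):
    # predicted next sample from the last two samples (hypothetical AR order 2)
    return (W2 @ np.maximum(W1 @ window, 0.0))[0]

# crude global Lipschitz upper bound: product of layer spectral norms
L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

def alarm(window, x_next, noise_bound=0.1, eps=0.05):
    # residual-based detector; the threshold is inflated by L * eps so the
    # detector stays silent under input perturbations of size up to eps
    residual = abs(x_next - predict(window))
    return residual > noise_bound + L * eps
```

Tuning the margin `L * eps` trades off false alarms under benign input variation against sensitivity to genuine attacks or faults.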
no code implementations • 10 Dec 2020 • Navid Hashemi, Justin Ruths, Mahyar Fazlyab
Abstracting neural networks by the constraints they impose on their inputs and outputs is useful both for analyzing neural network classifiers and for deriving optimization-based algorithms that certify the stability and robustness of feedback systems involving neural networks.
no code implementations • 15 Jun 2020 • Navid Hashemi, Justin Ruths
General results on convex bodies are reviewed and used to derive an exact closed-form parametric formula for the boundary of the geometric (Minkowski) sum of $k$ ellipsoids in $n$-dimensional Euclidean space.
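One standard way to state such a parametric boundary (a sketch of the support-function argument, not necessarily the paper's exact formula): for ellipsoids E_i = { x : (x − c_i)ᵀ Q_i⁻¹ (x − c_i) ≤ 1 } with Q_i symmetric positive definite, the support point of E_i in unit direction n is c_i + Q_i n / √(nᵀ Q_i n), and support points add under Minkowski sums, giving a closed-form boundary point for every outward normal direction.

```python
import numpy as np

def minkowski_boundary_point(centers, shapes, n):
    """Boundary point of the Minkowski sum of ellipsoids
    E_i = { x : (x - c_i)^T Q_i^{-1} (x - c_i) <= 1 },
    parametrized by the unit outward normal n: the sum of the
    per-ellipsoid support points c_i + Q_i n / sqrt(n^T Q_i n)."""
    n = n / np.linalg.norm(n)
    x = np.zeros_like(n, dtype=float)
    for c, Q in zip(centers, shapes):
        x = x + c + (Q @ n) / np.sqrt(n @ Q @ n)
    return x

# sanity check: two concentric disks of radius 1 and 2 sum to one of radius 3
centers = [np.zeros(2), np.zeros(2)]
shapes = [np.eye(2), 4.0 * np.eye(2)]
p = minkowski_boundary_point(centers, shapes, np.array([1.0, 0.0]))
print(p)  # [3. 0.]
```

Sweeping n over the unit sphere traces out the entire boundary of the sum, which is exactly what a parametric closed-form formula in n dimensions provides.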
no code implementations • 15 Jun 2020 • Babak Sakhaei, Mohammad Durali, Navid Hashemi
The procedure reliably ranks vibration transmission paths in a vehicle and is an effective tool for diagnosing vibration problems inside the vehicle.