no code implementations • 24 Nov 2024 • Hesameddin Mohammadi, Mohammad Tinati, Stephen Tu, Mahdi Soltanolkotabi, Mihailo R. Jovanović
We demonstrate that the Schur complement to a principal eigenspace of the target matrix is governed by an autonomous system that is decoupled from the rest of the dynamics.
no code implementations • 30 Sep 2024 • Wuwei Wu, Jie Chen, Mihailo R. Jovanović, Tryphon T. Georgiou
The link between first-order optimization methods and robust control theory sheds new light on the limits of algorithmic performance for such methods, and suggests a new framework in which similar computational problems can be systematically studied and algorithms optimized.
no code implementations • 18 Sep 2024 • Ibrahim K. Ozaslan, Mihailo R. Jovanović
The development of finite/fixed-time stable optimization algorithms typically involves the study of specific problem instances.
no code implementations • 28 Aug 2024 • Ibrahim K. Ozaslan, Panagiotis Patrinos, Mihailo R. Jovanović
We examine stability properties of primal-dual gradient flow dynamics for composite convex optimization problems with multiple, possibly nonsmooth, terms in the objective function under the generalized consensus constraint.
no code implementations • 30 Jul 2024 • Ibrahim K. Ozaslan, Mihailo R. Jovanović
We examine convergence properties of continuous-time variants of accelerated Forward-Backward (FB) and Douglas-Rachford (DR) splitting algorithms for nonsmooth composite optimization problems.
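The entry above concerns continuous-time accelerated variants; as background, a minimal discrete-time forward-backward (proximal gradient) iteration for a composite problem can be sketched as follows. The lasso-style objective, step size, and data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (the "backward" step).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(A, b, lam, step, iters=500):
    # Minimize (1/2)||Ax - b||^2 + lam*||x||_1 by alternating a
    # forward (gradient) step on the smooth part with a backward
    # (proximal) step on the nonsmooth part.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

With `A = I`, the iteration reaches the closed-form solution `soft_threshold(b, lam)` in one step, which makes the sketch easy to sanity-check.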
no code implementations • 31 May 2023 • Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanović
We examine online safe multi-agent reinforcement learning using constrained Markov games in which agents compete by maximizing their expected total rewards under a constraint on expected total utilities.
Multi-agent Reinforcement Learning, Reinforcement Learning, +2
no code implementations • 24 Sep 2022 • Hesameddin Mohammadi, Meisam Razaviyayn, Mihailo R. Jovanović
We study momentum-based first-order optimization algorithms in which the iterations utilize information from the two previous steps and are subject to an additive white noise.
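A minimal sketch of the two-step momentum iteration with additive white noise in the gradient evaluation may make the setup concrete; the parameter values and test function below are illustrative assumptions, not the paper's.

```python
import numpy as np

def noisy_heavy_ball(grad, x0, alpha, beta, sigma, iters, rng):
    # Two-step momentum iteration
    #   x_{k+1} = x_k - alpha*(grad(x_k) + w_k) + beta*(x_k - x_{k-1}),
    # where w_k is additive white noise corrupting the gradient.
    x_prev, x = x0, x0
    for _ in range(iters):
        w = sigma * rng.standard_normal(x.shape)
        x, x_prev = x - alpha * (grad(x) + w) + beta * (x - x_prev), x
    return x
```

Setting `sigma=0` recovers the noiseless heavy-ball method, which on a strongly convex quadratic converges linearly for suitable `alpha`, `beta`.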
no code implementations • 6 Jun 2022 • Dongsheng Ding, Kaiqing Zhang, Jiali Duan, Tamer Başar, Mihailo R. Jovanović
We study sequential decision making problems aimed at maximizing the expected total reward while satisfying a constraint on the expected total utility.
no code implementations • 8 Feb 2022 • Dongsheng Ding, Chen-Yu Wei, Kaiqing Zhang, Mihailo R. Jovanović
When there is no uncertainty in the gradient evaluation, we show that our algorithm finds an $\epsilon$-Nash equilibrium with $O(1/\epsilon^2)$ iteration complexity which does not explicitly depend on the state space size.
Multi-agent Reinforcement Learning, Policy Gradient Methods, +1
no code implementations • 14 Mar 2021 • Hesameddin Mohammadi, Samantha Samuelson, Mihailo R. Jovanović
For convex quadratic problems, we employ tools from linear systems theory to show that transient growth arises from the presence of non-normal dynamics.
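The transient-growth mechanism can be illustrated with a toy non-normal linear system: a Schur-stable matrix (both eigenvalues strictly inside the unit circle) whose trajectories nonetheless grow by orders of magnitude before decaying. The matrix below is an illustrative assumption.

```python
import numpy as np

# Stable but non-normal dynamics x_{k+1} = A x_k: eigenvalues are
# both 0.9, yet the large off-diagonal entry couples the states and
# produces large transient amplification of ||x_k||.
A = np.array([[0.9, 50.0],
              [0.0,  0.9]])

x = np.array([0.0, 1.0])
norms = []
for _ in range(150):
    norms.append(np.linalg.norm(x))
    x = A @ x
```

Here `x_k = (50*k*0.9**(k-1), 0.9**k)`, so the norm peaks near `k = 10` at roughly 190 before eventually decaying, even though the spectrum alone predicts monotone-looking decay rates.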
no code implementations • 25 Jan 2021 • Luca Ballotta, Mihailo R. Jovanović, Luca Schenato
We study minimum-variance feedback-control design for a networked control system with retarded dynamics, where inter-agent communication is subject to latency.
no code implementations • 9 May 2020 • Gokul Hariharan, Satish Kumar, Mihailo R. Jovanović
Modal and nonmodal analyses of fluid flows provide fundamental insight into the early stages of transition to turbulence.
Fluid Dynamics, Numerical Analysis, Analysis of PDEs, Dynamical Systems, Optimization and Control
no code implementations • 1 Mar 2020 • Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanović
To this end, we present an Optimistic Primal-Dual Proximal Policy Optimization (OPDOP) algorithm, where the value function is estimated by combining least-squares policy evaluation with an additional bonus term for safe exploration.
no code implementations • 26 Dec 2019 • Hesameddin Mohammadi, Armin Zare, Mahdi Soltanolkotabi, Mihailo R. Jovanović
Model-free reinforcement learning attempts to find an optimal control action for an unknown dynamical system by directly searching over the parameter space of controllers.
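As a toy illustration of such direct policy search, the sketch below runs a two-point random-search scheme over a scalar feedback gain for a simulated linear system; the dynamics, cost weights, and search parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lqr_cost(k, a=0.9, b=1.0, q=1.0, r=1.0, x0=1.0, T=200):
    # Simulated finite-horizon quadratic cost of the feedback law
    # u = -k x for the scalar system x_{t+1} = a x_t + b u_t.
    x, J = x0, 0.0
    for _ in range(T):
        u = -k * x
        J += q * x * x + r * u * u
        x = a * x + b * u
    return J

def random_search(cost, k0, sigma=0.05, step=0.002, iters=2000, seed=0):
    # Model-free two-point random search: probe the cost along a random
    # direction and descend along the resulting gradient estimate.
    rng = np.random.default_rng(seed)
    k = k0
    for _ in range(iters):
        d = rng.choice([-1.0, 1.0])
        g = (cost(k + sigma * d) - cost(k - sigma * d)) / (2.0 * sigma) * d
        k -= step * g
    return k
```

For these numbers the scalar Riccati equation gives the optimal gain k* ≈ 0.538, which the search approaches using only cost evaluations, never the model (a, b).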
no code implementations • 2 Oct 2019 • Dongsheng Ding, Mihailo R. Jovanović
For a class of nonsmooth composite optimization problems with linear equality constraints, we utilize a Lyapunov-based approach to establish the global exponential stability of the primal-dual gradient flow dynamics based on the proximal augmented Lagrangian.
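The paper treats nonsmooth problems via the proximal augmented Lagrangian; the sketch below illustrates only the smooth, equality-constrained special case, with a forward-Euler discretization of the primal-dual gradient flow on the augmented Lagrangian. All problem data are illustrative assumptions.

```python
import numpy as np

def primal_dual_flow(a, mu=1.0, dt=0.05, steps=4000):
    # Forward-Euler discretization of primal-dual gradient flow for
    #   minimize (1/2)||x||^2  subject to  a^T x = 1,
    # using the augmented Lagrangian
    #   L(x, y) = (1/2)||x||^2 + y*(a^T x - 1) + (mu/2)*(a^T x - 1)^2.
    x = np.zeros_like(a)
    y = 0.0
    for _ in range(steps):
        r = a @ x - 1.0                      # constraint residual
        x = x - dt * (x + (y + mu * r) * a)  # primal gradient descent
        y = y + dt * r                       # dual gradient ascent
    return x, y
```

The equilibrium of these dynamics is exactly the KKT point: for `a = (1, 1)` that is `x* = (0.5, 0.5)`, `y* = -0.5`, and the augmented term `mu` provides the damping that makes the flow exponentially stable rather than merely oscillatory.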
no code implementations • 26 Aug 2019 • Armin Zare, Tryphon T. Georgiou, Mihailo R. Jovanović
Drawing on this abundance of data, dynamical models can be constructed to reproduce structural and statistical features of turbulent flows, opening the way to the design of effective model-based flow control strategies.
no code implementations • 23 Aug 2019 • Sepideh Hassan-Moghaddam, Mihailo R. Jovanović
In our analysis, we use the fact that these algorithms can be interpreted as variable-metric gradient methods on suitable envelopes, and we exploit structural properties of the nonlinear terms that arise from the gradient of the smooth part of the objective function and from the proximal operator associated with the nonsmooth regularizer.
no code implementations • 7 Aug 2019 • Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanović
We study the policy evaluation problem in multi-agent reinforcement learning where a group of agents, with jointly observed states and private local actions and rewards, collaborate to learn the value function of a given policy via local computation and communication over a connected undirected network.
Multi-agent Reinforcement Learning, Reinforcement Learning, +1
no code implementations • 27 May 2019 • Hesameddin Mohammadi, Meisam Razaviyayn, Mihailo R. Jovanović
We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation.
no code implementations • 4 Jul 2018 • Armin Zare, Hesameddin Mohammadi, Neil K. Dhingra, Tryphon T. Georgiou, Mihailo R. Jovanović
Several problems in modeling and control of stochastically-driven dynamical systems can be cast as regularized semi-definite programs.
no code implementations • 5 Sep 2017 • Neil K. Dhingra, Sei Zhen Khong, Mihailo R. Jovanović
We develop a second-order primal-dual method for optimization problems in which the objective function is given by the sum of a strongly convex twice-differentiable term and a possibly nondifferentiable convex regularizer.