Search Results for author: Geir Dullerud

Found 8 papers, 1 paper with code

Capabilities of Large Language Models in Control Engineering: A Benchmark Study on GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra

no code implementations • 4 Apr 2024 • Darioush Kevian, Usman Syed, Xingang Guo, Aaron Havens, Geir Dullerud, Peter Seiler, Lianhui Qin, Bin Hu

In this paper, we explore the capabilities of state-of-the-art large language models (LLMs) such as GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra in solving undergraduate-level control problems.

Model-Free $\mu$-Synthesis: A Nonsmooth Optimization Perspective

no code implementations • 18 Feb 2024 • Darioush Keivan, Xingang Guo, Peter Seiler, Geir Dullerud, Bin Hu

Built on this policy optimization perspective, our paper extends these subgradient-based search methods to a model-free setting.
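For intuition, here is a minimal Python sketch (not the paper's algorithm) of the standard way a subgradient-style search is made model-free: the closed-loop cost is treated as a black box and its (sub)gradient is approximated from two-point function evaluations. The cost oracle `J_oracle` is a hypothetical placeholder.

```python
import numpy as np

def zeroth_order_step(K, J_oracle, radius=0.01, step=1e-3, n_samples=20):
    """One smoothed (sub)gradient estimate and descent step on the gain matrix K."""
    grad_est = np.zeros_like(K)
    for _ in range(n_samples):
        U = np.random.randn(*K.shape)              # random search direction
        U /= np.linalg.norm(U)
        delta = J_oracle(K + radius * U) - J_oracle(K - radius * U)
        grad_est += (delta / (2.0 * radius)) * U   # two-point difference estimate
    grad_est /= n_samples
    return K - step * grad_est                     # descend the estimated (sub)gradient
```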

Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control

no code implementations • 3 Jan 2022 • Aaron Havens, Darioush Keivan, Peter Seiler, Geir Dullerud, Bin Hu

We show that the ROA analysis can be approximated as a constrained maximization problem whose goal is to find the worst-case initial condition which shifts the terminal state the most.
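A hedged sketch of that constrained maximization, in the spirit of a PGD attack: projected gradient ascent over the initial condition x0 within a ball of radius R, maximizing how far the terminal state lands from the origin. The simulator `rollout` is a hypothetical black box, and gradients are finite-differenced for simplicity.

```python
import numpy as np

def worst_case_x0(rollout, x0, R=1.0, step=0.1, iters=100, eps=1e-4):
    """Projected gradient ascent on ||x_T||^2 over initial conditions with ||x0|| <= R."""
    for _ in range(iters):
        base = np.linalg.norm(rollout(x0)) ** 2
        # finite-difference gradient of the terminal-state norm w.r.t. x0
        grad = np.array([
            (np.linalg.norm(rollout(x0 + eps * e)) ** 2 - base) / eps
            for e in np.eye(len(x0))
        ])
        x0 = x0 + step * grad          # ascent step: shift the terminal state the most
        nrm = np.linalg.norm(x0)
        if nrm > R:                    # project back onto the constraint set
            x0 = R * x0 / nrm
    return x0
```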

Model-Free $\mu$ Synthesis via Adversarial Reinforcement Learning

no code implementations • 30 Nov 2021 • Darioush Keivan, Aaron Havens, Peter Seiler, Geir Dullerud, Bin Hu

We build a connection between robust adversarial RL and $\mu$ synthesis, and develop a model-free version of the well-known $DK$-iteration for solving state-feedback $\mu$ synthesis with static $D$-scaling.

Reinforcement Learning (RL)
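The structure referred to above can be sketched as follows (a schematic, not the paper's implementation): alternate between a K-step that synthesizes a state-feedback gain for a fixed static D-scaling and a D-step that refits the scaling for the fixed closed loop. `synthesize_K` and `fit_D` are hypothetical subroutines; in the model-free setting the K-step would be an adversarial RL / policy-search routine.

```python
def dk_iteration(synthesize_K, fit_D, D_init, n_rounds=10):
    """Skeleton of DK-iteration with a static D-scaling (schematic only)."""
    D = D_init
    for _ in range(n_rounds):
        K = synthesize_K(D)   # K-step: controller for the D-scaled closed loop
        D = fit_D(K)          # D-step: best static scaling for the fixed gain K
    return K, D
```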

Policy Optimization for Markovian Jump Linear Quadratic Control: Gradient-Based Methods and Global Convergence

no code implementations • 24 Nov 2020 • Joao Paulo Jansch-Porto, Bin Hu, Geir Dullerud

In this paper, we investigate the global convergence of gradient-based policy optimization methods for quadratic optimal control of discrete-time Markovian jump linear systems (MJLS).

Policy Gradient Methods
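To make the objective concrete, here is a minimal sketch (assumed notation, not the authors' code) of the quadratic cost being optimized: mode-dependent gains K[i] applied to x_{t+1} = A[w] x + B[w] u with u = -K[w] x, where the mode w follows a Markov chain with transition matrix P. Gradient-based methods then descend this cost over the stacked gains.

```python
import numpy as np

def mjls_lqr_cost(A, B, Q, R, K, P, x0, w0, T=200, seed=0):
    """Sampled quadratic cost of an MJLS under mode-dependent state feedback."""
    rng = np.random.default_rng(seed)
    x, w, cost = np.array(x0, dtype=float), w0, 0.0
    for _ in range(T):
        u = -K[w] @ x
        cost += x @ Q @ x + u @ R @ u
        x = A[w] @ x + B[w] @ u
        w = rng.choice(len(P), p=P[w])   # draw the next mode from the Markov chain
    return cost
```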

Convergence Guarantees of Policy Optimization Methods for Markovian Jump Linear Systems

no code implementations • 10 Feb 2020 • Joao Paulo Jansch-Porto, Bin Hu, Geir Dullerud

Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning.

Verification and Parameter Synthesis for Stochastic Systems using Optimistic Optimization

no code implementations • 4 Nov 2019 • Negin Musavi, Dawei Sun, Sayan Mitra, Geir Dullerud, Sanjay Shakkottai

As a consequence, we obtain theoretical regret bounds on the sample efficiency of our solution that depend on key problem parameters such as smoothness, near-optimality dimension, and batch size.
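As a toy illustration of the optimistic-optimization idea (a deterministic, one-dimensional variant; the paper's batched stochastic setting is more involved), cells of the parameter interval are scored by their sampled value plus a smoothness bonus L * width, and the most optimistic cell is repeatedly refined. The smoothness constant L is an assumed input here.

```python
import heapq

def optimistic_maximize(f, lo, hi, L=1.0, budget=100):
    """Refine the most optimistic interval (value + L * width) to maximize f on [lo, hi]."""
    mid = 0.5 * (lo + hi)
    v0 = f(mid)
    heap = [(-(v0 + L * (hi - lo)), lo, hi, mid, v0)]   # max-heap via negated scores
    best_x, best_val = mid, v0
    for _ in range(budget):
        _, a, b, m, val = heapq.heappop(heap)           # most optimistic cell
        if val > best_val:
            best_x, best_val = m, val
        for a2, b2 in ((a, m), (m, b)):                 # split the cell and score children
            m2 = 0.5 * (a2 + b2)
            v2 = f(m2)
            heapq.heappush(heap, (-(v2 + L * (b2 - a2)), a2, b2, m2, v2))
    return best_x, best_val
```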
