Search Results for author: Daniel E. Quevedo

Found 19 papers, 0 papers with code

Extending direct data-driven predictive control towards systems with finite control sets

no code implementations · 3 Apr 2024 · Manuel Klädtke, Moritz Schulze Darup, Daniel E. Quevedo

We test the reformulation on a popular electrical drive example and compare the computation times of sphere decoding FCS-DPC with those of an enumeration-based method and an MIQP method.

Model Predictive Control
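
The entry above contrasts sphere decoding FCS-DPC with an enumeration-based baseline. The paper's reformulation and its sphere decoding solver are not reproduced here; the snippet below is only a minimal sketch of the generic enumeration approach to finite-control-set predictive control, with an illustrative plant, horizon, and cost that are assumptions rather than values from the paper.

```python
import itertools
import numpy as np

# Illustrative 2-state plant and a finite input alphabet (e.g. inverter switch levels).
# All numbers here are placeholders, not values from the paper.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
U_SET = [-1.0, 0.0, 1.0]   # finite control set
HORIZON = 4
Q = np.eye(2)              # state weight
R = 0.1                    # input weight


def enumeration_fcs_mpc(x0):
    """Brute-force FCS-MPC: evaluate every input sequence over the horizon
    and return the first input of the cheapest one."""
    best_cost, best_u0 = np.inf, None
    for seq in itertools.product(U_SET, repeat=HORIZON):
        x, cost = x0.copy(), 0.0
        for u in seq:
            x = A @ x + B.flatten() * u          # predicted state update
            cost += x @ Q @ x + R * u * u        # stage cost
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0, best_cost


if __name__ == "__main__":
    u0, cost = enumeration_fcs_mpc(np.array([1.0, -0.5]))
    print(f"applied input: {u0}, predicted cost: {cost:.3f}")
```

Enumeration scales as |U|^N in the horizon length, which is why solvers such as sphere decoding and MIQP formulations are the relevant comparison in the paper.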

Remote State Estimation with Privacy Against Active Eavesdroppers

no code implementations · 17 Aug 2023 · Matthew Crimson, Justin M. Kennedy, Daniel E. Quevedo

To maintain state confidentiality, we propose an encoding scheme that is activated on the detection of an eavesdropper.

Ensemble Nonlinear Model Predictive Control for Residential Solar-Battery Energy Management

no code implementations · 18 Mar 2023 · Yang Li, D. Mahinda Vilathgamuwa, Daniel E. Quevedo, Chih Feng Lee, Changfu Zou

In a dynamic distribution market environment, residential prosumers with solar power generation and battery energy storage devices can flexibly interact with the power grid via power exchange.

Energy Management +1

Structure-Enhanced DRL for Optimal Transmission Scheduling

no code implementations · 24 Dec 2022 · Jiazheng Chen, Wanchun Liu, Daniel E. Quevedo, Saeed R. Khosravirad, Yonghui Li, Branka Vucetic

In addition, we show that the derived structural properties exist in a wide range of dynamic scheduling problems that go beyond remote state estimation.

Scheduling

Innovation-Based Remote State Estimation Secrecy with no Acknowledgments

no code implementations · 16 Dec 2022 · Justin M. Kennedy, Jason J. Ford, Daniel E. Quevedo, Falko Dressler

Aiming to achieve a reliable state estimate for a legitimate estimator while ensuring secrecy, we propose a secrecy encoding scheme without the need for packet receipt acknowledgments.

Scheduling

Remote State Estimation with Privacy Against Eavesdroppers

no code implementations · 24 Nov 2022 · Matthew Crimson, Justin M. Kennedy, Daniel E. Quevedo

A remote legitimate user estimates the state of a linear plant from the state information received from a sensor via an insecure and unreliable network.

Deep Learning for Wireless Networked Systems: a joint Estimation-Control-Scheduling Approach

no code implementations · 3 Oct 2022 · Zihuai Zhao, Wanchun Liu, Daniel E. Quevedo, Yonghui Li, Branka Vucetic

Wireless networked control systems (WNCS), which connect sensors, controllers, and actuators via wireless communications, are a key enabling technology for highly scalable and low-cost deployment of control systems in the Industry 4.0 era.

Scheduling

Bayesian Quickest Change Detection of an Intruder in Acknowledgments for Private Remote State Estimation

no code implementations · 18 Jul 2022 · Justin M. Kennedy, Jason J. Ford, Daniel E. Quevedo

For geographically separated cyber-physical systems, state estimation at a remote monitoring or control site is important to ensure stability and reliability of the system.

Change Detection
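
The entry above concerns Bayesian quickest change detection. The paper's specific intruder-detection scheme is not reproduced here; the sketch below shows only the classical Shiryaev posterior recursion that such detectors typically build on, with an assumed Gaussian observation model, geometric change prior, and threshold chosen for illustration.

```python
import numpy as np

# Minimal Shiryaev-style quickest change detector (generic textbook form,
# not the specific detector of the paper). Observations are N(0,1) before
# the change and N(MU1,1) after; RHO is the geometric prior on the change time.
RHO = 0.05
MU1 = 1.0
THRESHOLD = 0.95


def gauss_pdf(y, mu):
    """Unit-variance Gaussian density."""
    return np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2 * np.pi)


def shiryaev_posterior(observations):
    """Return the posterior probability that the change has occurred,
    updated recursively after each observation."""
    pi = 0.0
    history = []
    for y in observations:
        pred = pi + (1 - pi) * RHO                      # prior update: change may occur now
        l1, l0 = gauss_pdf(y, MU1), gauss_pdf(y, 0.0)
        pi = pred * l1 / (pred * l1 + (1 - pred) * l0)  # Bayes update with the new sample
        history.append(pi)
    return np.array(history)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ys = np.concatenate([rng.normal(0, 1, 40), rng.normal(MU1, 1, 20)])  # change at k = 40
    post = shiryaev_posterior(ys)
    alarm = int(np.argmax(post > THRESHOLD)) if np.any(post > THRESHOLD) else None
    print("alarm raised at sample:", alarm)
```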

Stability Enforced Bandit Algorithms for Channel Selection in Remote State Estimation of Gauss-Markov Processes

no code implementations · 20 May 2022 · Alex S. Leong, Daniel E. Quevedo, Wanchun Liu

In this paper we consider the problem of remote state estimation of a Gauss-Markov process, where a sensor can, at each discrete time instant, transmit on one out of M different communication channels.

Multi-Armed Bandits
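
The entry above selects one of M channels at each time step and is tagged as a multi-armed bandit problem. The stability-enforcing modifications proposed in the paper are not reproduced here; the sketch below is only a plain UCB1 channel-selection loop, with assumed Bernoulli packet-success probabilities and packet success used as the reward.

```python
import numpy as np

# Minimal UCB1 channel selection: at each step pick one of M channels,
# observe packet success/failure, and treat success as the reward.
# Channel success probabilities below are placeholders, not from the paper.
TRUE_SUCCESS = [0.3, 0.5, 0.8, 0.6]   # unknown to the algorithm
STEPS = 2000


def ucb1_channel_selection(rng):
    m = len(TRUE_SUCCESS)
    counts = np.zeros(m)
    means = np.zeros(m)
    for t in range(1, STEPS + 1):
        if t <= m:
            ch = t - 1                                   # play each channel once
        else:
            bonus = np.sqrt(2 * np.log(t) / counts)      # exploration bonus
            ch = int(np.argmax(means + bonus))
        reward = float(rng.random() < TRUE_SUCCESS[ch])  # 1 if the packet got through
        counts[ch] += 1
        means[ch] += (reward - means[ch]) / counts[ch]   # running average of rewards
    return counts, means


if __name__ == "__main__":
    counts, means = ucb1_channel_selection(np.random.default_rng(1))
    print("plays per channel:", counts.astype(int))
    print("estimated success rates:", np.round(means, 2))
```

Plain UCB1 optimizes average reward and does not by itself guarantee estimator stability, which is the gap the stability-enforced algorithms in the paper address.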

Stability Conditions for Remote State Estimation of Multiple Systems over Semi-Markov Fading Channels

no code implementations · 31 Mar 2022 · Wanchun Liu, Daniel E. Quevedo, Branka Vucetic, Yonghui Li

In particular, we show that, from a system stability perspective, fast fading channels may be preferable to slow fading ones.

Transmission power policies for energy-efficient wireless control of nonlinear systems

no code implementations · 18 Nov 2021 · Vineeth S. Varma, Romain Postoyan, Daniel E. Quevedo, Irinel-Constantin Morarescu

We present a controller and transmission policy design procedure for nonlinear wireless networked control systems.

Deep Reinforcement Learning for Wireless Scheduling in Distributed Networked Control

no code implementations · 26 Sep 2021 · Wanchun Liu, Kang Huang, Daniel E. Quevedo, Branka Vucetic, Yonghui Li

We consider a joint uplink and downlink scheduling problem of a fully distributed wireless networked control system (WNCS) with a limited number of frequency channels.

Reinforcement Learning (RL) +1

Stability Conditions for Remote State Estimation of Multiple Systems over Multiple Markov Fading Channels

no code implementations · 9 Apr 2021 · Wanchun Liu, Daniel E. Quevedo, Karl H. Johansson, Branka Vucetic, Yonghui Li

We investigate the stability conditions for remote state estimation of multiple linear time-invariant (LTI) systems over multiple wireless time-varying communication channels.

Scheduling

A Jointly Optimal Design of Control and Scheduling in Networked Systems under Denial-of-Service Attacks

no code implementations · 10 Mar 2021 · Jingyi Lu, Daniel E. Quevedo

We consider the joint design of control and scheduling under stochastic Denial-of-Service (DoS) attacks in the context of networked control systems.

Q-Learning · Scheduling
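
The entry above is tagged with Q-learning and scheduling. Its DoS attack model and joint control/scheduling co-design are not reproduced here; the sketch below is only a generic tabular Q-learning loop for a toy transmit-or-idle decision, with Bernoulli packet losses standing in for a stochastic attacker and all parameters chosen purely for illustration.

```python
import numpy as np

# Toy MDP: the state is the "age" of the freshest delivered measurement
# (capped), the action is whether to transmit, and transmissions are lost
# with probability P_LOSS (a stand-in for a stochastic DoS attack).
MAX_AGE = 5
P_LOSS = 0.4
TX_COST = 0.5
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1


def step(age, action, rng):
    """One environment step: returns (next_age, reward)."""
    delivered = action == 1 and rng.random() > P_LOSS
    next_age = 1 if delivered else min(age + 1, MAX_AGE)
    reward = -float(next_age) - TX_COST * action      # penalize staleness and energy use
    return next_age, reward


def train(episodes=200, horizon=100, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((MAX_AGE + 1, 2))                    # Q[age, action]
    for _ in range(episodes):
        age = 1
        for _ in range(horizon):
            a = int(rng.integers(2)) if rng.random() < EPS else int(np.argmax(q[age]))
            nxt, r = step(age, a, rng)
            q[age, a] += ALPHA * (r + GAMMA * np.max(q[nxt]) - q[age, a])  # Q-learning update
            age = nxt
    return q


if __name__ == "__main__":
    q = train()
    print("greedy action per age (0 = idle, 1 = transmit):", np.argmax(q[1:], axis=1))
```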

Remote State Estimation with Smart Sensors over Markov Fading Channels

no code implementations · 16 May 2020 · Wanchun Liu, Daniel E. Quevedo, Yonghui Li, Karl Henrik Johansson, Branka Vucetic

A smart sensor forwards its local state estimate to a remote estimator over a time-correlated $M$-state Markov fading channel, where the packet drop probability is time-varying and depends on the current fading channel state.
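
The snippet above states the channel model concretely: a time-correlated Markov fading channel whose drop probability depends on the current channel state. The sketch below simulates that kind of setup with a two-state channel and the standard open-loop covariance recursion at the remote estimator on dropouts; the transition matrix, drop probabilities, and plant are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch: a smart sensor sends its local estimate over a Markov fading
# channel; the packet drop probability depends on the current channel state.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # double-integrator-like plant dynamics
W = 0.1 * np.eye(2)                 # process noise covariance
P0 = np.eye(2)                      # local estimate error covariance (assumed steady)
CHANNEL_TRANSITIONS = np.array([[0.9, 0.1],    # P(next channel state | current state)
                                [0.3, 0.7]])
DROP_PROB = [0.05, 0.6]             # drop probability in "good" / "bad" channel state


def simulate(steps=50, seed=0):
    rng = np.random.default_rng(seed)
    channel = 0
    P = P0.copy()                   # remote estimation error covariance
    trace_log = []
    for _ in range(steps):
        channel = rng.choice(2, p=CHANNEL_TRANSITIONS[channel])   # Markov channel evolves
        received = rng.random() > DROP_PROB[channel]
        if received:
            P = P0.copy()           # remote estimator adopts the fresh local estimate
        else:
            P = A @ P @ A.T + W     # open-loop prediction during a dropout
        trace_log.append(np.trace(P))
    return np.array(trace_log)


if __name__ == "__main__":
    tr = simulate()
    print("mean / max error covariance trace:", round(tr.mean(), 2), round(tr.max(), 2))
```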

DeepCAS: A Deep Reinforcement Learning Algorithm for Control-Aware Scheduling

no code implementations · 8 Mar 2018 · Burak Demirel, Arunselvan Ramaswamy, Daniel E. Quevedo, Holger Karl

The main contribution of this paper is to develop a deep reinforcement learning-based control-aware scheduling (DeepCAS) algorithm to tackle these issues.

Reinforcement Learning (RL) +1
