Search Results for author: Yuanyuan Shi

Found 37 papers, 18 papers with code

Ventilation and Temperature Control for Energy-efficient and Healthy Buildings: A Differentiable PDE Approach

no code implementations • 13 Mar 2024 • Yuexin Bian, Xiaohan Fu, Rajesh K. Gupta, Yuanyuan Shi

In this paper, we introduce a novel framework for building learning and control, focusing on ventilation and thermal management to enhance energy efficiency.

Adaptive Neural-Operator Backstepping Control of a Benchmark Hyperbolic PDE

1 code implementation • 15 Jan 2024 • Maxence Lamarque, Luke Bhan, Yuanyuan Shi, Miroslav Krstic

This requires an adaptive approach to PDE control, i.e., an estimation of the plant coefficients conducted concurrently with control, where a separate PDE for the gain kernel must be solved at each timestep upon the update in the plant coefficient function estimate.

Moving-Horizon Estimators for Hyperbolic and Parabolic PDEs in 1-D

no code implementations • 4 Jan 2024 • Luke Bhan, Yuanyuan Shi, Iasson Karafyllis, Miroslav Krstic, James B. Rawlings

In the paper we provide explicit formulae for MHEs for both hyperbolic and parabolic PDEs, as well as simulation results that illustrate theoretically guaranteed convergence of the MHEs.

Deriving Loss Function for Value-oriented Renewable Energy Forecasting

no code implementations • 1 Oct 2023 • Yufan Zhang, Honglin Wen, Yuexin Bian, Yuanyuan Shi

By integrating it into the upper-level objective for minimizing expected operation cost, we convert the bilevel problem to a single-level one and derive the loss function for training the model.
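A toy illustration of the value-oriented idea (the demand level, prices, and function name here are hypothetical, not from the paper): the quality of a renewable forecast is scored by the two-stage operation cost it induces, rather than by its squared error.

```python
# Hypothetical two-stage dispatch cost used as a forecast "loss" (illustrative
# numbers, not the paper's market model).
def operation_cost(forecast, actual, da_price=20.0, up_price=50.0, down_price=5.0):
    demand = 10.0
    # Day ahead: commit conventional energy to cover demand net of the forecast.
    da_energy = max(0.0, demand - forecast)
    cost = da_price * da_energy
    # Real time: a shortfall is covered at up_price; a surplus is sold back
    # at the (lower) down_price.
    imbalance = (demand - actual) - da_energy
    if imbalance > 0:
        cost += up_price * imbalance
    else:
        cost += down_price * imbalance  # negative imbalance yields revenue
    return cost
```

With these numbers an over-forecast of 1 MWh costs more than an under-forecast of 1 MWh, so unlike mean squared error this loss is asymmetric, which is exactly what makes it "value-oriented."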

Value-oriented Renewable Energy Forecasting for Coordinated Energy Dispatch Problems at Two Stages

no code implementations • 2 Sep 2023 • Yufan Zhang, Mengshuo Jia, Honglin Wen, Yuanyuan Shi

To this end, we formulate the forecast model parameter estimation as a bilevel program at the training phase. The lower level solves the day-ahead and real-time energy dispatch problems with the forecasts as parameters; its optimal solutions are then returned to the upper level, which optimizes the model parameters given the contextual information and minimizes the expected operation cost of the two stages.

Online learning for robust voltage control under uncertain grid topology

1 code implementation • 29 Jun 2023 • Christopher Yeh, Jing Yu, Yuanyuan Shi, Adam Wierman

In this work, we combine a nested convex body chasing algorithm with a robust predictive controller to achieve provably finite-time convergence to safe voltage limits in the online setting where there is uncertainty in both the network topology as well as load and generation variations.

Predicting Strategic Energy Storage Behaviors

1 code implementation • 20 Jun 2023 • Yuexin Bian, Ningkun Zheng, Yang Zheng, Bolun Xu, Yuanyuan Shi

Energy storage systems are strategic participants in electricity markets that arbitrage price differences.

Leveraging Predictions in Power System Frequency Control: an Adaptive Approach

no code implementations • 20 May 2023 • Wenqi Cui, Guanya Shi, Yuanyuan Shi, Baosen Zhang

Ensuring the frequency stability of electric grids with increasing renewable resources is a key problem in power system operations.

Load Forecasting

Optimal Vehicle Charging in Bilevel Power-Traffic Networks via Charging Demand Function

no code implementations • 22 Apr 2023 • Yufan Zhang, Sujit Dey, Yuanyuan Shi

Specifically, the power network determines the charging price at various locations, while EVs on the traffic network optimize the charging power given the price, acting as price-takers.

Decision Making

Bridging Transient and Steady-State Performance in Voltage Control: A Reinforcement Learning Approach with Safe Gradient Flow

no code implementations • 20 Mar 2023 • Jie Feng, Wenqi Cui, Jorge Cortés, Yuanyuan Shi

Deep reinforcement learning approaches are becoming appealing for the design of nonlinear controllers for voltage control problems, but the lack of stability guarantees hinders their deployment in real-world scenarios.

Neural Operators of Backstepping Controller and Observer Gain Functions for Reaction-Diffusion PDEs

1 code implementation • 18 Mar 2023 • Miroslav Krstic, Luke Bhan, Yuanyuan Shi

The designs of gains for controllers and observers for PDEs, such as PDE backstepping, are mappings of system model functions into gain functions.

Operator learning, Scheduling

Neural Operators for Bypassing Gain and Control Computations in PDE Backstepping

1 code implementation • 28 Feb 2023 • Luke Bhan, Yuanyuan Shi, Miroslav Krstic

Whereas existing PDE backstepping requires a (one-time, offline) solution of an integral equation to find the gain kernel, the neural operator (NO) approach we propose learns the mapping from the plant PDE's functional coefficients to the kernel function, using a sufficiently large set of offline numerical solutions to the kernel integral equation, computed for a large enough number of the PDE model's different functional coefficients.

Scheduling

Machine Learning Accelerated PDE Backstepping Observers

no code implementations • 28 Nov 2022 • Yuanyuan Shi, Zongyi Li, Huan Yu, Drew Steeves, Anima Anandkumar, Miroslav Krstic

State estimation is important for a variety of tasks, from forecasting to substituting for unmeasured states in feedback controllers.

Computational Efficiency

BEAR: Physics-Principled Building Environment for Control and Reinforcement Learning

1 code implementation • 27 Nov 2022 • Chi Zhang, Yuanyuan Shi, Yize Chen

Recent advancements in reinforcement learning algorithms have opened doors for researchers to operate and optimize building energy management systems autonomously.

Energy Management +3

Energy Storage Price Arbitrage via Opportunity Value Function Prediction

no code implementations • 14 Nov 2022 • Ningkun Zheng, Xiaoxiang Liu, Bolun Xu, Yuanyuan Shi

This paper proposes a novel energy storage price arbitrage algorithm combining supervised learning with dynamic programming.
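The dynamic-programming half of such an arbitrage algorithm can be sketched in a few lines (a minimal illustration with a discretized state of charge and made-up battery parameters; the paper's supervised-learning component for predicting the opportunity value function is not shown):

```python
# Battery price arbitrage via backward dynamic programming over a discretized
# state of charge (SoC). Capacity, rate, and efficiency are illustrative.
def arbitrage_dp(prices, capacity=4, eff=0.9):
    """Maximize profit over the horizon: V[t][s] is the best profit from
    hour t onward with s MWh stored; one 1-MWh action per hour."""
    T = len(prices)
    V = [[0.0] * (capacity + 1) for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for s in range(capacity + 1):
            best = V[t + 1][s]  # idle
            if s < capacity:    # charge 1 MWh: pay price, inflated by losses
                best = max(best, -prices[t] / eff + V[t + 1][s + 1])
            if s > 0:           # discharge 1 MWh: earn price, net of losses
                best = max(best, prices[t] * eff + V[t + 1][s - 1])
            V[t][s] = best
    return V[0][0]  # start empty
```

V[t + 1][s] plays the role of the opportunity value of stored energy: the storage charges when the price (adjusted for efficiency) is below that marginal value and discharges when it is above.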

FI-ODE: Certifiably Robust Forward Invariance in Neural ODEs

1 code implementation • 30 Oct 2022 • Yujia Huang, Ivan Dario Jimenez Rodriguez, Huan Zhang, Yuanyuan Shi, Yisong Yue

Forward invariance is a long-studied property in control theory that is used to certify that a dynamical system stays within some pre-specified set of states for all time, and also admits robustness guarantees (e.g., the certificate holds under perturbations).

Adversarial Robustness, Continuous Control +1

Carbon-Aware EV Charging

1 code implementation • 26 Sep 2022 • Kai-Wen Cheng, Yuexin Bian, Yuanyuan Shi, Yize Chen

This paper examines the problem of optimizing the charging pattern of electric vehicles (EV) by taking real-time electricity grid carbon intensity into consideration.

Total Energy
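A greatly simplified version of carbon-aware charging (a greedy sketch under assumed inputs, not the paper's optimization): given an hourly carbon-intensity forecast, place the required charging energy into the cleanest hours.

```python
# Greedy carbon-aware EV charging sketch: intensity values and rate limits
# are illustrative; the real problem adds deadlines and battery dynamics.
def carbon_aware_schedule(intensity, energy_needed, max_rate=1.0):
    """Return per-hour charging amounts that minimize total emissions."""
    hours = sorted(range(len(intensity)), key=lambda h: intensity[h])
    schedule = [0.0] * len(intensity)
    remaining = energy_needed
    for h in hours:  # fill lowest-carbon-intensity hours first
        amount = min(max_rate, remaining)
        schedule[h] = amount
        remaining -= amount
        if remaining <= 0:
            break
    return schedule
```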

Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control

1 code implementation • 16 Sep 2022 • Jie Feng, Yuanyuan Shi, Guannan Qu, Steven H. Low, Anima Anandkumar, Adam Wierman

In this paper, we propose a stability-constrained reinforcement learning (RL) method for real-time voltage control, that guarantees system stability both during policy learning and deployment of the learned policy.

Reinforcement Learning (RL)

Robust Online Voltage Control with an Unknown Grid Topology

1 code implementation • 29 Jun 2022 • Christopher Yeh, Jing Yu, Yuanyuan Shi, Adam Wierman

Voltage control generally requires accurate information about the grid's topology in order to guarantee network stability.

KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems

no code implementations • 3 Jun 2022 • Sahin Lale, Yuanyuan Shi, Guannan Qu, Kamyar Azizzadenesheli, Adam Wierman, Anima Anandkumar

However, current reinforcement learning (RL) methods lack stabilization guarantees, which limits their applicability for the control of safety-critical systems.

Reinforcement Learning (RL)

Structured Neural-PI Control for Networked Systems: Stability and Steady-State Optimality Guarantees

1 code implementation • 1 Jun 2022 • Wenqi Cui, Yan Jiang, Baosen Zhang, Yuanyuan Shi

We explicitly characterize the stability conditions and engineer neural networks that satisfy them by design.
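One way to read "engineer neural networks that satisfy [stability conditions] by design" is through structural parameterizations whose guarantee holds for every parameter value, so no post-hoc verification is needed. A minimal sketch of that idea (illustrative only, not the paper's architecture): squaring raw parameters forces all weights to be nonnegative, and a nonnegative-weight combination of nondecreasing activations is itself nondecreasing.

```python
import random

# Sketch: a scalar network that is monotonically nondecreasing by construction.
# The class name and sizes are hypothetical; the point is that the property
# holds for ANY parameter values, so training cannot break it.
class MonotoneNet:
    def __init__(self, width=8, seed=0):
        rng = random.Random(seed)
        # Squaring makes every weight nonnegative regardless of the raw value.
        self.w1 = [rng.gauss(0, 1) ** 2 for _ in range(width)]
        self.b1 = [rng.gauss(0, 1) for _ in range(width)]
        self.w2 = [rng.gauss(0, 1) ** 2 for _ in range(width)]

    def __call__(self, x):
        # Nonnegative weights + nondecreasing ReLU => nondecreasing output.
        h = [max(0.0, w * x + b) for w, b in zip(self.w1, self.b1)]
        return sum(w * v for w, v in zip(self.w2, h))
```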

CEM-GD: Cross-Entropy Method with Gradient Descent Planner for Model-Based Reinforcement Learning

1 code implementation • 14 Dec 2021 • Kevin Huang, Sahin Lale, Ugo Rosolia, Yuanyuan Shi, Anima Anandkumar

It then uses the top trajectories as initialization for gradient descent and applies gradient updates to each of these trajectories to find the optimal action sequence.

Continuous Control, Model-based Reinforcement Learning +1
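The two-phase planner described above (sample with the cross-entropy method, then refine the elite trajectory by gradient descent) can be sketched on a toy 1-D system. Everything here is illustrative: the dynamics, cost, and hyperparameters are made up, and finite differences stand in for the autodiff gradients a model-based planner would use.

```python
import random

# Toy deterministic 1-D system x' = x + a with a quadratic tracking cost.
def rollout_cost(x0, actions):
    x, cost = x0, 0.0
    for a in actions:
        x = x + a
        cost += x * x + 0.1 * a * a
    return cost

def cem_gd(x0, horizon=5, iters=20, pop=64, elite=8, gd_steps=10, lr=0.05):
    """CEM search over action sequences, then gradient refinement of the best."""
    mu = [0.0] * horizon
    sigma = [1.0] * horizon
    for _ in range(iters):
        samples = [[random.gauss(mu[t], sigma[t]) for t in range(horizon)]
                   for _ in range(pop)]
        samples.sort(key=lambda a: rollout_cost(x0, a))  # ascending cost
        elites = samples[:elite]
        mu = [sum(e[t] for e in elites) / elite for t in range(horizon)]
        sigma = [max(1e-3, (sum((e[t] - mu[t]) ** 2 for e in elites) / elite) ** 0.5)
                 for t in range(horizon)]
    best = samples[0]
    eps = 1e-4  # finite-difference gradient refinement of the top trajectory
    for _ in range(gd_steps):
        base = rollout_cost(x0, best)
        grad = []
        for t in range(horizon):
            pert = list(best)
            pert[t] += eps
            grad.append((rollout_cost(x0, pert) - base) / eps)
        best = [a - lr * g for a, g in zip(best, grad)]
    return best, rollout_cost(x0, best)
```

The gradient phase is what distinguishes this from plain CEM: sampling gets near a good basin cheaply, and a handful of local gradient steps polishes the solution.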

SAVER: Safe Learning-Based Controller for Real-Time Voltage Regulation

no code implementations • 30 Nov 2021 • Yize Chen, Yuanyuan Shi, Daniel Arnold, Sean Peisert

Fast and safe voltage regulation algorithms can serve as fundamental schemes for achieving a high level of renewable penetration in the modern distribution power grids.

Polymatrix Competitive Gradient Descent

no code implementations • 16 Nov 2021 • Jeffrey Ma, Alistair Letcher, Florian Schäfer, Yuanyuan Shi, Anima Anandkumar

In this work we propose polymatrix competitive gradient descent (PCGD) as a method for solving general sum competitive optimization involving arbitrary numbers of agents.

Multi-agent Reinforcement Learning

Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds

1 code implementation • NeurIPS 2021 • Yujia Huang, Huan Zhang, Yuanyuan Shi, J. Zico Kolter, Anima Anandkumar

Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify robustness of a neural network by computing a global bound on its Lipschitz constant.

Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training

no code implementations • 18 Oct 2021 • Alexander Pan, Yongkyun Lee, Huan Zhang, Yize Chen, Yuanyuan Shi

Due to the proliferation of renewable energy and its intrinsic intermittency and stochasticity, current power systems face severe operational challenges.

Decision Making, Reinforcement Learning +1

Understanding the Safety Requirements for Learning-based Power Systems Operations

1 code implementation • 11 Oct 2021 • Yize Chen, Daniel Arnold, Yuanyuan Shi, Sean Peisert

Case studies performed on both voltage regulation and topology control tasks demonstrated the potential vulnerabilities of the standard reinforcement learning algorithms, and possible measures of machine learning robustness and security are discussed for power systems operation tasks.

BIG-bench Machine Learning, Decision Making +4

Stability Constrained Reinforcement Learning for Real-Time Voltage Control

no code implementations • 30 Sep 2021 • Yuanyuan Shi, Guannan Qu, Steven Low, Anima Anandkumar, Adam Wierman

Deep reinforcement learning (RL) has been recognized as a promising tool to address the challenges in real-time control of power systems.

Reinforcement Learning (RL)

End-to-End Demand Response Model Identification and Baseline Estimation with Deep Learning

no code implementations • 2 Sep 2021 • Yuanyuan Shi, Bolun Xu

This paper proposes a novel end-to-end deep learning framework that simultaneously identifies demand baselines and the incentive-based agent demand response model, from the net demand measurements and incentive signals.

Decision Making

Stable Online Control of Linear Time-Varying Systems

no code implementations • 29 Apr 2021 • Guannan Qu, Yuanyuan Shi, Sahin Lale, Anima Anandkumar, Adam Wierman

In this work, we propose an efficient online control algorithm, COvariance Constrained Online Linear Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a large class of LTV systems while also minimizing the control cost.

Multi-Agent Reinforcement Learning in Cournot Games

no code implementations • 14 Sep 2020 • Yuanyuan Shi, Baosen Zhang

This is the first result (to the best of our knowledge) on the convergence property of learning algorithms with continuous action spaces that do not fall in the no-regret class.

Continuous Control, Multi-agent Reinforcement Learning +2

Safe Reinforcement Learning of Control-Affine Systems with Vertex Networks

1 code implementation • 20 Mar 2020 • Liyuan Zheng, Yuanyuan Shi, Lillian J. Ratliff, Baosen Zhang

This paper focuses on finding reinforcement learning policies for control systems with hard state and action constraints.

Reinforcement Learning (RL) +1

Robust Reinforcement Learning for Continuous Control with Model Misspecification

no code implementations • ICLR 2020 • Daniel J. Mankowitz, Nir Levine, Rae Jeong, Yuanyuan Shi, Jackie Kay, Abbas Abdolmaleki, Jost Tobias Springenberg, Timothy Mann, Todd Hester, Martin Riedmiller

We provide a framework for incorporating robustness -- to perturbations in the transition dynamics which we refer to as model misspecification -- into continuous control Reinforcement Learning (RL) algorithms.

Continuous Control, Reinforcement Learning +1

Product Review Summarization by Exploiting Phrase Properties

no code implementations • COLING 2016 • Naitong Yu, Minlie Huang, Yuanyuan Shi, Xiaoyan Zhu

The main idea of our method is to leverage phrase properties to choose a subset of optimal phrases for generating the final summary.

Abstractive Text Summarization, Descriptive +2
