Search Results for author: Daniel Arnold

Found 10 papers, 1 paper with code

Augmented Digital Twin for Identification of Most Critical Cyberattacks in Industrial Systems

no code implementations • 7 Jun 2023 • Bruno Paes Leao, Jagannadh Vempati, Siddharth Bhela, Tobias Ahlgrim, Daniel Arnold

The resulting Augmented Digital Twin (ADT) is then employed in a sequential decision-making optimization formulated to yield the most critical attack scenarios as measured by the defined KPI.

Decision Making
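
A minimal sketch of the sequential search idea described in the abstract above: greedily extend an attack sequence, querying a digital-twin surrogate for the resulting KPI. The `simulate_kpi` function and the candidate attack set are hypothetical placeholders, not the paper's ADT or its optimization formulation.

```python
import random

# Hypothetical set of attack actions available to the adversary at each step.
CANDIDATE_ATTACKS = ["trip_der_1", "spoof_voltage_sensor", "block_volt_var", "false_setpoint"]

def simulate_kpi(attack_sequence):
    """Stand-in for the Augmented Digital Twin: return the system KPI
    (lower = more severe impact) after applying a sequence of attacks."""
    random.seed(hash(tuple(attack_sequence)) % (2 ** 32))
    return 1.0 - 0.2 * len(attack_sequence) * random.random()

def most_critical_attack(horizon=3):
    """Greedily build the attack sequence that degrades the KPI the most."""
    sequence = []
    for _ in range(horizon):
        # Choose the next action whose simulated outcome has the lowest KPI.
        best = min(CANDIDATE_ATTACKS, key=lambda a: simulate_kpi(sequence + [a]))
        sequence.append(best)
    return sequence, simulate_kpi(sequence)

if __name__ == "__main__":
    seq, kpi = most_critical_attack()
    print("most critical sequence:", seq, "-> KPI:", kpi)
```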

Constrained Reinforcement Learning for Predictive Control in Real-Time Stochastic Dynamic Optimal Power Flow

no code implementations • 21 Feb 2023 • Tong Wu, Anna Scaglione, Daniel Arnold

This paper presents a novel primal-dual approach for learning optimal constrained DRL policies for dynamic optimal power flow problems, with the aim of controlling power generation and battery outputs.

Reinforcement Learning (RL)
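
A minimal numpy sketch of the primal-dual pattern the abstract refers to: alternate a gradient step on the Lagrangian with a dual ascent step on the constraint multiplier. The toy reward and constraint functions below are illustrative stand-ins, not the paper's DRL formulation.

```python
import numpy as np

def estimate_returns(theta):
    """Toy rollout estimates: (expected reward, expected constraint cost)."""
    reward = -np.sum((theta - 1.0) ** 2)      # e.g. dispatch quality of generation/batteries
    cost = np.sum(np.abs(theta))              # e.g. an operating-limit cost
    return reward, cost

def lagrangian(theta, lam):
    reward, cost = estimate_returns(theta)
    return reward - lam * cost

def primal_dual(theta, budget=1.5, lam=0.0, lr=0.05, dual_lr=0.1, iters=300, eps=1e-4):
    for _ in range(iters):
        # Primal step: finite-difference gradient ascent on the Lagrangian.
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            t_plus, t_minus = theta.copy(), theta.copy()
            t_plus[i] += eps
            t_minus[i] -= eps
            grad[i] = (lagrangian(t_plus, lam) - lagrangian(t_minus, lam)) / (2 * eps)
        theta = theta + lr * grad
        # Dual step: raise the multiplier while the constraint is violated, keep it >= 0.
        _, cost = estimate_returns(theta)
        lam = max(0.0, lam + dual_lr * (cost - budget))
    return theta, lam

theta, lam = primal_dual(np.zeros(3))
print("policy parameters:", np.round(theta, 3), "multiplier:", round(lam, 3))
```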

Complex-Value Spatio-temporal Graph Convolutional Neural Networks and its Applications to Electric Power Systems AI

no code implementations • 17 Aug 2022 • Tong Wu, Anna Scaglione, Daniel Arnold

The effective representation, processing, analysis, and visualization of large-scale structured data over graphs are gaining a lot of attention.

Cyber Attack Detection

Spatio-Temporal Graph Convolutional Neural Networks for Physics-Aware Grid Learning Algorithms

no code implementations • 31 Mar 2022 • Tong Wu, Ignacio Losada Carreno, Anna Scaglione, Daniel Arnold

This paper proposes a model-free Volt-VAR control (VVC) algorithm via the spatio-temporal graph ConvNet-based deep reinforcement learning (STGCN-DRL) framework, whose goal is to control smart inverters in an unbalanced distribution system.

Reinforcement Learning (RL)
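
A minimal numpy sketch of a graph-convolutional policy in the spirit of the STGCN-DRL framework above: a window of nodal voltages is mixed over a normalized feeder adjacency and mapped to bounded inverter VAR setpoints. The 4-bus topology, layer sizes, and random weights are arbitrary placeholders, not the paper's architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-bus feeder adjacency and its symmetric normalization D^-1/2 (A + I) D^-1/2.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
A_tilde = A + np.eye(4)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

T, F_hid = 8, 16                                   # voltage window length, hidden width
W1 = rng.normal(scale=0.1, size=(T, F_hid))        # mixes the temporal window per node
W2 = rng.normal(scale=0.1, size=(F_hid, 1))        # per-node VAR setpoint head

def policy(voltage_window):
    """(T, 4) window of p.u. voltages -> (4,) inverter VAR setpoints in [-1, 1]."""
    x = voltage_window.T                           # (nodes, T)
    h = np.tanh(A_hat @ x @ W1)                    # graph convolution over the feeder
    q = np.tanh(A_hat @ h @ W2)                    # second graph-conv layer, bounded output
    return q.ravel()

print(np.round(policy(1.0 + 0.02 * rng.normal(size=(8, 4))), 3))
```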

Adam-based Augmented Random Search for Control Policies for Distributed Energy Resource Cyber Attack Mitigation

no code implementations • 27 Jan 2022 • Daniel Arnold, Sy-Toan Ngo, Ciaran Roberts, Yize Chen, Anna Scaglione, Sean Peisert

Volt-VAR and Volt-Watt control functions are mechanisms that are included in distributed energy resource (DER) power electronic inverters to mitigate excessively high or low voltages in distribution systems.
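
A minimal sketch of the piecewise-linear Volt-VAR characteristic the abstract describes, assuming illustrative breakpoints with a deadband around nominal voltage; the values below are not taken from the paper.

```python
import numpy as np

# Voltage breakpoints (p.u.) and reactive-power commands (fraction of available
# VAR capacity; positive = injection, negative = absorption), with a deadband
# around 1.0 p.u. Breakpoint values are illustrative only.
V_PTS = np.array([0.92, 0.98, 1.02, 1.08])
Q_PTS = np.array([1.00, 0.00, 0.00, -1.00])

def volt_var(v_pu):
    """Reactive-power command for a measured p.u. voltage (saturates past the end points)."""
    return np.interp(v_pu, V_PTS, Q_PTS)

for v in (0.93, 0.99, 1.00, 1.05, 1.10):
    print(f"V = {v:.2f} p.u. -> Q command = {volt_var(v):+.2f} of available VARs")
```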

SAVER: Safe Learning-Based Controller for Real-Time Voltage Regulation

no code implementations • 30 Nov 2021 • Yize Chen, Yuanyuan Shi, Daniel Arnold, Sean Peisert

Fast and safe voltage regulation algorithms can serve as fundamental schemes for achieving a high level of renewable penetration in modern power distribution grids.

Understanding the Safety Requirements for Learning-based Power Systems Operations

1 code implementation • 11 Oct 2021 • Yize Chen, Daniel Arnold, Yuanyuan Shi, Sean Peisert

Case studies on both voltage regulation and topology control tasks demonstrate the potential vulnerabilities of standard reinforcement learning algorithms, and possible measures of machine learning robustness and security are discussed for power systems operation tasks.

BIG-bench Machine Learning • Decision Making • +4
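
A minimal sketch of the kind of vulnerability probe the case studies point to: perturb a controller's observations within a small budget and watch the decision margin shrink. The linear policy here is a toy stand-in, not one of the trained agents evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=4)                 # toy linear "policy": action = sign(W . obs)

def action(obs):
    return np.sign(W @ obs)

obs = rng.normal(size=4)               # nominal (normalized) grid measurements
eps = 0.3                              # perturbation budget per measurement

# FGSM-style step: move each measurement against the decision margin; a large
# enough budget flips the controller's action.
adv_obs = obs - eps * action(obs) * np.sign(W)

print("clean margin:", round(W @ obs, 3), "action:", action(obs))
print("perturbed margin:", round(W @ adv_obs, 3), "action:", action(adv_obs))
```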

Regression-based Inverter Control for Decentralized Optimal Power Flow and Voltage Regulation

no code implementations • 20 Feb 2019 • Oscar Sondermeijer, Roel Dobbe, Daniel Arnold, Claire Tomlin, Tamás Keviczky

Electronic power inverters are capable of quickly delivering reactive power to maintain customer voltages within operating tolerances and to reduce system losses in distribution grids.

regression
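
A minimal sketch of the regression idea behind the title: fit a simple map from local measurements to the reactive-power setpoints an offline OPF would have chosen, then use it as a local controller. The synthetic data and linear model class below are assumptions, not the paper's features or solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: local features [active power injection, |V| in p.u.]
# and the reactive-power setpoint an offline OPF is assumed to have produced.
X = np.column_stack([rng.uniform(0.0, 1.0, 200), rng.uniform(0.95, 1.05, 200)])
q_opf = -0.3 * X[:, 0] - 2.0 * (X[:, 1] - 1.0) + 0.01 * rng.normal(size=200)

# Ordinary least squares with an intercept term.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, q_opf, rcond=None)

def local_q_setpoint(p_local, v_local):
    """Decentralized controller: regression-predicted reactive-power setpoint."""
    return coef[0] * p_local + coef[1] * v_local + coef[2]

print("q setpoint at P = 0.5, V = 1.02 p.u.:", round(local_q_setpoint(0.5, 1.02), 4))
```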

Towards Distributed Energy Services: Decentralizing Optimal Power Flow with Machine Learning

no code implementations • 14 Jun 2018 • Roel Dobbe, Oscar Sondermeijer, David Fridovich-Keil, Daniel Arnold, Duncan Callaway, Claire Tomlin

We consider distribution systems with multiple controllable Distributed Energy Resources (DERs) and present a data-driven approach to learn control policies for each DER to reconstruct and mimic the solution to a centralized OPF problem from solely locally available information.

BIG-bench Machine Learning
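
A minimal sketch of the decentralization step described in the abstract: given logged centralized OPF solutions, fit one small model per DER that reproduces its own setpoint from purely local information. The data, features, and linear model class are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_ders = 500, 3

# Hypothetical logged data: per-DER local features [local load, local |V|] and the
# centralized OPF's dispatch for that DER over the same scenarios.
local_feats = {d: np.column_stack([rng.uniform(0.0, 1.0, n_samples),
                                   rng.uniform(0.95, 1.05, n_samples)])
               for d in range(n_ders)}
opf_dispatch = {d: 0.5 * local_feats[d][:, 0] - 1.5 * (local_feats[d][:, 1] - 1.0)
                   + 0.01 * rng.normal(size=n_samples)
                for d in range(n_ders)}

# Fit one least-squares policy per DER, using only that DER's local information.
policies = {}
for d in range(n_ders):
    A = np.column_stack([local_feats[d], np.ones(n_samples)])
    policies[d], *_ = np.linalg.lstsq(A, opf_dispatch[d], rcond=None)

def decentralized_dispatch(d, load, voltage):
    """DER d's local policy: mimics the centralized OPF without any communication."""
    w = policies[d]
    return w[0] * load + w[1] * voltage + w[2]

print([round(float(decentralized_dispatch(d, 0.6, 1.01)), 3) for d in range(n_ders)])
```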
