Global Performance Guarantees for Neural Network Models of AC Power Flow

14 Nov 2022 · Samuel Chevalier, Spyros Chatzivasileiadis

Machine learning can generate black-box surrogate models which are both extremely fast and highly accurate. Rigorously verifying the accuracy of these black-box models, however, is computationally challenging. In power systems, learning the AC power flow equations is the cornerstone of any machine learning surrogate model that aims to drastically accelerate computations, whether for optimization, control, or dynamics. This paper develops, to our knowledge, the first tractable neural network verification procedure that incorporates the ground truth of the nonlinear AC power flow equations to determine worst-case neural network performance. Our approach, termed Sequential Targeted Tightening (STT), leverages a loosely convexified reformulation of the original verification problem, which is a mixed integer quadratic program (MIQP). By sequentially adding targeted cuts, we iteratively tighten our formulation until either the solution is sufficiently tight or a satisfactory performance guarantee has been generated. After learning neural network models of the 14-, 57-, 118-, and 200-bus PGLib test cases, we compare the performance guarantees generated by our STT procedure with those generated by a state-of-the-art MIQP solver, Gurobi 9.5. We show that STT often generates performance guarantees which are orders of magnitude tighter than the MIQP upper bound.
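The relax-then-tighten loop at the heart of STT can be sketched in a few lines. Below is a minimal, self-contained Python illustration of that control flow on a toy nonconvex problem; it is a sketch under our own assumptions, not the paper's implementation. The relaxation, the tangent cuts, and the grid-search "solver" are all hypothetical stand-ins chosen so the example runs end to end.

```python
# Toy illustration of a sequential-tightening loop (NOT the paper's STT code).
# We certify an upper bound on the nonconvex problem  max_{x in [0,1]} x - x^2
# (true optimum 0.25). The quadratic term t = x^2 is relaxed to t >= 0, and each
# round adds a "targeted" tangent cut t >= 2*xk*x - xk**2 at the point xk where
# the current relaxation is loosest, tightening the certified bound.
import numpy as np

def solve_relaxation(cuts, grid=np.linspace(0.0, 1.0, 10_001)):
    """Solve the relaxed problem max_x (x - t) s.t. t >= all cuts, by grid search."""
    t = np.zeros_like(grid)               # base relaxation: t >= 0
    for a, b in cuts:                     # each cut enforces t >= a*x + b
        t = np.maximum(t, a * grid + b)
    idx = np.argmax(grid - t)
    return grid[idx] - t[idx], grid[idx]  # (certified upper bound, relaxed argmax)

def sequential_tightening(target=0.26, tol=1e-6, max_iters=50):
    cuts, prev_bound, bound = [], np.inf, np.inf
    for _ in range(max_iters):
        bound, x_star = solve_relaxation(cuts)
        # Stop once the guarantee is satisfactory or the bound stops improving.
        if bound <= target or prev_bound - bound < tol:
            break
        cuts.append((2.0 * x_star, -x_star**2))  # tangent cut to x^2 at x_star
        prev_bound = bound
    return bound

print(f"certified upper bound: {sequential_tightening():.4f}")  # -> 0.2500
```

In the paper itself, the relaxed subproblem is a convex reformulation of an MIQP over the neural network and AC power flow constraints rather than this one-dimensional toy, but the stopping logic, terminating when the bound is sufficiently tight or when a satisfactory guarantee has been generated, has the same shape.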
