Search Results for author: Brandon Foggo

Found 7 papers, 1 paper with code

pmuBAGE: The Benchmarking Assortment of Generated PMU Data for Power System Events

no code implementations • 25 Oct 2022 • Brandon Foggo, Koji Yamashita, Nanpeng Yu

This paper introduces pmuGE (phasor measurement unit Generator of Events), one of the first data-driven generative models for power system event data.

Benchmarking

pmuBAGE: The Benchmarking Assortment of Generated PMU Data for Power System Events -- Part I: Overview and Results

1 code implementation • 3 Apr 2022 • Brandon Foggo, Koji Yamashita, Nanpeng Yu

We have trained this model on thousands of actual events and created a dataset denoted pmuBAGE (the Benchmarking Assortment of Generated PMU Events).

Benchmarking

Power System Event Identification based on Deep Neural Network with Information Loading

no code implementations • 13 Nov 2020 • Jie Shi, Brandon Foggo, Nanpeng Yu

Online power system event identification and classification is crucial to enhancing the reliability of transmission systems.

Classification • General Classification +1

On the Maximum Mutual Information Capacity of Neural Architectures

no code implementations • 10 Jun 2020 • Brandon Foggo, Nanpeng Yu

We derive the closed-form expression of the maximum mutual information — the maximum value of $I(X;Z)$ obtainable via training — for a broad family of neural network architectures.

Learning Theory

Improving Supervised Phase Identification Through the Theory of Information Losses

no code implementations • 4 Nov 2019 • Brandon Foggo, Nanpeng Yu

This paper considers the problem of Phase Identification in power distribution systems.

Analyzing Data Selection Techniques with Tools from the Theory of Information Losses

no code implementations • 25 Feb 2019 • Brandon Foggo, Nanpeng Yu

We use this framework to prove that two methods, Facility Location Selection and Transductive Experimental Design, reduce these losses.

Active Learning • Experimental Design +2

Information Losses in Neural Classifiers from Sampling

no code implementations • 15 Feb 2019 • Brandon Foggo, Nanpeng Yu, Jie Shi, Yuanqi Gao

The paper then bounds this expected total variation as a function of the size of randomly sampled datasets in a fairly general setting, without introducing any additional dependence on model complexity.

Active Learning
