Search Results for author: Michael Chertkov

Found 38 papers, 3 papers with code

Model Reduction of Swing Equations with Physics Informed PDE

no code implementations26 Oct 2021 Laurent Pagnier, Michael Chertkov, Julian Fritzsch, Philippe Jacquod

Such dynamics are normally modeled on seconds-to-tens-of-seconds time scales by the so-called swing equations, which are ordinary differential equations defined on a spatially discrete model of the power grid.

Physics-informed machine learning
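The swing-equation dynamics referenced above can be sketched for a toy two-machine system; all parameter values below are illustrative and are not taken from the paper:

```python
import numpy as np

def swing_step(delta, omega, P, M, D, B, dt):
    """One semi-implicit Euler step of the swing equations
    M_i * delta_i'' = P_i - D_i * delta_i' - sum_j B_ij * sin(delta_i - delta_j)."""
    flows = (B * np.sin(delta[:, None] - delta[None, :])).sum(axis=1)
    omega = omega + dt * (P - D * omega - flows) / M
    return delta + dt * omega, omega

# Toy two-machine grid (illustrative values).
delta = np.array([0.0, 0.1])   # rotor angles
omega = np.zeros(2)            # angular frequencies
P = np.array([0.5, -0.5])      # net power injections (sum to zero)
M = np.array([1.0, 1.0])       # inertias
D = np.array([0.2, 0.2])       # damping coefficients
B = 1.0 - np.eye(2)            # line susceptances

for _ in range(5000):          # 50 s of simulated time
    delta, omega = swing_step(delta, omega, P, M, D, B, dt=0.01)
# The angle difference settles near arcsin(P_1 / B_12).
```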

Physics Informed Machine Learning of SPH: Machine Learning Lagrangian Turbulence

1 code implementation25 Oct 2021 Michael Woodward, Yifeng Tian, Criston Hyett, Chris Fryer, Daniel Livescu, Mikhail Stepanov, Michael Chertkov

Smoothed particle hydrodynamics (SPH) is a mesh-free Lagrangian method for obtaining approximate numerical solutions of the equations of fluid dynamics, which has been widely applied to weakly and strongly compressible turbulence in astrophysics and engineering.

Physics-informed machine learning
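The core SPH density estimate underlying such Lagrangian methods can be illustrated in one dimension; the Gaussian kernel and the particle setup below are illustrative choices, not the paper's learned model:

```python
import numpy as np

def sph_density(positions, masses, h):
    """SPH density estimate rho_i = sum_j m_j * W(|r_i - r_j|, h)
    with a 1D Gaussian smoothing kernel of width h."""
    r = np.abs(positions[:, None] - positions[None, :])
    W = np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))  # normalized in 1D
    return (masses[None, :] * W).sum(axis=1)

# Uniformly spaced unit-mass particles give a near-uniform interior density.
x = np.linspace(0.0, 1.0, 101)
rho = sph_density(x, np.ones_like(x), h=0.05)
# Interior density ~ 1/spacing = 100; it drops near the domain edges.
```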

Which Neural Network to Choose for Post-Fault Localization, Dynamic State Estimation and Optimal Measurement Placement in Power Systems?

no code implementations7 Apr 2021 Andrei Afonin, Michael Chertkov

We consider a power transmission system monitored with Phasor Measurement Units (PMUs) placed at significant, but not all, nodes of the system.

Fault localization

Embedding Power Flow into Machine Learning for Parameter and State Estimation

no code implementations26 Mar 2021 Laurent Pagnier, Michael Chertkov

Modern state and parameter estimations in power systems consist of two stages: the outer problem of minimizing the mismatch between network observation and prediction over the network parameters, and the inner problem of predicting the system state for given values of the parameters.

Message Passing Descent for Efficient Machine Learning

no code implementations16 Feb 2021 Francesco Concetti, Michael Chertkov

We suggest the Message Passing Descent algorithm, which relies on a piecewise-polynomial representation of the model DF function.

Physics-Informed Graphical Neural Network for Parameter & State Estimations in Power Systems

no code implementations12 Feb 2021 Laurent Pagnier, Michael Chertkov

Parameter Estimation (PE) and State Estimation (SE) are among the most widespread tasks in system engineering.

Neural Particle Image Velocimetry

no code implementations28 Jan 2021 Nikolay Stulov, Michael Chertkov

In this work, we introduce a convolutional neural network adapted to the problem, namely the Volumetric Correspondence Network (VCN), which was recently proposed for end-to-end optical flow estimation in computer vision.

Optical Flow Estimation

Super-relaxation of space-time-quantized ensemble of energy loads to curtail their synchronization after demand response perturbation

no code implementations3 Aug 2020 Ilia Luchnikov, David Métivier, Henni Ouerdane, Michael Chertkov

However, this also results in the parasitic synchronization of individual devices within the ensemble leading to long post-demand-response oscillations in the integrated energy consumption of the ensemble.


A Hierarchical Approach to Multi-Energy Demand Response: From Electricity to Multi-Energy Applications

no code implementations5 May 2020 Ali Hassan, Samrat Acharya, Michael Chertkov, Deepjyoti Deka, Yury Dvorkin

Due to the proliferation of energy-efficiency measures and the availability of renewable energy resources, traditional energy infrastructure systems (electricity, heat, gas) can no longer be operated in a centralized manner under the assumption that consumer behavior is inflexible, i.e., cannot be adjusted in return for an adequate incentive.

Data-Driven Learning and Load Ensemble Control

no code implementations20 Apr 2020 Ali Hassan, Deepjyoti Deka, Michael Chertkov, Yury Dvorkin

Demand response (DR) programs aim to engage distributed small-scale flexible loads, such as thermostatically controllable loads (TCLs), to provide various grid support services.

Wavelet-Powered Neural Networks for Turbulence

no code implementations ICLR Workshop DeepDiffEq 2019 Arvind T. Mohan, Daniel Livescu, Michael Chertkov

Fluid turbulence is one of the fundamental phenomena driving applications in engineering, earth sciences, and climate.

Embedding Hard Physical Constraints in Neural Network Coarse-Graining of 3D Turbulence

no code implementations31 Jan 2020 Arvind T. Mohan, Nicholas Lubbers, Daniel Livescu, Michael Chertkov

In recent years, deep learning approaches have shown much promise in modeling complex systems in the physical sciences.

Computational Physics

Tractable Minor-free Generalization of Planar Zero-field Ising Models

no code implementations22 Oct 2019 Valerii Likhosherstov, Yury Maximov, Michael Chertkov

We present a new family of zero-field Ising models over $N$ binary variables/spins obtained by consecutive "gluing" of planar and $O(1)$-sized components and subsets of at most three vertices into a tree.

A New Family of Tractable Ising Models

no code implementations14 Jun 2019 Valerii Likhosherstov, Yury Maximov, Michael Chertkov

To illustrate the utility of the new family of tractable graphical models, we first build an $O(N^{3/2})$ algorithm for inference and sampling of the K5-minor-free zero-field Ising models - an extension of the planar zero-field Ising models - which is neither genus- nor treewidth-bounded.

Data Structures and Algorithms Statistical Mechanics Data Analysis, Statistics and Probability Computation

Compressed Convolutional LSTM: An Efficient Deep Learning framework to Model High Fidelity 3D Turbulence

no code implementations28 Feb 2019 Arvind Mohan, Don Daniel, Michael Chertkov, Daniel Livescu

High-fidelity modeling of turbulent flows is one of the major challenges in computational physics, with diverse applications in engineering, earth sciences and astrophysics, among many others.

Fluid Dynamics Chaotic Dynamics Computational Physics

Learning a Generator Model from Terminal Bus Data

no code implementations3 Jan 2019 Nikolay Stulov, Dejan J Sobajic, Yury Maximov, Deepjyoti Deka, Michael Chertkov

In this work we investigate approaches to reconstruct generator models from measurements available at the generator terminal bus using machine learning (ML) techniques.

Inference and Sampling of $K_{33}$-free Ising Models

2 code implementations22 Dec 2018 Valerii Likhosherstov, Yury Maximov, Michael Chertkov

We call an Ising model tractable when it is possible to compute its partition function value (statistical inference) in polynomial time.
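For reference, the partition function in question can be computed by brute force for a tiny model; this enumeration is exponential in N, which is exactly what the paper's polynomial-time algorithms for K33-minor-free models avoid. The coupling matrix below is illustrative:

```python
import itertools
import numpy as np

def partition_function(J):
    """Brute-force Z = sum over all spin configurations s in {-1,+1}^N
    of exp(sum_{i<j} J[i,j] * s_i * s_j), for a symmetric coupling matrix J.
    Exponential in N; tractable models admit a polynomial-time computation."""
    N = J.shape[0]
    Z = 0.0
    for s in itertools.product([-1, 1], repeat=N):
        s = np.array(s)
        Z += np.exp(0.5 * s @ J @ s)  # 0.5: each pair counted twice in s^T J s
    return Z

# Single coupled pair: Z = 2*exp(J) + 2*exp(-J) = 4*cosh(J).
J = np.array([[0.0, 0.3],
              [0.3, 0.0]])
Z = partition_function(J)
```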

Gauges, Loops, and Polynomials for Partition Functions of Graphical Models

no code implementations12 Nov 2018 Michael Chertkov, Vladimir Chernyak, Yury Maximov

We show that the Gauge Function has a natural polynomial representation in terms of gauges/variables associated with edges of the multi-graph.

From Deep to Physics-Informed Learning of Turbulence: Diagnostics

no code implementations16 Oct 2018 Ryan King, Oliver Hennigh, Arvind Mohan, Michael Chertkov

We describe tests validating progress made toward acceleration and automation of hydrodynamic codes in the regime of developed turbulence by three Deep Learning (DL) Neural Network (NN) schemes trained on Direct Numerical Simulations of turbulence.

Real-time Faulted Line Localization and PMU Placement in Power Systems through Convolutional Neural Networks

no code implementations11 Oct 2018 Wenting Li, Deepjyoti Deka, Michael Chertkov, Meng Wang

Diverse fault types, fast re-closures, and complicated transient states after a fault event make real-time fault location in power grids challenging.

Gauged Mini-Bucket Elimination for Approximate Inference

no code implementations5 Jan 2018 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin, Adrian Weller

Recently, so-called gauge transformations were used to improve variational lower bounds on $Z$.

Online Learning of Power Transmission Dynamics

no code implementations27 Oct 2017 Andrey Y. Lokhov, Marc Vuffray, Dmitry Shemetov, Deepjyoti Deka, Michael Chertkov

We consider the problem of reconstructing the dynamic state matrix of transmission power grids from time-stamped PMU measurements in the regime of ambient fluctuations.
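A generic least-squares sketch of recovering a linear dynamic state matrix from a noisy time series follows; it is a simplified stand-in for the paper's method, and `A_true` and the noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth linear dynamics x_{t+1} = A x_t + noise (illustrative).
A_true = np.array([[0.9, 0.1],
                   [-0.1, 0.8]])
T = 2000
X = np.zeros((2, T))
X[:, 0] = rng.standard_normal(2)
for t in range(T - 1):
    X[:, t + 1] = A_true @ X[:, t] + 0.01 * rng.standard_normal(2)

# Stack the pairs (x_t, x_{t+1}) and solve min_A ||X1 - A X0||_F
# as an ordinary least-squares problem.
X0, X1 = X[:, :-1], X[:, 1:]
A_hat = np.linalg.lstsq(X0.T, X1.T, rcond=None)[0].T
```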

Belief Propagation Min-Sum Algorithm for Generalized Min-Cost Network Flow

no code implementations20 Oct 2017 Andrii Riazanov, Yury Maximov, Michael Chertkov

Belief Propagation algorithms are instruments used broadly to solve graphical model optimization and statistical inference problems.
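As a generic illustration of the min-sum message-passing family (not the paper's network-flow variant), here is MAP inference by min-sum BP on a tiny binary chain, where the algorithm reduces to exact dynamic programming; all costs are made-up numbers:

```python
import numpy as np

# Min-sum BP (max-product BP in the log domain) on a 3-node binary chain;
# on a tree this forward sweep computes the exact minimum total cost.
phi = np.array([[0.0, 2.0],   # unary costs phi[i][x_i] (illustrative)
                [1.0, 0.0],
                [2.0, 0.0]])
psi = 1.0 - np.eye(2)         # pairwise cost: 1 if neighbors disagree

m = np.zeros(2)               # message into node 0 (empty)
for i in range(2):
    # message from node i to node i+1: m'(y) = min_x phi[i][x] + m(x) + psi[x, y]
    m = np.min(phi[i][:, None] + m[:, None] + psi, axis=0)

min_cost = np.min(m + phi[2])  # optimal (MAP) total cost over the chain
```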

Topology Estimation in Bulk Power Grids: Guarantees on Exact Recovery

no code implementations5 Jul 2017 Deepjyoti Deka, Saurav Talukdar, Michael Chertkov, Murti Salapaka

For grids that include cycles of length three, we provide sufficient conditions that ensure existence of algorithms for exact reconstruction.

Gauging Variational Inference

no code implementations NeurIPS 2017 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin

Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GMs).

Variational Inference

Optimal structure and parameter learning of Ising models

1 code implementation15 Dec 2016 Andrey Y. Lokhov, Marc Vuffray, Sidhant Misra, Michael Chertkov

Reconstruction of structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning.

Synthesis of MCMC and Belief Propagation

no code implementations NeurIPS 2016 Sung-Soo Ahn, Michael Chertkov, Jinwoo Shin

In this paper, we introduce MCMC algorithms that correct the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC.

Graphical Models for Optimal Power Flow

no code implementations21 Jun 2016 Krishnamurthy Dvijotham, Pascal Van Hentenryck, Michael Chertkov, Sidhant Misra, Marc Vuffray

In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors.

MCMC assisted by Belief Propagation

no code implementations29 May 2016 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin

Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series.

Interaction Screening: Efficient and Sample-Optimal Learning of Ising Models

no code implementations NeurIPS 2016 Marc Vuffray, Sidhant Misra, Andrey Y. Lokhov, Michael Chertkov

We prove that with appropriate regularization, the estimator recovers the underlying graph using a number of samples that is logarithmic in the system size p and exponential in the maximum coupling intensity and maximum node degree.
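The interaction-screening idea can be sketched on a toy three-spin chain: minimize the empirical average of exp(-s_u * H_u(s)) over one node's couplings. The model, sample size, and plain gradient descent (without the l1 penalty used in the paper) are illustrative simplifications:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# True model: 3 spins on a chain, couplings J01 = J12 = 0.5, no fields.
J_true = np.zeros((3, 3))
J_true[0, 1] = J_true[1, 0] = 0.5
J_true[1, 2] = J_true[2, 1] = 0.5

# Exact sampling by enumerating all 8 spin configurations.
configs = np.array(list(itertools.product([-1, 1], repeat=3)))
weights = np.exp(0.5 * np.einsum('ki,ij,kj->k', configs, J_true, configs))
probs = weights / weights.sum()
samples = configs[rng.choice(len(configs), size=5000, p=probs)]

# Interaction Screening for node 0: minimize the convex objective
#   S(J) = mean over samples of exp(-s_0 * (J[1]*s_1 + J[2]*s_2))
# by plain gradient descent (no regularization at this tiny size).
J_hat = np.zeros(3)  # J_hat[0] unused
for _ in range(500):
    field = samples[:, 1] * J_hat[1] + samples[:, 2] * J_hat[2]
    w = np.exp(-samples[:, 0] * field)
    J_hat[1] -= 0.1 * np.mean(-samples[:, 0] * samples[:, 1] * w)
    J_hat[2] -= 0.1 * np.mean(-samples[:, 0] * samples[:, 2] * w)
# J_hat[1] approaches the true coupling 0.5; J_hat[2] stays near 0.
```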

Minimum Weight Perfect Matching via Blossom Belief Propagation

no code implementations NeurIPS 2015 Sungsoo Ahn, Sejun Park, Michael Chertkov, Jinwoo Shin

Max-product Belief Propagation (BP) is a popular message-passing algorithm for computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM).

Combinatorial Optimization

Learning Planar Ising Models

no code implementations3 Feb 2015 Jason K. Johnson, Diane Oyen, Michael Chertkov, Praneeth Netrapalli

Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering.

Approximate inference on planar graphs using Loop Calculus and Belief Propagation

no code implementations9 Aug 2014 Vicenc Gomez, Hilbert Kappen, Michael Chertkov

We introduce novel results for approximate inference on planar graphical models using the loop calculus framework.

Loop Calculus and Bootstrap-Belief Propagation for Perfect Matchings on Arbitrary Graphs

no code implementations5 Jun 2013 Michael Chertkov, Andrew Gelfand, Jinwoo Shin

This manuscript discusses computation of the Partition Function (PF) and the Minimum Weight Perfect Matching (MWPM) on arbitrary, non-bipartite graphs.

Belief Propagation for Linear Programming

no code implementations17 May 2013 Andrew Gelfand, Jinwoo Shin, Michael Chertkov

For this class of problems, MAP inference can be stated as an integer LP with an LP relaxation that coincides with minimization of the BFE at "zero temperature".
