Search Results for author: Sihan Zeng

Found 10 papers, 1 paper with code

QCQP-Net: Reliably Learning Feasible Alternating Current Optimal Power Flow Solutions Under Constraints

no code implementations · 11 Jan 2024 · Sihan Zeng, Youngdae Kim, Yuxuan Ren, Kibaek Kim

At the heart of power system operations, alternating current optimal power flow (ACOPF) studies the generation of electric power in the most economical way under network-wide load requirements, and can be formulated as a highly structured non-convex quadratically constrained quadratic program (QCQP).
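
The ACOPF-as-QCQP view can be made concrete: a QCQP minimizes a quadratic objective subject to quadratic constraints, and non-convexity enters through indefinite constraint matrices. A minimal sketch (illustrative only, not the QCQP-Net model; the toy matrices are made up) of evaluating the objective and checking feasibility of a candidate point:

```python
import numpy as np

def qcqp_objective(x, Q0, c0):
    # Quadratic objective x^T Q0 x + c0^T x
    return float(x @ Q0 @ x + c0 @ x)

def qcqp_feasible(x, constraints, tol=1e-8):
    # Each constraint is (Q, c, d): require x^T Q x + c^T x + d <= 0
    return all(float(x @ Q @ x + c @ x + d) <= tol for Q, c, d in constraints)

# Toy instance: minimize x1^2 + x2^2 subject to the nonconvex
# constraint 1 - x1*x2 <= 0 (indefinite constraint matrix).
Q0 = np.eye(2); c0 = np.zeros(2)
Qc = np.array([[0.0, -0.5], [-0.5, 0.0]])  # quadratic form for -x1*x2
constraints = [(Qc, np.zeros(2), 1.0)]

x = np.array([1.0, 1.0])
print(qcqp_objective(x, Q0, c0))      # 2.0
print(qcqp_feasible(x, constraints))  # True: 1 - 1*1 = 0 <= 0
```

A learned solver must output points that pass exactly this kind of feasibility check, which is what makes constraint satisfaction the hard part of the learning problem.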

Learning Payment-Free Resource Allocation Mechanisms

no code implementations · 18 Nov 2023 · Sihan Zeng, Sujay Bhatt, Eleonora Kreacic, Parisa Hassanzadeh, Alec Koppel, Sumitra Ganesh

We consider the design of mechanisms that allocate limited resources among self-interested agents using neural networks.

Fairness
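
As a point of contrast with the learned mechanisms studied in the paper, a hand-crafted payment-free rule can simply split the limited supply in proportion to a softmax over reported values (the `allocate` function and its proportional rule are hypothetical illustrations, not the paper's trained network):

```python
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def allocate(reported_values, supply):
    # Payment-free rule: shares of the supply proportional to
    # softmax of the agents' reported values.
    shares = softmax(reported_values)
    return [supply * s for s in shares]

alloc = allocate([1.0, 1.0, 1.0], supply=3.0)
print(alloc)  # equal reports -> equal shares: [1.0, 1.0, 1.0]
```

A fixed rule like this ignores incentives; the point of learning the mechanism is to shape the allocation so self-interested agents have little to gain from misreporting.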

Sequential Fair Resource Allocation under a Markov Decision Process Framework

no code implementations · 10 Jan 2023 · Parisa Hassanzadeh, Eleonora Kreacic, Sihan Zeng, Yuchen Xiao, Sumitra Ganesh

We propose a new algorithm, SAFFE, that makes fair allocations with respect to all demands revealed over the horizon by accounting for expected future demands at each arrival time.

Decision Making Fairness
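
The core idea, reserving budget for demands expected to arrive later, can be sketched with a simplified proportional rule (a hypothetical simplification for illustration, not the exact SAFFE algorithm):

```python
def saffe_like_allocation(demands, expected_future, budget):
    """At each arrival, split the remaining budget between the current
    demand and the expected total future demand, in proportion to their
    sizes, so early arrivals cannot exhaust the budget."""
    allocations = []
    remaining = budget
    for t, d in enumerate(demands):
        future = expected_future[t]  # expected demand still to arrive after t
        share = remaining * d / (d + future) if d + future > 0 else 0.0
        a = min(d, share)
        allocations.append(a)
        remaining -= a
    return allocations

# Two equal demands, perfectly forecast: the budget of 1 is split evenly.
alloc = saffe_like_allocation([1.0, 1.0], expected_future=[1.0, 0.0], budget=1.0)
print(alloc)  # [0.5, 0.5]
```

A greedy rule would give the first arrival its full demand of 1.0 and leave nothing for the second; the forecast term is what restores fairness across arrival times.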

A Reinforcement Learning Approach to Parameter Selection for Distributed Optimal Power Flow

no code implementations · 22 Oct 2021 · Sihan Zeng, Alyssa Kody, Youngdae Kim, Kibaek Kim, Daniel K. Molzahn

We train our RL policy using deep Q-learning, and show that this policy can result in significantly accelerated convergence (up to a 59% reduction in the number of iterations compared to existing, curvature-informed penalty parameter selection methods).

Distributed Optimization Q-Learning +2
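
Under the hood this is Q-learning over solver states and penalty-parameter actions. A tabular sketch of the update rule (the paper uses a deep Q-network; the state/action discretization below is hypothetical):

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Standard Q-learning: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q

# States could index coarse convergence regimes of the distributed solver,
# actions could index penalty multipliers (e.g. x0.5, x1, x2) -- a made-up
# discretization; the reward penalizes each additional solver iteration.
n_states, n_actions = 3, 3
Q = [[0.0] * n_actions for _ in range(n_states)]
Q = q_learning_update(Q, s=0, a=1, r=-1.0, s_next=1)
print(Q[0][1])  # 0.1 * (-1.0 + 0.99*0 - 0) = -0.1
```

With iteration count as the (negative) reward, a greedy policy over the learned Q-values selects the penalty parameters expected to minimize the remaining iterations.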

Finite-Time Complexity of Online Primal-Dual Natural Actor-Critic Algorithm for Constrained Markov Decision Processes

no code implementations · 21 Oct 2021 · Sihan Zeng, Thinh T. Doan, Justin Romberg

To solve this constrained optimization program, we study an online actor-critic variant of a classic primal-dual method where the gradients of both the primal and dual functions are estimated using samples from a single trajectory generated by the underlying time-varying Markov processes.
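
A deterministic caricature of one primal-dual step on the Lagrangian L(theta, lam) = J(theta) - lam*(C(theta) - limit); in the paper, both gradients are estimated from a single Markovian sample trajectory, which this sketch omits:

```python
def primal_dual_step(theta, lam, grad_obj, grad_cost, cost_value, limit,
                     alpha=0.01, beta=0.01):
    # Primal ascent on the objective penalized by the constraint...
    theta_new = theta + alpha * (grad_obj - lam * grad_cost)
    # ...dual ascent on the constraint violation, projected to lam >= 0.
    lam_new = max(0.0, lam + beta * (cost_value - limit))
    return theta_new, lam_new

# Constraint currently violated (cost 2.0 > limit 1.0), so lam increases.
theta, lam = primal_dual_step(theta=0.0, lam=0.0, grad_obj=1.0,
                              grad_cost=0.5, cost_value=2.0, limit=1.0)
print(theta, lam)  # 0.01 0.01
```

As the dual variable grows, the primal update is pushed away from policies that violate the constraint, which is the mechanism the finite-time analysis quantifies.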

A Two-Time-Scale Stochastic Optimization Framework with Applications in Control and Reinforcement Learning

no code implementations · 29 Sep 2021 · Sihan Zeng, Thinh T. Doan, Justin Romberg

In our two-time-scale approach, one time scale estimates the true gradient from noisy samples; this estimate is then used on the other time scale to update the estimate of the optimal solution.

Reinforcement Learning (RL) Stochastic Optimization
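
The coupling of the two time scales can be sketched on a toy problem: a fast iterate tracks the gradient from noisy samples while a slow iterate descends along the tracked estimate (the step-size exponents here are illustrative, not the paper's schedule):

```python
import random

def two_time_scale(grad_sample, x0=1.0, steps=5000, seed=0):
    # Fast iterate g averages noisy gradient samples; the slow iterate x
    # descends along g with a smaller step size.
    rng = random.Random(seed)
    x, g = x0, 0.0
    for k in range(1, steps + 1):
        beta = 1.0 / k ** 0.6   # fast (tracking) step size
        alpha = 1.0 / k         # slow (optimization) step size
        g += beta * (grad_sample(x, rng) - g)
        x -= alpha * g
    return x

# Minimize f(x) = x^2 from gradient samples 2x + Gaussian noise;
# the slow iterate settles near the minimizer 0.
x_star = two_time_scale(lambda x, rng: 2 * x + rng.gauss(0, 0.1))
print(abs(x_star) < 0.3)
```

Because beta shrinks more slowly than alpha, the tracking error of g is averaged out faster than x moves, which is the separation the framework's analysis relies on.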

Finite-Time Convergence Rates of Decentralized Stochastic Approximation with Applications in Multi-Agent and Multi-Task Learning

no code implementations · 28 Oct 2020 · Sihan Zeng, Thinh T. Doan, Justin Romberg

We study a decentralized variant of stochastic approximation, a data-driven approach for finding the root of an operator under noisy measurements.

Multi-Task Learning Q-Learning +1
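
A noise-free sketch of the decentralized iteration: each agent mixes its neighbors' iterates through a doubly stochastic matrix, then steps toward the root of its local operator (the operators F_i(x) = b_i - x and the mixing weights are made up for illustration):

```python
def decentralized_sa(targets, W, steps=500, alpha=0.05):
    """Each agent i mixes neighbor iterates with weights W[i], then takes a
    local step toward the root of F_i(x) = b_i - x. The network jointly
    approaches the root of the averaged operator, i.e. the mean of b_i."""
    n = len(targets)
    x = [0.0] * n
    for _ in range(steps):
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [mixed[i] + alpha * (targets[i] - mixed[i]) for i in range(n)]
    return x

# Three fully connected agents with doubly stochastic mixing weights.
W = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
x = decentralized_sa([1.0, 2.0, 3.0], W)
print([round(v, 2) for v in x])  # each agent near the mean 2.0
```

With a constant step size the agents settle in a small neighborhood of the network-wide root; diminishing step sizes (as in the paper's analysis) shrink that neighborhood to zero.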

A Decentralized Policy Gradient Approach to Multi-task Reinforcement Learning

no code implementations · 8 Jun 2020 · Sihan Zeng, Aqeel Anwar, Thinh Doan, Arijit Raychowdhury, Justin Romberg

We develop a mathematical framework for solving multi-task reinforcement learning (MTRL) problems based on a type of policy gradient method.

Atari Games Multi-Task Learning +3
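
One communication-and-update round of a decentralized policy-gradient scheme might look as follows, with scalar policy parameters for readability; the gradient values and mixing weights are placeholders, not the paper's setup:

```python
def decentralized_pg_round(params, local_grads, W, lr=0.1):
    """Each agent (one per task) averages neighbors' policy parameters
    with mixing weights W, then applies its own task's policy-gradient
    estimate, so parameters stay close while each task still pulls."""
    n = len(params)
    mixed = [sum(W[i][j] * params[j] for j in range(n)) for i in range(n)]
    return [mixed[i] + lr * local_grads[i] for i in range(n)]

W = [[0.5, 0.5], [0.5, 0.5]]  # two fully connected agents
params = decentralized_pg_round([0.0, 2.0], local_grads=[1.0, -1.0], W=W)
print(params)  # consensus to 1.0, then local steps: [1.1, 0.9]
```

Repeating such rounds drives the agents toward a single policy that trades off gradient directions across all tasks, which is the multi-task objective the framework formalizes.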

Fast Compressive Sensing Recovery Using Generative Models with Structured Latent Variables

1 code implementation · 19 Feb 2019 · Shaojie Xu, Sihan Zeng, Justin Romberg

Deep learning models have significantly improved the visual quality and accuracy of compressive sensing recovery.

Compressive Sensing
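
Recovery with a generative prior typically searches the generator's latent space for a code whose decoded signal matches the compressive measurements. The sketch below uses a fixed linear map as a stand-in generator so the gradient is exact and the example runs without a trained network (all matrices are toy values, not the paper's model):

```python
import numpy as np

def recover_latent(A, G, y, z0, steps=1000, lr=0.1):
    # Gradient descent on 0.5*||A G z - y||^2 over the latent code z.
    z = np.array(z0, dtype=float)
    M = A @ G  # measurement matrix composed with the linear "generator"
    for _ in range(steps):
        z -= lr * (M.T @ (M @ z - y))
    return z

# "Generator" G: 2-dim latent -> 4-dim signal; A: 4 signals -> 2 measurements.
G = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.]])
A = np.array([[1., 0., 1., 0.], [0., 1., 0., 1.]])
z_true = np.array([1.0, -2.0])
y = A @ (G @ z_true)                      # noiseless measurements
z_hat = recover_latent(A, G, y, np.zeros(2))
print(np.allclose(z_hat, z_true, atol=1e-4))  # True
```

With a trained nonlinear generator the same search runs by backpropagating through the network; structuring the latent variables, as in the paper's title, is aimed at making that search fast and stable.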
