Search Results for author: Amartya Mukherjee

Found 7 papers, 3 papers with code

Manifold-Guided Lyapunov Control with Diffusion Models

1 code implementation · 26 Mar 2024 · Amartya Mukherjee, Thanin Quartz, Jun Liu

This paper presents a novel approach to generating stabilizing controllers for a large class of dynamical systems using diffusion models.

Physics-Informed Neural Network Policy Iteration: Algorithms, Convergence, and Verification

no code implementations · 15 Feb 2024 · Yiming Meng, Ruikun Zhou, Amartya Mukherjee, Maxwell Fitzsimmons, Christopher Song, Jun Liu

We provide a theoretical analysis of both algorithms in terms of convergence of neural approximations towards the true optimal solutions in a general setting.

Denoising Diffusion Restoration Tackles Forward and Inverse Problems for the Laplace Operator

no code implementations · 13 Feb 2024 · Amartya Mukherjee, Melissa M. Stadt, Lena Podina, Mohammad Kohandel, Jun Liu

We present an approach to restore the solution and the parameters of the Poisson equation by exploiting the eigenvalues and eigenfunctions of the Laplacian operator.
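As a minimal illustration of the underlying idea (a textbook 1D example, not the paper's method), a Poisson problem -u''(x) = f(x) with zero Dirichlet boundary conditions can be solved by projecting f onto the Laplacian's eigenfunctions sin(kπx) and dividing each coefficient by the eigenvalue (kπ)²; the grid size and mode count below are illustrative choices.

```python
import numpy as np

# Solve -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0, via the Laplacian's
# eigenfunctions phi_k(x) = sqrt(2) * sin(k*pi*x), eigenvalues (k*pi)^2.
n, K = 200, 50                        # grid points, number of modes (assumed)
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
f = np.sin(3 * np.pi * x)             # forcing term with a known exact solution

u = np.zeros_like(x)
for k in range(1, K + 1):
    phi = np.sqrt(2.0) * np.sin(k * np.pi * x)   # k-th eigenfunction
    coef = np.sum(f * phi) * dx                  # projection of f onto phi_k
    u += coef / (k * np.pi) ** 2 * phi           # divide by the eigenvalue

# Exact solution for this forcing: u(x) = sin(3*pi*x) / (3*pi)^2
err = np.max(np.abs(u - np.sin(3 * np.pi * x) / (3 * np.pi) ** 2))
```

The division by each eigenvalue is what makes the forward map easy to invert in this basis, which is the structural fact the paper's restoration approach exploits.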


Harmonic Control Lyapunov Barrier Functions for Constrained Optimal Control with Reach-Avoid Specifications

no code implementations · 4 Oct 2023 · Amartya Mukherjee, Ruikun Zhou, Haocheng Chang, Jun Liu

This paper introduces harmonic control Lyapunov barrier functions (harmonic CLBF) that aid in constrained control problems such as reach-avoid problems.
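For reference (standard definition, not taken from the paper's text), a function $h$ is harmonic on a domain $\Omega$ if it satisfies Laplace's equation there:

```latex
\Delta h(x) \;=\; \sum_{i=1}^{n} \frac{\partial^2 h}{\partial x_i^2} \;=\; 0, \qquad x \in \Omega .
```

By the maximum principle, a harmonic function attains its extrema on the boundary of $\Omega$, a property that makes such functions natural candidates for barrier-type certificates in reach-avoid problems.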

Actor-Critic Methods using Physics-Informed Neural Networks: Control of a 1D PDE Model for Fluid-Cooled Battery Packs

1 code implementation · 18 May 2023 · Amartya Mukherjee, Jun Liu

The Hamilton-Jacobi-Bellman (HJB) equation is a PDE that evaluates the optimality of the value function and determines an optimal controller.
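In its common infinite-horizon form (notation assumed here, not quoted from the paper), for dynamics $\dot{x} = f(x, u)$ and running cost $L(x, u)$, the HJB equation characterizes the value function $V$ by

```latex
\min_{u} \Bigl[ \nabla V(x)^{\top} f(x, u) + L(x, u) \Bigr] = 0,
```

and the optimal controller is the minimizer, $u^{*}(x) = \arg\min_{u} \bigl[ \nabla V(x)^{\top} f(x, u) + L(x, u) \bigr]$.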

Bridging Physics-Informed Neural Networks with Reinforcement Learning: Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO)

no code implementations · 1 Feb 2023 · Amartya Mukherjee, Jun Liu

The Proximal Policy Optimization (PPO)-Clipped algorithm is improved upon in this implementation, as it uses a value network to compute the objective function for its policy network.
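For context, a sketch of the standard PPO clipped surrogate objective that the paper builds on (this is the generic PPO-Clip formula, not the paper's HJBPPO variant; the function name and the clipping parameter are assumptions):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO-Clip surrogate.

    ratio     -- pi_new(a|s) / pi_old(a|s) per sampled transition
    advantage -- advantage estimates, typically computed from a value network
    eps       -- clipping parameter (0.2 is a common default, assumed here)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the elementwise minimum makes the objective pessimistic,
    # discouraging policy updates that move the ratio far from 1.
    return np.minimum(unclipped, clipped).mean()
```

The dependence on the advantage (and hence on the value network) is the hook that HJB-based methods exploit: a value function satisfying the HJB equation yields the objective for the policy network.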


A Comparison of Reward Functions in Q-Learning Applied to a Cart Position Problem

1 code implementation · 25 May 2021 · Amartya Mukherjee

Growing advances in reinforcement learning have led to advances in control theory.
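The core update that any choice of reward function feeds into is the tabular Q-learning rule; the sketch below shows it in textbook form (the discretization, action space, and hyperparameters are illustrative assumptions, not the paper's cart-position setup):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    The reward r is where the compared reward functions differ."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((5, 2))                      # 5 discretized positions, 2 actions
Q = q_update(Q, s=2, a=1, r=1.0, s_next=3)
print(Q[2, 1])                            # prints 0.1
```

Because the reward enters only through the temporal-difference target, swapping reward functions changes which state-action values the agent learns to prefer while leaving the rest of the algorithm untouched.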
