Search Results for author: Jiequn Han

Found 33 papers, 14 papers with code

Stochastic Optimal Control Matching

1 code implementation • 4 Dec 2023 • Carles Domingo-Enrich, Jiequn Han, Brandon Amos, Joan Bruna, Ricky T. Q. Chen

Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control that stems from the same philosophy as the conditional score matching loss for diffusion models.
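
For orientation only, the conditional score matching loss referenced above has the following standard form in the diffusion-model literature (illustrative notation; the SOCM loss itself is the paper's adaptation of this idea to the control setting and is not reproduced here):

```latex
% Standard conditional (denoising) score matching objective from the
% diffusion-model literature; illustrative notation, not the SOCM loss.
\[
\mathcal{L}_{\mathrm{CSM}}(\theta)
  = \mathbb{E}_{t,\; x_0,\; x_t \sim p_t(\cdot \mid x_0)}
    \Big[ \big\| s_\theta(x_t, t) - \nabla_{x_t} \log p_t(x_t \mid x_0) \big\|^2 \Big]
\]
```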


Improving Gradient Computation for Differentiable Physics Simulation with Contacts

1 code implementation • 28 Apr 2023 • Yaofeng Desmond Zhong, Jiequn Han, Biswadip Dey, Georgia Olympia Brikis

We find that existing differentiable simulation methods provide inaccurate gradients when the contact normal direction is not fixed - a general situation when the contacts are between two moving objects.

Reinforcement Learning with Function Approximation: From Linear to Nonlinear

no code implementations • 20 Feb 2023 • Jihao Long, Jiequn Han

These results rely on $L^\infty$ and UCB estimates of the estimation error, which can handle the distribution mismatch phenomenon.

Reinforcement Learning (RL)

A Neural Network Warm-Start Approach for the Inverse Acoustic Obstacle Scattering Problem

1 code implementation • 16 Dec 2022 • Mo Zhou, Jiequn Han, Manas Rachh, Carlos Borges

We present a neural network warm-start approach for solving the inverse scattering problem, where an initial guess for the optimization problem is obtained using a trained neural network.
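
The warm-start structure, reduced to its essentials: a trained network supplies the initial guess, and a conventional optimizer refines it against the forward model. The sketch below is not the paper's code; the forward operator and the "network" are toy stand-ins so that the loop runs end to end.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch only (not the paper's code). A trained network supplies
# an initial guess for the obstacle parameters; classical optimization then
# refines it against a forward model. Both the "network" and the forward model
# below are toy stand-ins so the structure is runnable.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))              # toy linear forward operator

def forward_scatter(coeffs):
    """Stand-in for a PDE-based forward scattering solve."""
    return A @ coeffs

def predict_initial_shape(measured):
    """Stand-in for the trained warm-start network (here: a crude pseudo-inverse)."""
    return np.linalg.pinv(A) @ measured + 0.1 * rng.standard_normal(5)

def misfit(coeffs, measured):
    """Least-squares mismatch between simulated and measured data."""
    return np.sum((forward_scatter(coeffs) - measured) ** 2)

true_coeffs = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
measured = forward_scatter(true_coeffs)

coeffs0 = predict_initial_shape(measured)                       # neural-network warm start
result = minimize(misfit, coeffs0, args=(measured,), method="L-BFGS-B")  # refinement
print("initial misfit:", misfit(coeffs0, measured), "final misfit:", result.fun)
```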

Offline Supervised Learning V.S. Online Direct Policy Optimization: A Comparative Study and A Unified Training Paradigm for Neural Network-Based Optimal Feedback Control

1 code implementation • 29 Nov 2022 • Yue Zhao, Jiequn Han

We first conduct a comparative study of two prevalent approaches: offline supervised learning and online direct policy optimization.

Pandemic Control, Game Theory and Machine Learning

no code implementations • 18 Aug 2022 • Yao Xuan, Robert Balkin, Jiequn Han, Ruimeng Hu, Hector D. Ceniceros

Game theory has been an effective tool in the control of disease spread and in suggesting optimal policies at both individual and area levels.

Decision Making

Differentiable Physics Simulations with Contacts: Do They Have Correct Gradients w.r.t. Position, Velocity and Control?

1 code implementation • 8 Jul 2022 • Yaofeng Desmond Zhong, Jiequn Han, Georgia Olympia Brikis

In recent years, an increasing amount of work has focused on differentiable physics simulation and has produced a set of open source projects such as Tiny Differentiable Simulator, Nimble Physics, diffTaichi, Brax, Warp, Dojo and DiffCoSim.


Learning High-Dimensional McKean-Vlasov Forward-Backward Stochastic Differential Equations with General Distribution Dependence

1 code implementation • 25 Apr 2022 • Jiequn Han, Ruimeng Hu, Jihao Long

These coefficient functions are used to approximate the MV-FBSDEs' model coefficients with full distribution dependence, and are updated by solving another supervised learning problem using training data simulated from the last iteration's FBSDE solutions.
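
The iterative structure described here can be sketched schematically. Every name below is a stand-in (the actual coefficient networks, FBSDE solver, and losses are in the paper's repository); the sketch only illustrates the alternation between simulating with frozen coefficient approximations and refitting them by supervised learning.

```python
import torch
import torch.nn as nn

def solve_decoupled_fbsde(coeff_net):
    """Stand-in: solve the FBSDE with the coefficient approximation frozen and
    return simulated (state, target-coefficient) training pairs."""
    x = torch.randn(1024, 1)
    y = torch.tanh(x) + 0.05 * torch.randn_like(x)   # toy ground-truth coefficient
    return x, y

coeff_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(coeff_net.parameters(), lr=1e-3)

for stage in range(5):                               # outer fictitious-play-style iterations
    x, y = solve_decoupled_fbsde(coeff_net)          # simulate with current coefficients
    for _ in range(200):                             # inner supervised learning problem
        opt.zero_grad()
        loss = torch.mean((coeff_net(x) - y) ** 2)
        loss.backward()
        opt.step()
    print(f"stage {stage}: supervised-learning loss {loss.item():.4f}")
```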

DeepHAM: A Global Solution Method for Heterogeneous Agent Models with Aggregate Shocks

no code implementations • 29 Dec 2021 • Jiequn Han, Yucheng Yang, Weinan E

An efficient, reliable, and interpretable global solution method, the Deep learning-based algorithm for Heterogeneous Agent Models (DeepHAM), is proposed for solving high-dimensional heterogeneous agent models with aggregate shocks.

Frame invariance and scalability of neural operators for partial differential equations

no code implementations • 28 Dec 2021 • Muhammad I. Zafar, Jiequn Han, Xu-Hui Zhou, Heng Xiao

Partial differential equations (PDEs) play a dominant role in the mathematical modeling of many complex dynamical processes.

Perturbational Complexity by Distribution Mismatch: A Systematic Analysis of Reinforcement Learning in Reproducing Kernel Hilbert Space

no code implementations • 5 Nov 2021 • Jihao Long, Jiequn Han

As a byproduct, we show that when the reward functions lie in a high dimensional RKHS, even if the transition probability is known and the action space is finite, it is still possible for RL problems to suffer from the curse of dimensionality.

Reinforcement Learning (RL)

A Class of Dimension-free Metrics for the Convergence of Empirical Measures

no code implementations • 24 Apr 2021 • Jiequn Han, Ruimeng Hu, Jihao Long

The proposed metrics fall into the category of integral probability metrics, for which we specify criteria of test function spaces to guarantee the property of being free of CoD.

An $L^2$ Analysis of Reinforcement Learning in High Dimensions with Kernel and Neural Network Approximation

no code implementations • 15 Apr 2021 • Jihao Long, Jiequn Han, Weinan E

Reinforcement learning (RL) algorithms based on high-dimensional function approximation have achieved tremendous empirical success in large-scale problems with an enormous number of states.

Reinforcement Learning (RL)

Frame-independent vector-cloud neural network for nonlocal constitutive modeling on arbitrary grids

2 code implementations • 11 Mar 2021 • Xu-Hui Zhou, Jiequn Han, Heng Xiao

As such, the network can deal with any number of arbitrarily arranged grid points and thus is suitable for unstructured meshes in fluid simulations.
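
The ability to handle any number of arbitrarily ordered points typically comes from a shared per-point embedding followed by permutation-invariant pooling. The sketch below shows that generic pattern only; it is not the vector-cloud architecture itself, and the feature dimension is an arbitrary choice.

```python
import torch
import torch.nn as nn

# Generic permutation-invariant "point cloud -> scalar" model: each point is
# embedded independently, then a symmetric pooling makes the prediction
# independent of both the ordering and the number of points. Simplified
# illustration, not the vector-cloud network from the paper.
class CloudRegressor(nn.Module):
    def __init__(self, in_dim=7, hidden=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, cloud):                 # cloud: (num_points, in_dim)
        per_point = self.embed(cloud)         # embed each point independently
        pooled = per_point.mean(dim=0)        # invariant to ordering and count
        return self.head(pooled)              # predicted local closure quantity

model = CloudRegressor()
print(model(torch.randn(50, 7)).shape, model(torch.randn(173, 7)).shape)
```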

Recurrent Neural Networks for Stochastic Control Problems with Delay

1 code implementation • 5 Jan 2021 • Jiequn Han, Ruimeng Hu

Stochastic control problems with delay are challenging due to the path-dependent nature of the system and thus its intrinsically high dimensionality.

Portfolio Optimization
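
One natural way to parameterize a policy for such path-dependent problems, sketched generically below, is to feed the recent trajectory segment into a recurrent network and read the control off its final hidden state. Network sizes and the state/control dimensions are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Generic recurrent policy for a control problem with delay: the input is the
# recent path segment (the delayed part of the state), and the control is read
# off the final hidden state. Illustrative only.
class RecurrentPolicy(nn.Module):
    def __init__(self, state_dim=1, hidden=32, control_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, control_dim)

    def forward(self, path):                  # path: (batch, delay_steps, state_dim)
        _, (h_n, _) = self.lstm(path)         # h_n: (1, batch, hidden)
        return self.out(h_n[-1])              # control: (batch, control_dim)

policy = RecurrentPolicy()
u = policy(torch.randn(16, 20, 1))            # batch of 16 paths, 20 delay steps each
print(u.shape)                                # torch.Size([16, 1])
```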

Optimal Policies for a Pandemic: A Stochastic Game Approach and a Deep Learning Algorithm

no code implementations • 12 Dec 2020 • Yao Xuan, Robert Balkin, Jiequn Han, Ruimeng Hu, Hector D. Ceniceros

Game theory has been an effective tool in the control of disease spread and in suggesting optimal policies at both individual and area levels.

On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis

no code implementations • ICLR 2021 • Zhong Li, Jiequn Han, Weinan E, Qianxiao Li

We study the approximation properties and optimization dynamics of recurrent neural networks (RNNs) when applied to learn input-output relationships in temporal data.

Global Convergence of Policy Gradient for Linear-Quadratic Mean-Field Control/Game in Continuous Time

no code implementations • 16 Aug 2020 • Weichen Wang, Jiequn Han, Zhuoran Yang, Zhaoran Wang

Reinforcement learning is a powerful tool to learn the optimal policy of possibly multiple agents by interacting with the environment.

Convergence of Deep Fictitious Play for Stochastic Differential Games

no code implementations • 12 Aug 2020 • Jiequn Han, Ruimeng Hu, Jihao Long

Stochastic differential games have been used extensively to model agents' competition in finance, for instance, in P2P lending platforms in the fintech industry, the banking system for systemic risk, and insurance markets.


Integrating Machine Learning with Physics-Based Modeling

no code implementations • 4 Jun 2020 • Weinan E, Jiequn Han, Linfeng Zhang

Machine learning is poised as a very powerful tool that can drastically improve our ability to carry out scientific research.


Escaping Saddle Points Efficiently with Occupation-Time-Adapted Perturbations

no code implementations • 9 May 2020 • Xin Guo, Jiequn Han, Mahan Tajrobehkar, Wenpin Tang

Motivated by the super-diffusivity of self-repelling random walk, which has roots in statistical physics, this paper develops a new perturbation mechanism for optimization algorithms.

Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach

no code implementations • 7 Feb 2020 • Jiequn Han, Jianfeng Lu, Mo Zhou

We propose a new method to solve eigenvalue problems for linear and semilinear second order differential operators in high dimensions based on deep neural networks.

Deep Fictitious Play for Finding Markovian Nash Equilibrium in Multi-Agent Games

no code implementations • 4 Dec 2019 • Jiequn Han, Ruimeng Hu

We propose a deep neural network-based algorithm to identify the Markovian Nash equilibrium of general large $N$-player stochastic differential games.
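
The fictitious-play skeleton underlying such algorithms can be illustrated with a toy game: at each round every player best-responds while the other players' strategies stay frozen at the previous round. In the paper each best response is a stochastic control problem solved with deep neural networks; the closed-form quadratic best response below is purely to show the iteration structure.

```python
import numpy as np

# Toy fictitious-play loop: each round, every player best-responds to the
# other players' strategies from the previous round. The quadratic game and
# its closed-form best response are illustrative stand-ins only.
N = 10
rng = np.random.default_rng(0)
strategies = rng.standard_normal(N)            # initial strategies

def best_response(i, strategies, coupling=0.5):
    others_mean = (strategies.sum() - strategies[i]) / (N - 1)
    # argmin_a (a - coupling * m)^2 + a^2  =>  a = coupling * m / 2
    return coupling * others_mean / 2.0

for _ in range(50):                            # fictitious-play rounds
    strategies = np.array([best_response(i, strategies) for i in range(N)])

print("approximate Nash strategies:", np.round(strategies, 6))
```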

Convergence of the Deep BSDE Method for Coupled FBSDEs

no code implementations • 3 Nov 2018 • Jiequn Han, Jihao Long

The recently proposed numerical algorithm, deep BSDE method, has shown remarkable performance in solving high-dimensional forward-backward stochastic differential equations (FBSDEs) and parabolic partial differential equations (PDEs).

Solving Many-Electron Schrödinger Equation Using Deep Neural Networks

no code implementations • 18 Jul 2018 • Jiequn Han, Linfeng Zhang, Weinan E

We introduce a new family of trial wave-functions based on deep neural networks to solve the many-electron Schrödinger equation.

Computational Physics • Chemical Physics

A Mean-Field Optimal Control Formulation of Deep Learning

no code implementations • 3 Jul 2018 • Weinan E, Jiequn Han, Qianxiao Li

This paper introduces the mathematical formulation of the population risk minimization problem in deep learning as a mean-field optimal control problem.

End-to-end Symmetry Preserving Inter-atomic Potential Energy Model for Finite and Extended Systems

1 code implementation • NeurIPS 2018 • Linfeng Zhang, Jiequn Han, Han Wang, Wissam A. Saidi, Roberto Car, Weinan E

Machine learning models are changing the paradigm of molecular modeling, which is a fundamental tool for material science, chemistry, and computational biology.

Computational Physics • Materials Science • Chemical Physics

DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics

2 code implementations • 11 Dec 2017 • Han Wang, Linfeng Zhang, Jiequn Han, Weinan E

Here we describe DeePMD-kit, a package written in Python/C++ that has been designed to minimize the effort required to build deep learning-based representations of potential energy and force fields and to perform molecular dynamics.

Deep Potential Molecular Dynamics: a scalable model with the accuracy of quantum mechanics

5 code implementations • 30 Jul 2017 • Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, Weinan E

We introduce a scheme for molecular simulations, the Deep Potential Molecular Dynamics (DeePMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data.
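
The central modeling idea, writing the total energy as a sum of atomic contributions predicted from each atom's local environment and obtaining forces by differentiation, can be sketched as follows. The descriptor here is a crude stand-in (sorted inverse neighbor distances), not DeePMD's symmetry-preserving construction, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

# Simplified sketch of a Deep-Potential-style model: total energy is a sum of
# per-atom energies predicted from a descriptor of each atom's local
# environment; forces follow from automatic differentiation. The descriptor
# (sorted inverse distances, zero-padded) is only a toy stand-in.
class ToyDeepPotential(nn.Module):
    def __init__(self, max_neighbors=16, hidden=64):
        super().__init__()
        self.max_neighbors = max_neighbors
        self.atomic_net = nn.Sequential(nn.Linear(max_neighbors, hidden), nn.Tanh(),
                                        nn.Linear(hidden, hidden), nn.Tanh(),
                                        nn.Linear(hidden, 1))

    def descriptor(self, coords):                         # coords: (n_atoms, 3)
        dists = torch.cdist(coords, coords)               # pairwise distances
        inv = 1.0 / (dists + torch.eye(len(coords)))      # avoid division by zero on diagonal
        inv = inv - torch.eye(len(coords))                # drop the self term
        inv, _ = torch.sort(inv, dim=1, descending=True)  # order-independent features
        d = inv[:, : self.max_neighbors]
        pad = self.max_neighbors - d.shape[1]
        return nn.functional.pad(d, (0, max(pad, 0)))     # fixed-width per-atom descriptor

    def forward(self, coords):
        atomic_energies = self.atomic_net(self.descriptor(coords))
        return atomic_energies.sum()                      # extensive total energy

coords = torch.randn(8, 3, requires_grad=True)
model = ToyDeepPotential()
energy = model(coords)
forces = -torch.autograd.grad(energy, coords)[0]          # F = -dE/dR
print(energy.item(), forces.shape)
```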

Solving high-dimensional partial differential equations using deep learning

6 code implementations • 9 Jul 2017 • Jiequn Han, Arnulf Jentzen, Weinan E

Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as the "curse of dimensionality".


Deep Potential: a general representation of a many-body potential energy surface

1 code implementation • 5 Jul 2017 • Jiequn Han, Linfeng Zhang, Roberto Car, Weinan E

When tested on a wide variety of examples, Deep Potential is able to reproduce the original model, whether empirical or quantum mechanics based, within chemical accuracy.

Computational Physics

Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations

5 code implementations • 15 Jun 2017 • Weinan E, Jiequn Han, Arnulf Jentzen

We propose a new algorithm for solving parabolic partial differential equations (PDEs) and backward stochastic differential equations (BSDEs) in high dimension, by making an analogy between the BSDE and reinforcement learning with the gradient of the solution playing the role of the policy function, and the loss function given by the error between the prescribed terminal condition and the solution of the BSDE.

Reinforcement Learning (RL)
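
The structure described above can be sketched on a toy problem with zero driver (so the PDE reduces to a heat equation with a terminal condition): one subnetwork per time step plays the role of the "policy" Z_n ≈ ∇u(t_n, X_n), and the loss is the mismatch with the prescribed terminal condition. Dimensions, the terminal function, and hyperparameters are illustrative choices, not the paper's examples.

```python
import torch
import torch.nn as nn

# Minimal deep-BSDE-style sketch (zero driver f, i.e. a heat equation with a
# terminal condition g): learn u(0, x0) directly and one network per time step
# for Z_n ~ grad_x u(t_n, X_n); train on the terminal-condition mismatch.
d, n_steps, dt, batch = 10, 20, 0.05, 256
g = lambda x: torch.log(0.5 * (1 + (x ** 2).sum(dim=1, keepdim=True)))  # terminal condition

y0 = nn.Parameter(torch.zeros(1))                     # u(0, x0), learned directly
z_nets = nn.ModuleList([nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, d))
                        for _ in range(n_steps)])     # one "policy" network per step
opt = torch.optim.Adam([y0, *z_nets.parameters()], lr=1e-2)

for step in range(500):
    x = torch.zeros(batch, d)                         # X_0
    y = y0.expand(batch, 1)
    for n in range(n_steps):
        dw = torch.randn(batch, d) * dt ** 0.5        # Brownian increments
        y = y + (z_nets[n](x) * dw).sum(dim=1, keepdim=True)  # dY = Z . dW  (f = 0)
        x = x + dw                                    # dX = dW  (no drift, sigma = I)
    loss = torch.mean((y - g(x)) ** 2)                # prescribed terminal condition mismatch
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned u(0, 0) ≈", y0.item())
```

After training, y0 serves as the approximation of the solution value at the starting point, while the step-wise networks supply the gradient along simulated paths.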

Deep Learning Approximation for Stochastic Control Problems

no code implementations • 2 Nov 2016 • Jiequn Han, Weinan E

Many real world stochastic control problems suffer from the "curse of dimensionality".
