no code implementations • 25 May 2024 • Yuchen Zhu, Tianrong Chen, Lingkai Kong, Evangelos A. Theodorou, Molei Tao
However, our trivialization technique creates a new momentum variable that stays in a simple $\textbf{fixed vector space}$.
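As a generic illustration of such a trivialization (a standard Lie-group example, not necessarily the paper's exact construction): for dynamics on a Lie group $G$, the velocity $\dot g \in T_g G$ lives in a space that changes with $g$, but left-trivialization replaces it with $\xi = g^{-1}\dot g$, which always lives in the Lie algebra $\mathfrak{g}$:

\[
\dot g = g\,\xi, \qquad \xi \in \mathfrak{g} \ \text{(a fixed vector space, independent of } g\text{)},
\]

so momentum-based updates can act on $\xi$ with ordinary vector-space arithmetic.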
no code implementations • 20 Apr 2024 • Chenru Duan, Guan-Horng Liu, Yuanqi Du, Tianrong Chen, Qiyuan Zhao, Haojun Jia, Carla P. Gomes, Evangelos A. Theodorou, Heather J. Kulik
The RMSD and barrier height errors are further improved by roughly 25% through pretraining React-OT on a large reaction dataset obtained with a lower level of theory, GFN2-xTB.
no code implementations • 9 Apr 2024 • Yuchen Zhu, Tianrong Chen, Evangelos A. Theodorou, Xie Chen, Molei Tao
This article considers generative modeling of the (mixed) states of quantum systems and proposes an approach based on denoising diffusion models.
no code implementations • 12 Nov 2023 • Valentin De Bortoli, Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou, Weilie Nie
In this paper, we highlight that while flow and bridge matching processes preserve the information of the marginal distributions, they do \emph{not} necessarily preserve the coupling information unless additional, stronger optimality conditions are met.
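The distinction between marginals and couplings can be seen in a toy example (an illustration of the general principle, not taken from the paper): two joint distributions can share identical marginals while encoding entirely different pairings of source and target samples.

```python
import numpy as np

# Two couplings of the same pair of marginals. Both joints below have
# uniform marginals over {0, 1}, yet they pair x and y very differently.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])  # x and y drawn independently
identity    = np.array([[0.5, 0.0],
                        [0.0, 0.5]])    # y is always equal to x

for joint in (independent, identity):
    # Row sums give the marginal of x, column sums the marginal of y.
    print(joint.sum(axis=1), joint.sum(axis=0))
# Both print [0.5 0.5] [0.5 0.5]: matching the marginals alone cannot
# distinguish the couplings, which is exactly the gap highlighted above.
```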
1 code implementation • 11 Oct 2023 • Tianrong Chen, Jiatao Gu, Laurent Dinh, Evangelos A. Theodorou, Joshua Susskind, Shuangfei Zhai
In this work, we introduce a novel generative modeling framework grounded in \textbf{phase space dynamics}, where a phase space is defined as an augmented space encompassing both position and velocity.
1 code implementation • NeurIPS 2023 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou, Molei Tao
In this work, we propose Mirror Diffusion Models (MDM), a new class of diffusion models that generate data on convex constrained sets without losing any tractability.
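The mirror-map idea behind constrained generation can be sketched for the simplest case, the unit interval $(0,1)$ (a hypothetical one-dimensional illustration; the paper's maps and training procedure are more general): a bijection pushes constrained data onto all of $\mathbb{R}$, any standard diffusion runs in that unconstrained dual space, and the inverse map returns samples that satisfy the constraint by construction.

```python
import numpy as np

def mirror_map(x):
    """Entropic mirror map: the logit sends (0, 1) bijectively onto R."""
    return np.log(x / (1.0 - x))

def inverse_mirror_map(y):
    """The sigmoid maps the unconstrained dual space back into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-y))

rng = np.random.default_rng(0)
x = rng.uniform(0.01, 0.99, size=1000)      # constrained data in (0, 1)
y = mirror_map(x)                           # push to unconstrained dual space
y_noisy = y + rng.normal(size=y.shape)      # any ordinary diffusion runs here
x_back = inverse_mirror_map(y_noisy)        # samples land in (0, 1) automatically
assert np.all((x_back > 0) & (x_back < 1))  # constraint holds by construction
```

The appeal of this construction is that the diffusion itself never needs to know about the constraint, which is handled entirely by the change of variables.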
1 code implementation • 20 Sep 2022 • Guan-Horng Liu, Tianrong Chen, Oswin So, Evangelos A. Theodorou
In this work, we aim to solve a challenging class of MFGs in which the differentiability of these interacting preferences may not be available to the solver, and the population is required to converge exactly to some desired distribution.
no code implementations • 5 Apr 2022 • Tianrong Chen, Ziyi Wang, Evangelos A. Theodorou
Our approach relies on the probabilistic representation of the solution of the Hamilton-Jacobi-Bellman partial differential equation.
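For context, the classical example of such a representation is the Feynman–Kac formula (stated here in a generic form, not the paper's exact setting): the solution of the linear terminal-value PDE

\[
\partial_t u + f^\top \nabla u + \tfrac12 \operatorname{Tr}\!\big(\sigma\sigma^\top \nabla^2 u\big) - V u = 0, \qquad u(T, x) = g(x),
\]

admits the probabilistic representation

\[
u(t, x) = \mathbb{E}\Big[\, e^{-\int_t^T V(X_s)\,ds}\, g(X_T) \,\Big|\, X_t = x \Big], \qquad dX_s = f(X_s)\,ds + \sigma(X_s)\,dW_s,
\]

which allows the PDE solution to be estimated by sampling trajectories of the forward SDE rather than discretizing the state space.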
1 code implementation • ICLR 2022 • Tianrong Chen, Guan-Horng Liu, Evangelos A. Theodorou
However, it remains unclear whether the optimization principle of SB relates to the modern training of deep generative models, which often rely on constructing log-likelihood objectives. This raises questions about the suitability of SB models as a principled alternative for generative applications.
Ranked #51 on Image Generation on CIFAR-10
1 code implementation • NeurIPS 2021 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
We propose a novel second-order optimization framework for training the emerging deep continuous-time models, specifically the Neural Ordinary Differential Equations (Neural ODEs).
no code implementations • 8 May 2021 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
The connection between training deep neural networks (DNNs) and optimal control theory (OCT) has attracted considerable attention as a principled tool of algorithmic design.
no code implementations • 21 Nov 2020 • Tianrong Chen, Ziyi Wang, Ioannis Exarchos, Evangelos A. Theodorou
We showcase superior performance of our framework over the state-of-the-art deep fictitious play algorithm on an inter-bank lending/borrowing problem in terms of multiple metrics.
no code implementations • 28 Sep 2020 • Tianrong Chen, Ziyi Wang, Ioannis Exarchos, Evangelos Theodorou
In this paper we present a deep learning framework for solving large-scale multi-agent non-cooperative stochastic games using fictitious play.
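The core of fictitious play can be illustrated on the smallest possible example, the matching-pennies matrix game (a toy sketch of the general principle; the paper addresses large-scale stochastic games): each player best-responds to the opponent's empirical action frequencies, and in zero-sum games those frequencies converge to a Nash equilibrium.

```python
import numpy as np

# Fictitious play on matching pennies. Row player's payoff matrix is A;
# the column player receives -A (zero-sum).
A = np.array([[1, -1],
              [-1, 1]])

counts_row = np.ones(2)   # empirical action counts, initialized uniformly
counts_col = np.ones(2)
for _ in range(20000):
    # Each player best-responds to the opponent's empirical mixed strategy.
    row_action = np.argmax(A @ (counts_col / counts_col.sum()))
    col_action = np.argmax(-(counts_row / counts_row.sum()) @ A)
    counts_row[row_action] += 1
    counts_col[col_action] += 1

print(counts_row / counts_row.sum())  # approaches the mixed equilibrium [0.5, 0.5]
```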
no code implementations • 17 Jul 2020 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
Connections between Deep Neural Network (DNN) training and optimal control theory have attracted considerable attention as a principled tool of algorithmic design.
no code implementations • L4DC 2020 • Marcus Pereira, Ziyi Wang, Tianrong Chen, Emily Reed, Evangelos Theodorou
We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton-Jacobi-Bellman partial differential equations.
no code implementations • ICLR 2021 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
Interpretation of Deep Neural Networks (DNNs) training as an optimal control problem with nonlinear dynamical systems has received considerable attention recently, yet the algorithmic development remains relatively limited.
no code implementations • 25 Sep 2019 • Marcus Pereira, Ziyi Wang, Tianrong Chen, Evangelos Theodorou
We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton-Jacobi-Bellman partial differential equations.
no code implementations • 11 Jun 2019 • Marcus A. Pereira, Ziyi Wang, Tianrong Chen, Emily Reed, Evangelos A. Theodorou
We present a deep recurrent neural network architecture to solve a class of stochastic optimal control problems described by fully nonlinear Hamilton-Jacobi-Bellman partial differential equations.