no code implementations • 20 Apr 2024 • Chenru Duan, Guan-Horng Liu, Yuanqi Du, Tianrong Chen, Qiyuan Zhao, Haojun Jia, Carla P. Gomes, Evangelos A. Theodorou, Heather J. Kulik
The RMSD and barrier height errors are further reduced by roughly 25% by pretraining React-OT on a large reaction dataset computed at a lower level of theory, GFN2-xTB.
no code implementations • 12 Nov 2023 • Valentin De Bortoli, Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou, Weilie Nie
In this paper, we highlight that while flow and bridge matching processes preserve the information of the marginal distributions, they do \emph{not} necessarily preserve the coupling information unless additional, stronger optimality conditions are met.
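For context, here is a toy numeric check of that distinction (not taken from the paper): for p0 = p1 = N(0, 1) with *independent* training pairs and straight-line interpolation, the marginal vector field of flow matching has a closed form, and integrating it couples every x0 to itself, so the boundary marginals survive while the independent coupling does not.

```python
# Toy illustration (not from the paper): with p0 = p1 = N(0, 1) and
# independent training pairs (x0, x1), linear-interpolation flow matching
# x_t = (1 - t) x0 + t x1 has the closed-form marginal vector field
#   v(x, t) = (2t - 1) x / ((1 - t)^2 + t^2).
# Integrating this ODE maps each x0 back to x1 = x0: both boundary marginals
# are preserved, but the independent coupling of the training pairs is not.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(5)          # samples from p0 = N(0, 1)

def v(x, t):
    return (2.0 * t - 1.0) * x / ((1.0 - t) ** 2 + t ** 2)

# Integrate dx/dt = v(x, t) from t = 0 to t = 1 with a fine Euler scheme.
x, n_steps = x0.copy(), 10_000
for k in range(n_steps):
    t = k / n_steps
    x += v(x, t) / n_steps

print(np.allclose(x, x0, atol=1e-2))   # True: the flow couples x0 to itself,
                                        # not to an independent draw from p1.
```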
1 code implementation • 3 Oct 2023 • Guan-Horng Liu, Yaron Lipman, Maximilian Nickel, Brian Karrer, Evangelos A. Theodorou, Ricky T. Q. Chen
Modern distribution matching algorithms for training diffusion or flow models directly prescribe the time evolution of the marginal distributions between two boundary distributions.
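As a point of reference, the sketch below shows the generic conditional flow-matching objective that "prescribing the marginal path" amounts to in its simplest form; it is not the paper's own algorithm, and the velocity network `v_theta(x_t, t)` is an assumed user-supplied module.

```python
# Minimal conditional flow-matching sketch (generic; not the paper's method).
# Assumes a user-supplied velocity network v_theta(x_t, t) returning a tensor
# with the same shape as x_t.
import torch

def flow_matching_loss(v_theta, x0, x1):
    """One stochastic estimate of the flow-matching objective.

    x0, x1: batches drawn from the two boundary distributions; the prescribed
    marginal path here is the straight-line interpolation x_t = (1 - t) x0 + t x1.
    """
    t = torch.rand(x0.shape[0], *([1] * (x0.dim() - 1)), device=x0.device)
    x_t = (1.0 - t) * x0 + t * x1          # sample the prescribed path
    target = x1 - x0                       # its conditional velocity
    return ((v_theta(x_t, t) - target) ** 2).mean()
```

Minimizing this regression loss over random t and pairs (x0, x1) fits a velocity field whose ODE transports the first boundary distribution to the second.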
1 code implementation • NeurIPS 2023 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou, Molei Tao
In this work, we propose Mirror Diffusion Models (MDM), a new class of diffusion models that generate data on convex constrained sets without losing any tractability.
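To illustrate the mirror-map idea in its simplest form (this is only a sketch; the paper's exact construction, constraint sets, and training loop differ), a convex potential on the constraint set yields a bijection into an unconstrained space where an ordinary diffusion model can be trained:

```python
# Sketch of the mirror-map idea behind constrained diffusion (illustration only).
# For the open unit hypercube (0, 1)^d, the convex potential
# phi(x) = sum_i [ x_i log x_i + (1 - x_i) log(1 - x_i) ] gives the
# coordinate-wise mirror map y = grad phi(x) = logit(x), a bijection onto R^d.
import torch

def to_mirror(x, eps=1e-6):
    """Map hypercube-constrained data into unconstrained 'mirror' space."""
    x = x.clamp(eps, 1.0 - eps)
    return torch.log(x) - torch.log1p(-x)      # logit, applied coordinate-wise

def from_mirror(y):
    """Map a generated mirror-space sample back onto (0, 1)^d exactly."""
    return torch.sigmoid(y)

# A standard unconstrained diffusion model can be trained on to_mirror(x);
# its samples, pushed through from_mirror, satisfy the constraint by construction.
```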
1 code implementation • 23 Aug 2023 • Sascha Diefenbacher, Guan-Horng Liu, Vinicius Mikuni, Benjamin Nachman, Weili Nie
Machine learning-based unfolding has enabled unbinned and high-dimensional differential cross section measurements.
1 code implementation • 3 Jul 2023 • Lorenz Richter, Julius Berner, Guan-Horng Liu
Recently, a series of papers proposed deep learning-based approaches to sample from unnormalized target densities using controlled diffusion processes.
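The heavily simplified sketch below shows the kind of KL-type stochastic-optimal-control objective this line of work starts from; it is an illustration only (the paper studies and improves on such path-space objectives), and both the example target `log_rho` and the small control network are assumptions made here for completeness.

```python
# Heavily simplified sketch of sampling from an unnormalized density with a
# learned controlled diffusion (illustration only; not the paper's method).
# Controlled SDE: dX = u_theta(X, t) dt + sigma dW, started at X_0 = 0.
import math
import torch
import torch.nn as nn

dim, sigma, n_steps = 2, 1.0, 50
dt = 1.0 / n_steps

def log_rho(x):                      # example unnormalized target: two modes
    return torch.logsumexp(torch.stack([-(x - 2).pow(2).sum(-1),
                                        -(x + 2).pow(2).sum(-1)]), dim=0)

u_theta = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(u_theta.parameters(), lr=1e-3)

def kl_objective(batch=256):
    x = torch.zeros(batch, dim)                       # rollout of the controlled SDE
    run_cost = torch.zeros(batch)
    for k in range(n_steps):
        t = torch.full((batch, 1), k * dt)
        u = u_theta(torch.cat([x, t], dim=-1))
        run_cost = run_cost + u.pow(2).sum(-1) / (2 * sigma**2) * dt   # Girsanov cost
        x = x + u * dt + sigma * math.sqrt(dt) * torch.randn_like(x)
    # terminal cost: log-density of the uncontrolled reference at time 1, N(0, sigma^2 I)
    log_ref = -x.pow(2).sum(-1) / (2 * sigma**2) \
              - 0.5 * dim * math.log(2 * math.pi * sigma**2)
    return (run_cost + log_ref - log_rho(x)).mean()   # KL to the target, up to log Z

for step in range(2000):
    opt.zero_grad()
    loss = kl_objective()
    loss.backward()
    opt.step()
```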
1 code implementation • 12 Feb 2023 • Guan-Horng Liu, Arash Vahdat, De-An Huang, Evangelos A. Theodorou, Weili Nie, Anima Anandkumar
We propose Image-to-Image Schrödinger Bridge (I²SB), a new class of conditional diffusion models that directly learn the nonlinear diffusion processes between two given distributions.
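When the two distributions come as paired samples (e.g. clean and degraded images), training data for such a bridge can be drawn from an analytic posterior. The sketch below is simplified to a unit-rate Brownian bridge; the paper derives the general-schedule posterior and the full training and sampling procedure, and the denoiser `net` is an assumed module predicting x0 from (x_t, t).

```python
# Minimal sketch of building training inputs from paired samples
# (x0 = clean, x1 = degraded), simplified to a unit-rate Brownian bridge;
# illustration only, not the paper's full procedure.
import torch

def bridge_training_step(net, x0, x1, noise_scale=1.0):
    b = x0.shape[0]
    t = torch.rand(b, *([1] * (x0.dim() - 1)), device=x0.device)
    mean = (1.0 - t) * x0 + t * x1                    # Brownian-bridge mean
    std = noise_scale * (t * (1.0 - t)).sqrt()        # and standard deviation
    x_t = mean + std * torch.randn_like(x0)
    return ((net(x_t, t) - x0) ** 2).mean()           # regress the clean image
```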
1 code implementation • 20 Sep 2022 • Guan-Horng Liu, Tianrong Chen, Oswin So, Evangelos A. Theodorou
In this work, we aim to solve a challenging class of mean-field games (MFGs) in which the interacting preferences may not be differentiable from the solver's perspective, and the population is required to converge exactly to some desired distribution.
1 code implementation • ICLR 2022 • Tianrong Chen, Guan-Horng Liu, Evangelos A. Theodorou
However, it remains unclear whether the optimization principle of SB relates to the modern training of deep generative models, which often relies on constructing log-likelihood objectives. This raises questions about the suitability of SB models as a principled alternative for generative applications.
Ranked #44 on Image Generation on CIFAR-10
1 code implementation • NeurIPS 2021 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
We propose a novel second-order optimization framework for training emerging deep continuous-time models, specifically Neural Ordinary Differential Equations (Neural ODEs).
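For readers unfamiliar with the model class, here is a minimal Neural ODE forward/backward pass using the community `torchdiffeq` package; the paper's contribution is the second-order optimizer used to train such models, which is not reproduced here.

```python
# Minimal Neural ODE for context (illustration only; the paper's second-order
# training method is not shown).
import torch
import torch.nn as nn
from torchdiffeq import odeint   # pip install torchdiffeq

class ODEFunc(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))

    def forward(self, t, x):                 # dx/dt = f_theta(t, x)
        return self.net(x)

func = ODEFunc()
x0 = torch.randn(16, 2)                      # batch of initial states
t = torch.linspace(0.0, 1.0, 2)              # integrate from t=0 to t=1
x1 = odeint(func, x0, t)[-1]                 # terminal state, differentiable
loss = x1.pow(2).mean()                      # any downstream loss
loss.backward()                              # first-order gradients via autograd
```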
no code implementations • 8 May 2021 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
The connection between training deep neural networks (DNNs) and optimal control theory (OCT) has attracted considerable attention as a principled tool for algorithmic design.
no code implementations • 1 Apr 2021 • Ziyi Wang, Oswin So, Jason Gibson, Bogdan Vlahov, Manan S. Gandhi, Guan-Horng Liu, Evangelos A. Theodorou
In this paper, we provide a generalized framework for Variational Inference-Stochastic Optimal Control by using the non-extensive Tsallis divergence.
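For context, one common convention for the non-extensive Tsallis relative entropy between densities p and r is shown below (the exact normalization used in the paper is an assumption here); it recovers the KL divergence in the limit q → 1.

```latex
% One common convention for the Tsallis relative entropy (normalizations vary):
\[
  D_q(p \,\|\, r) \;=\; \frac{1}{q-1}\left(\int p(x)^{q}\, r(x)^{1-q}\, dx \;-\; 1\right),
  \qquad
  \lim_{q \to 1} D_q(p \,\|\, r) \;=\; D_{\mathrm{KL}}(p \,\|\, r).
\]
```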
no code implementations • 17 Jul 2020 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
Connections between Deep Neural Network (DNN) training and optimal control theory have attracted considerable attention as a principled tool for algorithmic design.
no code implementations • ICLR 2021 • Guan-Horng Liu, Tianrong Chen, Evangelos A. Theodorou
The interpretation of Deep Neural Network (DNN) training as an optimal control problem over nonlinear dynamical systems has received considerable attention recently, yet algorithmic development remains relatively limited.
1 code implementation • 28 Aug 2019 • Guan-Horng Liu, Evangelos A. Theodorou
In this article, we provide one possible way to align existing branches of deep learning theory through the lens of dynamical systems and optimal control.
no code implementations • 30 May 2017 • Guan-Horng Liu, Avinash Siravuru, Sai Prabhakar, Manuela Veloso, George Kantor
Multisensory policies are known to enhance both state estimation and target tracking.