no code implementations • 2 Apr 2024 • Jiaming Liang, Yongxin Chen
Finally, we combine this proximal sampling oracle and ASF to obtain a Markov chain Monte Carlo method with non-asymptotic complexity bounds for sampling in semi-smooth and composite settings.
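As a rough illustration of the alternating structure described above, here is a minimal Python sketch; the `rgo` callable, the quadratic toy potential, and all parameter names are illustrative choices, not the paper's.

```python
import numpy as np

def proximal_sampler(rgo, x0, eta, n_iters, rng):
    # Alternating sampling framework (ASF): Gibbs sampling on the
    # augmented target pi(x, y) ~ exp(-f(x) - ||x - y||^2 / (2 * eta)).
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        # Step 1: y | x is exactly Gaussian, N(x, eta * I).
        y = x + np.sqrt(eta) * rng.standard_normal(x.shape)
        # Step 2: x | y via the restricted Gaussian oracle (RGO),
        # i.e., a draw from exp(-f(x) - ||x - y||^2 / (2 * eta)).
        x = rgo(y, eta)
        samples.append(x.copy())
    return np.asarray(samples)

# Toy check: for f(x) = ||x||^2 / 2 the RGO is Gaussian with mean
# y / (1 + eta) and variance eta / (1 + eta), so it can be sampled
# exactly and the chain targets N(0, I).
rng = np.random.default_rng(0)

def rgo_gaussian(y, eta):
    return y / (1 + eta) + np.sqrt(eta / (1 + eta)) * rng.standard_normal(y.shape)

draws = proximal_sampler(rgo_gaussian, np.zeros(2), eta=0.5, n_iters=2000, rng=rng)
```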
1 code implementation • 2 Oct 2023 • Haotian Xue, Chumeng Liang, Xiaoyu Wu, Yongxin Chen
In this work, we present novel findings on attacking latent diffusion models (LDM) and propose new plug-and-play strategies for more effective protection.
no code implementations • 12 Sep 2023 • Alexis M. H. Teter, Yongxin Chen, Abhishek Halder
In this work, we study a priori estimates for the contraction coefficients associated with the convergence of respective Schrödinger systems.
no code implementations • 16 Aug 2023 • Zishun Liu, Yongxin Chen
We consider the online control problem with an unknown linear dynamical system in the presence of adversarial perturbations and adversarial convex loss functions.
no code implementations • 4 Aug 2023 • Qinsheng Zhang, Jiaming Song, Yongxin Chen
By reformulating the differential equations in DMs and capitalizing on the theory of exponential integrators, we propose refined EI solvers that fulfill all the order conditions, which we designate as Refined Exponential Solver (RES).
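For intuition, a first-order exponential integrator (exponential Euler) for a generic semi-linear ODE looks as follows; this is a textbook sketch, not the RES solver itself, and the function and parameter names are ours.

```python
import numpy as np

def exponential_euler(lam, nonlin, x0, t0, t1, n_steps):
    # Integrate dx/dt = lam * x + nonlin(x, t): the stiff linear part
    # is solved exactly via exp(lam * h); the nonlinearity is frozen
    # over each step (first-order exponential integrator).
    h = (t1 - t0) / n_steps
    x, t = np.asarray(x0, dtype=float), t0
    w = (np.exp(lam * h) - 1.0) / lam  # equals h * phi_1(lam * h)
    for _ in range(n_steps):
        x = np.exp(lam * h) * x + w * nonlin(x, t)
        t += h
    return x

# Toy usage: dx/dt = -10 x + sin(t), integrated from t = 0 to 5.
x_final = exponential_euler(-10.0, lambda x, t: np.sin(t), x0=1.0, t0=0.0, t1=5.0, n_steps=50)
```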
1 code implementation • NeurIPS 2023 • Haotian Xue, Alexandre Araujo, Bin Hu, Yongxin Chen
Neural networks are known to be susceptible to adversarial samples: small variations of natural examples crafted to deliberately mislead the models.
no code implementations • CVPR 2023 • Qinsheng Zhang, Jiaming Song, Xun Huang, Yongxin Chen, Ming-Yu Liu
We present DiffCollage, a compositional diffusion model that can generate large content by leveraging diffusion models trained on generating pieces of the large content.
no code implementations • 28 Feb 2023 • Utkarsh A. Mishra, Yongxin Chen
While certain goals can be achieved by picking and placing the objects of interest directly, object reorientation is needed for precise placement in most tasks.
no code implementations • 20 Feb 2023 • Jiaojiao Fan, Bo Yuan, Yongxin Chen
For instance, for strongly log-concave distributions, our method has complexity bound $\tilde{\mathcal{O}}(\kappa d^{1/2})$ without warm start, better than the minimax bound for MALA.
no code implementations • 3 Oct 2022 • Joseph Moyalan, Yongxin Chen, Umesh Vaidya
We provide a convex formulation to the off-road navigation problem by lifting the problem to the density space using the linear Perron-Frobenius (P-F) operator.
no code implementations • 15 Aug 2022 • Rahul Singh, Yongxin Chen
Graph convolutional networks (GCNs) and their variants are designed for unsigned graphs containing only positive links.
1 code implementation • 11 Jun 2022 • Qinsheng Zhang, Molei Tao, Yongxin Chen
In CLD, a diffusion model that augments the diffusion process with velocity, our algorithm achieves an FID score of 2.26 on CIFAR10 with only 50 score function evaluations (NFEs), and an FID score of 2.86 with only 27 NFEs.
no code implementations • 20 May 2022 • Jiaming Liang, Yongxin Chen
This work extends the recent algorithm of Liang and Chen (2021; 2022) for non-smooth/semi-smooth log-concave distributions to the setting with non-convex potentials.
4 code implementations • 29 Apr 2022 • Qinsheng Zhang, Yongxin Chen
Our goal is to develop a fast sampling method for DMs with far fewer steps while retaining high sample quality.
no code implementations • 23 Mar 2022 • Olga Movilla Miangolarra, Amirhossein Taghvaei, Yongxin Chen, Tryphon T. Georgiou
In contrast to the classical concept of a Carnot engine that alternates contact between heat baths of different temperatures, naturally occurring processes usually harvest energy from anisotropy, being exposed simultaneously to chemical and thermal fluctuations of different intensities.
no code implementations • 28 Feb 2022 • Jiaming Liang, Yongxin Chen
Departing from the standard smooth setting, the potentials are only assumed to be weakly smooth or non-smooth, or the summation of multiple such functions.
no code implementations • 13 Feb 2022 • Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono
We study the proximal sampler of Lee, Shen, and Tian (2021) and obtain new convergence guarantees under weaker assumptions than strong log-concavity: namely, our results hold for (1) weakly log-concave targets, and (2) targets satisfying isoperimetric assumptions which allow for non-log-concavity.
1 code implementation • 4 Dec 2021 • Jiaojiao Fan, Qinsheng Zhang, Amirhossein Taghvaei, Yongxin Chen
Wasserstein gradient flow has emerged as a promising approach to solve optimization problems over the space of probability distributions.
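For context, the standard time discretization of a Wasserstein gradient flow is the JKO scheme (given here in its textbook form, not as the paper's specific construction):

$$\rho_{k+1} = \operatorname*{arg\,min}_{\rho}\; F(\rho) + \frac{1}{2h} W_2^2(\rho, \rho_k),$$

where $h > 0$ is the step size; each step decreases the objective $F$ while staying Wasserstein-close to the previous iterate.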
1 code implementation • ICLR 2022 • Qinsheng Zhang, Yongxin Chen
The PIS is built on the Schrödinger bridge problem which aims to recover the most likely evolution of a diffusion process given its initial distribution and terminal distribution.
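In its standard dynamic form (stated here for reference, not quoted from the paper), the Schrödinger bridge problem reads

$$\min_{P}\; \mathrm{KL}(P \,\|\, W) \quad \text{s.t.}\quad P_0 = \mu_0,\ P_T = \mu_T,$$

where $W$ is a reference diffusion (e.g., Wiener) measure and the minimization is over path measures $P$ with the prescribed initial and terminal marginals.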
1 code implementation • NeurIPS 2021 • Qinsheng Zhang, Yongxin Chen
Our method is closely related to normalizing flow and diffusion probabilistic models and can be viewed as a combination of the two.
no code implementations • 9 Oct 2021 • Jiaming Liang, Yongxin Chen
One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential with bounded Lipschitz constant.
no code implementations • 1 Oct 2021 • Jiaojiao Fan, Isabel Haasler, Johan Karlsson, Yongxin Chen
Multi-marginal optimal transport (MOT) is a generalization of optimal transport to multiple marginals.
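In its standard form (for reference), MOT seeks a joint coupling of $m$ prescribed marginals minimizing an integrated cost:

$$\min_{\pi \in \Pi(\mu_1,\dots,\mu_m)} \int c(x_1,\dots,x_m)\, d\pi(x_1,\dots,x_m),$$

where $\Pi(\mu_1,\dots,\mu_m)$ denotes the set of joint distributions with marginals $\mu_1,\dots,\mu_m$; the case $m = 2$ recovers classical optimal transport.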
no code implementations • 24 Jul 2021 • Rahul Singh, Yongxin Chen
We consider inference problems for a class of continuous state collective hidden Markov models, where the data is recorded in aggregate (collective) form generated by a large population of individuals following the same dynamics.
1 code implementation • 7 Jun 2021 • Jiaojiao Fan, Shu Liu, Shaojun Ma, Haomin Zhou, Yongxin Chen
Monge map refers to the optimal transport map between two probability distributions and provides a principled approach to transform one distribution to another.
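For reference, the Monge problem in its standard form is

$$\min_{T:\, T_\#\mu = \nu} \int c(x, T(x))\, d\mu(x),$$

where $T_\#\mu = \nu$ means the map $T$ pushes the source distribution $\mu$ forward to the target $\nu$.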
no code implementations • 5 Feb 2021 • Shu Liu, Shaojun Ma, Yongxin Chen, Hongyuan Zha, Haomin Zhou
We propose a new formulation and learning strategy for computing the Wasserstein geodesic between two probability distributions in high dimensions.
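Such computations typically rest on the Benamou-Brenier dynamic formulation (standard form, stated for context):

$$W_2^2(\mu_0, \mu_1) = \min_{(\rho, v)} \int_0^1 \!\!\int \|v_t(x)\|^2\, \rho_t(x)\, dx\, dt \quad \text{s.t.}\quad \partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0,\ \rho_0 = \mu_0,\ \rho_1 = \mu_1,$$

so that the Wasserstein geodesic is the density path $\rho_t$ attaining the minimum.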
no code implementations • 21 Dec 2020 • Zhuoran Yang, Yufeng Zhang, Yongxin Chen, Zhaoran Wang
Specifically, we prove that moving along the geodesic in the direction of functional gradient with respect to the second-order Wasserstein distance is equivalent to applying a pushforward mapping to a probability distribution, which can be approximated accurately by pushing a set of particles.
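A minimal Python sketch of this particle pushforward follows; the vectorized `velocity` callable stands in for the estimated Wasserstein functional gradient, and the names and toy potential are illustrative, not the paper's.

```python
import numpy as np

def push_particles(X, velocity, step):
    # One explicit pushforward step: the map x -> x + step * velocity(x)
    # pushes the empirical distribution of the particle set X forward,
    # approximating a step along the Wasserstein gradient flow.
    return X + step * velocity(X)

# Toy usage: for F(rho) = E_rho[U] with U(x) = ||x||^2 / 2, the
# Wasserstein gradient is grad U(x) = x, so particles follow -x and
# the empirical distribution contracts toward the minimizer at 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
for _ in range(100):
    X = push_particles(X, lambda X: -X, step=0.05)
```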
no code implementations • NeurIPS 2020 • Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang
Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.
no code implementations • 23 Nov 2020 • Rahul Singh, Qinsheng Zhang, Yongxin Chen
This problem arises when only the population level counts of the number of individuals at each time step are available, from which one seeks to learn the individual hidden Markov model.
no code implementations • 4 Nov 2020 • Qinsheng Zhang, Rahul Singh, Yongxin Chen
We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM).
2 code implementations • 8 Jul 2020 • Jiaojiao Fan, Amirhossein Taghvaei, Yongxin Chen
Wasserstein Barycenter is a principled approach to represent the weighted mean of a given set of probability distributions, utilizing the geometry induced by optimal transport.
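In its standard form (for reference), the Wasserstein barycenter of distributions $\mu_1,\dots,\mu_N$ with weights $\lambda_i$ solves

$$\min_{\nu}\; \sum_{i=1}^{N} \lambda_i\, W_2^2(\nu, \mu_i), \qquad \lambda_i \ge 0,\ \sum_{i=1}^{N} \lambda_i = 1.$$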
no code implementations • 26 Jun 2020 • Rahul Singh, Isabel Haasler, Qinsheng Zhang, Johan Karlsson, Yongxin Chen
We consider incremental inference problems from aggregate data for collective dynamics.
3 code implementations • 25 Jun 2020 • Isabel Haasler, Rahul Singh, Qinsheng Zhang, Johan Karlsson, Yongxin Chen
We study multi-marginal optimal transport problems from a probabilistic graphical model perspective.
no code implementations • 8 Jun 2020 • Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang
We aim to answer the following questions: When the function approximator is a neural network, how does the associated feature representation evolve?
no code implementations • L4DC 2020 • Rahul Singh, Qinsheng Zhang, Yongxin Chen
One major obstacle that precludes the success of reinforcement learning in real-world applications is the lack of robustness, either to model uncertainties or external disturbances, of the trained policies.
no code implementations • 31 Mar 2020 • Rahul Singh, Isabel Haasler, Qinsheng Zhang, Johan Karlsson, Yongxin Chen
Consequently, the celebrated Sinkhorn/iterative scaling algorithm for multi-marginal optimal transport can be leveraged together with the standard belief propagation algorithm to establish an efficient inference scheme which we call Sinkhorn belief propagation (SBP).
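For reference, the basic two-marginal Sinkhorn scaling loop looks as follows; SBP interleaves such scaling updates with belief propagation on the underlying graph, which this minimal sketch does not include.

```python
import numpy as np

def sinkhorn(C, mu, nu, eps, n_iters=500):
    # Entropic OT via Sinkhorn scaling: alternately rescale the rows
    # and columns of the Gibbs kernel K = exp(-C / eps) until the
    # plan diag(u) K diag(v) matches the marginals mu and nu.
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)  # enforce the column marginal
        u = mu / (K @ v)    # enforce the row marginal
    return u[:, None] * K * v[None, :]

# Toy usage on two 3-point marginals.
C = np.array([[0.0, 1.0, 4.0], [1.0, 0.0, 1.0], [4.0, 1.0, 0.0]])
plan = sinkhorn(C, np.full(3, 1 / 3), np.full(3, 1 / 3), eps=0.1)
```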
no code implementations • 8 Jan 2020 • Rahul Singh, Keuntaek Lee, Yongxin Chen
It relies on the key idea of replacing the expected return with the return distribution, which captures the intrinsic randomness of the long term rewards.
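The return distribution satisfies a distributional Bellman equation (standard form, stated for context):

$$Z(s, a) \overset{D}{=} R(s, a) + \gamma\, Z(S', A'), \qquad S' \sim P(\cdot \mid s, a),\ A' \sim \pi(\cdot \mid S'),$$

where $\overset{D}{=}$ denotes equality in distribution, so the algorithm models the full law of $Z$ rather than only its mean $Q(s, a) = \mathbb{E}[Z(s, a)]$.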
no code implementations • NeurIPS 2019 • Zhuoran Yang, Yongxin Chen, Mingyi Hong, Zhaoran Wang
Despite the empirical success of the actor-critic algorithm, its theoretical understanding lags behind.
no code implementations • ICLR 2020 • Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang
We study discrete-time mean-field Markov games with infinite numbers of agents where each agent aims to minimize its ergodic cost.
no code implementations • ICLR 2019 • Songtao Lu, Rahul Singh, Xiangyi Chen, Yongxin Chen, Mingyi Hong
By developing new primal-dual optimization tools, we show that, with a proper stepsize choice, the widely used first-order iterative algorithm in training GANs would in fact converge to a stationary solution with a sublinear rate.
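For context, the min-max objective underlying GAN training (standard form, not the paper's specific parameterization) is

$$\min_{G} \max_{D}\; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],$$

whose first-order primal-dual training dynamics the analysis above concerns.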
no code implementations • 14 Apr 2019 • Yongxin Chen, Tryphon T. Georgiou, Allen R. Tannenbaum
We propose a probabilistic enhancement of standard kernel Support Vector Machines for binary classification, in order to address the case when, along with given data sets, a description of uncertainty (e.g., error bounds) may be available on each datum.
no code implementations • 21 Feb 2019 • Songtao Lu, Ioannis Tsaknakis, Mingyi Hong, Yongxin Chen
In this work, we consider a block-wise one-sided non-convex min-max problem, in which the minimization problem consists of multiple blocks and is non-convex, while the maximization problem is (strongly) concave.
no code implementations • 11 Jan 2019 • Qi Cai, Mingyi Hong, Yongxin Chen, Zhaoran Wang
We study the global convergence of generative adversarial imitation learning for linear quadratic regulators, which is posed as minimax optimization.