Search Results for author: Yongxin Chen

Found 43 papers, 10 papers with code

Proximal Oracles for Optimization and Sampling

no code implementations 2 Apr 2024 Jiaming Liang, Yongxin Chen

Finally, we combine this proximal sampling oracle and ASF to obtain a Markov chain Monte Carlo method with non-asymptotic complexity bounds for sampling in semi-smooth and composite settings.
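
A proximal sampler of this kind alternates two conditional draws, like a two-block Gibbs sweep: given the current iterate x, draw y ~ N(x, ηI), then draw x from the density proportional to exp(-f(x) - ||x - y||²/(2η)) (the restricted Gaussian oracle). A minimal sketch for the 1D Gaussian target exp(-x²/2), where that oracle happens to be Gaussian in closed form; the step size `eta` and step count are illustrative choices, not values from the paper:

```python
import random

def proximal_sampler_gaussian(n_steps=5000, eta=0.5, seed=0):
    """Proximal (Gibbs) sampler for the 1D target pi(x) ∝ exp(-x^2/2).

    Alternates:
      y | x ~ N(x, eta)                          (forward Gaussian step)
      x | y ∝ exp(-x^2/2 - (x - y)^2 / (2*eta))  (restricted Gaussian oracle)
    For this quadratic potential the oracle is N(y/(1+eta), eta/(1+eta)),
    found by completing the square in x.
    """
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, eta ** 0.5)
        mean = y / (1.0 + eta)        # closed-form oracle mean
        var = eta / (1.0 + eta)       # closed-form oracle variance
        x = mean + rng.gauss(0.0, var ** 0.5)
        samples.append(x)
    return samples

samples = proximal_sampler_gaussian()
mean = sum(samples) / len(samples)            # should be near 0
second_moment = sum(s * s for s in samples) / len(samples)  # near 1
```

One can check that N(0, 1) is invariant for this sweep: if x ~ N(0, 1), the marginal of the new x after both draws is again N(0, 1) for any η > 0.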

Toward effective protection against diffusion based mimicry through score distillation

1 code implementation 2 Oct 2023 Haotian Xue, Chumeng Liang, Xiaoyu Wu, Yongxin Chen

In this work, we present novel findings on attacking latent diffusion models (LDM) and propose new plug-and-play strategies for more effective protection.

On the Contraction Coefficient of the Schrödinger Bridge for Stochastic Linear Systems

no code implementations 12 Sep 2023 Alexis M. H. Teter, Yongxin Chen, Abhishek Halder

In this work, we study a priori estimates for the contraction coefficients associated with the convergence of respective Schrödinger systems.

Data-Driven Adversarial Online Control for Unknown Linear Systems

no code implementations 16 Aug 2023 Zishun Liu, Yongxin Chen

We consider the online control problem with an unknown linear dynamical system in the presence of adversarial perturbations and adversarial convex loss functions.

Improved Order Analysis and Design of Exponential Integrator for Diffusion Models Sampling

no code implementations 4 Aug 2023 Qinsheng Zhang, Jiaming Song, Yongxin Chen

By reformulating the differential equations in DMs and capitalizing on the theory of exponential integrators, we propose refined EI solvers that fulfill all the order conditions, which we designate as Refined Exponential Solver (RES).

Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability

1 code implementation NeurIPS 2023 Haotian Xue, Alexandre Araujo, Bin Hu, Yongxin Chen

Neural networks are known to be susceptible to adversarial samples: small variations of natural examples crafted to deliberately mislead the models.

DiffCollage: Parallel Generation of Large Content with Diffusion Models

no code implementations CVPR 2023 Qinsheng Zhang, Jiaming Song, Xun Huang, Yongxin Chen, Ming-Yu Liu

We present DiffCollage, a compositional diffusion model that can generate large content by leveraging diffusion models trained on generating pieces of the large content.

Infinite Image Generation

ReorientDiff: Diffusion Model based Reorientation for Object Manipulation

no code implementations 28 Feb 2023 Utkarsh A. Mishra, Yongxin Chen

While certain goals can be achieved by picking and placing the objects of interest directly, object reorientation is needed for precise placement in most tasks.

Object

Improved dimension dependence of a proximal algorithm for sampling

no code implementations 20 Feb 2023 Jiaojiao Fan, Bo Yuan, Yongxin Chen

For instance, for strongly log-concave distributions, our method has complexity bound $\tilde{\mathcal{O}}(\kappa d^{1/2})$ without warm start, better than the minimax bound for MALA.

Data-Driven Convex Approach to Off-road Navigation via Linear Transfer Operators

no code implementations 3 Oct 2022 Joseph Moyalan, Yongxin Chen, Umesh Vaidya

We provide a convex formulation to the off-road navigation problem by lifting the problem to the density space using the linear Perron-Frobenius (P-F) operator.

Signed Graph Neural Networks: A Frequency Perspective

no code implementations 15 Aug 2022 Rahul Singh, Yongxin Chen

Graph convolutional networks (GCNs) and their variants are designed for unsigned graphs containing only positive links.

Link Sign Prediction Node Classification

gDDIM: Generalized denoising diffusion implicit models

1 code implementation 11 Jun 2022 Qinsheng Zhang, Molei Tao, Yongxin Chen

In CLD, a diffusion model that augments the diffusion process with velocity, our algorithm achieves an FID score of 2.26 on CIFAR10 with only 50 score function evaluations (NFEs), and an FID score of 2.86 with only 27 NFEs.

Denoising

A Proximal Algorithm for Sampling from Non-convex Potentials

no code implementations 20 May 2022 Jiaming Liang, Yongxin Chen

This work extends the recent algorithm in [LiaChe21, LiaChe22] for non-smooth/semi-smooth log-concave distributions to the setting with non-convex potentials.

Fast Sampling of Diffusion Models with Exponential Integrator

4 code implementations 29 Apr 2022 Qinsheng Zhang, Yongxin Chen

Our goal is to develop a fast sampling method for DMs with far fewer steps while retaining high sample quality.
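
Exponential integrators exploit the semi-linear structure dx/dt = a·x + b(x, t) of the sampling ODE: the linear part is solved exactly, and only the remainder is approximated, which permits much larger steps. A minimal exponential-Euler step for a scalar semi-linear ODE, as an illustration of the idea rather than the paper's solver:

```python
import math

def exp_euler_step(x, t, h, a, nonlinear):
    """One exponential-Euler step for dx/dt = a*x + nonlinear(x, t).

    The linear part a*x is integrated exactly via exp(a*h); the nonlinear
    term is frozen at (x, t) over the step, giving
      x_{k+1} = e^{a h} x_k + (e^{a h} - 1)/a * nonlinear(x_k, t_k).
    """
    phi = math.expm1(a * h) / a  # (e^{a h} - 1)/a, numerically stable form
    return math.exp(a * h) * x + phi * nonlinear(x, t)

# For a linear-plus-constant ODE dx/dt = a*x + b the step is exact,
# no matter how large h is:
a, b, x0, h = -2.0, 1.0, 3.0, 0.25
x1 = exp_euler_step(x0, 0.0, h, a, lambda x, t: b)
exact = math.exp(a * h) * x0 + (math.exp(a * h) - 1.0) / a * b
```

The exactness on the linear part is the design point: a plain Euler step incurs O(h²) error even when `nonlinear` is constant, while the exponential step does not.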

Geometry of finite-time thermodynamic cycles with anisotropic thermal fluctuations

no code implementations 23 Mar 2022 Olga Movilla Miangolarra, Amirhossein Taghvaei, Yongxin Chen, Tryphon T. Georgiou

In contrast to the classical concept of a Carnot engine that alternates contact between heat baths of different temperatures, naturally occurring processes usually harvest energy from anisotropy, being exposed simultaneously to chemical and thermal fluctuations of different intensities.

A Proximal Algorithm for Sampling

no code implementations 28 Feb 2022 Jiaming Liang, Yongxin Chen

Departing from the standard smooth setting, the potentials are only assumed to be weakly smooth or non-smooth, or the summation of multiple such functions.

Improved analysis for a proximal algorithm for sampling

no code implementations 13 Feb 2022 Yongxin Chen, Sinho Chewi, Adil Salim, Andre Wibisono

We study the proximal sampler of Lee, Shen, and Tian (2021) and obtain new convergence guarantees under weaker assumptions than strong log-concavity: namely, our results hold for (1) weakly log-concave targets, and (2) targets satisfying isoperimetric assumptions which allow for non-log-concavity.

Variational Wasserstein gradient flow

1 code implementation 4 Dec 2021 Jiaojiao Fan, Qinsheng Zhang, Amirhossein Taghvaei, Yongxin Chen

Wasserstein gradient flow has emerged as a promising approach to solve optimization problems over the space of probability distributions.

Path Integral Sampler: a stochastic control approach for sampling

1 code implementation ICLR 2022 Qinsheng Zhang, Yongxin Chen

The PIS is built on the Schrödinger bridge problem which aims to recover the most likely evolution of a diffusion process given its initial distribution and terminal distribution.

Diffusion Normalizing Flow

1 code implementation NeurIPS 2021 Qinsheng Zhang, Yongxin Chen

Our method is closely related to normalizing flow and diffusion probabilistic models and can be viewed as a combination of the two.

Density Estimation Image Generation

A Proximal Algorithm for Sampling from Non-smooth Potentials

no code implementations 9 Oct 2021 Jiaming Liang, Yongxin Chen

One key contribution of this work is a fast algorithm that realizes the restricted Gaussian oracle for any convex non-smooth potential with bounded Lipschitz constant.

Inference of collective Gaussian hidden Markov models

no code implementations 24 Jul 2021 Rahul Singh, Yongxin Chen

We consider inference problems for a class of continuous state collective hidden Markov models, where the data is recorded in aggregate (collective) form generated by a large population of individuals following the same dynamics.

Neural Monge Map estimation and its applications

1 code implementation 7 Jun 2021 Jiaojiao Fan, Shu Liu, Shaojun Ma, Haomin Zhou, Yongxin Chen

Monge map refers to the optimal transport map between two probability distributions and provides a principled approach to transform one distribution to another.

Image Inpainting Text-to-Image Generation
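
In one dimension the Monge map has a classical closed form, T = F_ν⁻¹ ∘ F_μ: push a point through the source CDF, then through the target quantile function; neural estimators generalize this construction to high dimensions. A sketch for two Gaussians, where the map is known to be affine (the means and standard deviations below are illustrative, not from the paper):

```python
import math

def gaussian_cdf(x, m, s):
    """CDF of N(m, s^2)."""
    return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))

def monge_map_1d(x, cdf_src, quantile_tgt):
    """1D Monge map: T(x) = quantile_tgt(cdf_src(x))."""
    return quantile_tgt(cdf_src(x))

# Source N(0, 1), target N(3, 4); the optimal map is T(x) = 3 + 2x.
m1, s1, m2, s2 = 0.0, 1.0, 3.0, 2.0

def target_quantile(p, lo=-50.0, hi=50.0):
    """Quantile of N(m2, s2^2) by bisection on its CDF."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gaussian_cdf(mid, m2, s2) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 1.5
t_numeric = monge_map_1d(x, lambda u: gaussian_cdf(u, m1, s1), target_quantile)
t_affine = m2 + (s2 / s1) * (x - m1)  # closed-form Gaussian-to-Gaussian map
```

The quantile composition and the affine formula agree, which is a convenient sanity check for any learned map estimator in 1D.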

Learning High Dimensional Wasserstein Geodesics

no code implementations 5 Feb 2021 Shu Liu, Shaojun Ma, Yongxin Chen, Hongyuan Zha, Haomin Zhou

We propose a new formulation and learning strategy for computing the Wasserstein geodesic between two probability distributions in high dimensions.

Vocal Bursts Intensity Prediction

Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization

no code implementations 21 Dec 2020 Zhuoran Yang, Yufeng Zhang, Yongxin Chen, Zhaoran Wang

Specifically, we prove that moving along the geodesic in the direction of functional gradient with respect to the second-order Wasserstein distance is equivalent to applying a pushforward mapping to a probability distribution, which can be approximated accurately by pushing a set of particles.

Generative Adversarial Network Variational Inference

Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

no code implementations NeurIPS 2020 Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang

Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks.

Q-Learning reinforcement-learning +1

Learning Hidden Markov Models from Aggregate Observations

no code implementations 23 Nov 2020 Rahul Singh, Qinsheng Zhang, Yongxin Chen

This problem arises when only the population level counts of the number of individuals at each time step are available, from which one seeks to learn the individual hidden Markov model.

Filtering for Aggregate Hidden Markov Models with Continuous Observations

no code implementations 4 Nov 2020 Qinsheng Zhang, Rahul Singh, Yongxin Chen

We consider a class of filtering problems for large populations where each individual is modeled by the same hidden Markov model (HMM).

Scalable Computations of Wasserstein Barycenter via Input Convex Neural Networks

2 code implementations 8 Jul 2020 Jiaojiao Fan, Amirhossein Taghvaei, Yongxin Chen

Wasserstein Barycenter is a principled approach to represent the weighted mean of a given set of probability distributions, utilizing the geometry induced by optimal transport.
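
In one dimension the Wasserstein-2 barycenter has a closed form: its quantile function is the weighted average of the input quantile functions, so for equal-size empirical distributions one can simply average sorted samples position by position. This is only a low-dimensional illustration of the object being computed, not the paper's input-convex-neural-network method:

```python
def wasserstein_barycenter_1d(samples_list, weights):
    """W2 barycenter of 1D empirical distributions with equal sample counts.

    The barycenter's quantile function is the weighted average of the input
    quantile functions; for sorted equal-size samples, that average is taken
    index by index.
    """
    sorted_lists = [sorted(s) for s in samples_list]
    n = len(sorted_lists[0])
    assert all(len(s) == n for s in sorted_lists)
    return [sum(w * s[i] for w, s in zip(weights, sorted_lists))
            for i in range(n)]

a = [0.0, 1.0, 2.0, 3.0]      # empirical distribution A
b = [10.0, 11.0, 12.0, 13.0]  # A shifted by 10
bary = wasserstein_barycenter_1d([a, b], [0.5, 0.5])
```

For two translates of the same distribution, the equal-weight barycenter is the midpoint translate, which the averaged sorted samples recover exactly.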

Incremental inference of collective graphical models

no code implementations 26 Jun 2020 Rahul Singh, Isabel Haasler, Qinsheng Zhang, Johan Karlsson, Yongxin Chen

We consider incremental inference problems from aggregate data for collective dynamics.

Multi-marginal optimal transport and probabilistic graphical models

3 code implementations 25 Jun 2020 Isabel Haasler, Rahul Singh, Qinsheng Zhang, Johan Karlsson, Yongxin Chen

We study multi-marginal optimal transport problems from a probabilistic graphical model perspective.

Bayesian Inference

Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

no code implementations 8 Jun 2020 Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang

We aim to answer the following questions: When the function approximator is a neural network, how does the associated feature representation evolve?

Q-Learning

Improving Robustness via Risk Averse Distributional Reinforcement Learning

no code implementations L4DC 2020 Rahul Singh, Qinsheng Zhang, Yongxin Chen

One major obstacle that precludes the success of reinforcement learning in real-world applications is the lack of robustness, either to model uncertainties or external disturbances, of the trained policies.

Distributional Reinforcement Learning reinforcement-learning +1

Inference with Aggregate Data: An Optimal Transport Approach

no code implementations 31 Mar 2020 Rahul Singh, Isabel Haasler, Qinsheng Zhang, Johan Karlsson, Yongxin Chen

Consequently, the celebrated Sinkhorn/iterative scaling algorithm for multi-marginal optimal transport can be leveraged together with the standard belief propagation algorithm to establish an efficient inference scheme which we call Sinkhorn belief propagation (SBP).
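
The Sinkhorn/iterative scaling algorithm alternately rescales the rows and columns of the Gibbs kernel K = exp(-C/ε) until the coupling's marginals match the prescribed ones. A minimal two-marginal sketch (the paper's SBP scheme is the multi-marginal extension combined with belief propagation; the cost matrix and ε below are illustrative):

```python
import math

def sinkhorn(cost, mu, nu, eps=0.1, n_iter=500):
    """Entropic optimal transport between histograms mu and nu.

    Alternately rescales rows and columns of K = exp(-cost/eps) so that
    the coupling P_ij = u_i * K_ij * v_j has marginals mu and nu.
    """
    n, m = len(mu), len(nu)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(n_iter):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Uniform marginals with a cost that prefers the diagonal matching.
cost = [[0.0, 1.0], [1.0, 0.0]]
mu, nu = [0.5, 0.5], [0.5, 0.5]
P = sinkhorn(cost, mu, nu)
row_sums = [sum(row) for row in P]
```

At small ε the coupling concentrates on the cheap (diagonal) assignment, approaching the unregularized optimal transport plan.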

Sample-based Distributional Policy Gradient

no code implementations 8 Jan 2020 Rahul Singh, Keuntaek Lee, Yongxin Chen

It relies on the key idea of replacing the expected return with the return distribution, which captures the intrinsic randomness of the long term rewards.

Distributional Reinforcement Learning OpenAI Gym +2

Actor-Critic Provably Finds Nash Equilibria of Linear-Quadratic Mean-Field Games

no code implementations ICLR 2020 Zuyue Fu, Zhuoran Yang, Yongxin Chen, Zhaoran Wang

We study discrete-time mean-field Markov games with infinite numbers of agents where each agent aims to minimize its ergodic cost.

Understand the dynamics of GANs via Primal-Dual Optimization

no code implementations ICLR 2019 Songtao Lu, Rahul Singh, Xiangyi Chen, Yongxin Chen, Mingyi Hong

By developing new primal-dual optimization tools, we show that, with a proper stepsize choice, the widely used first-order iterative algorithm in training GANs would in fact converge to a stationary solution with a sublinear rate.

Generative Adversarial Network Multi-Task Learning

Probabilistic Kernel Support Vector Machines

no code implementations 14 Apr 2019 Yongxin Chen, Tryphon T. Georgiou, Allen R. Tannenbaum

We propose a probabilistic enhancement of standard kernel Support Vector Machines for binary classification, in order to address the case when, along with given data sets, a description of uncertainty (e.g., error bounds) may be available on each datum.

Binary Classification

Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications

no code implementations 21 Feb 2019 Songtao Lu, Ioannis Tsaknakis, Mingyi Hong, Yongxin Chen

In this work, we consider a block-wise one-sided non-convex min-max problem, in which the minimization problem consists of multiple blocks and is non-convex, while the maximization problem is (strongly) concave.

On the Global Convergence of Imitation Learning: A Case for Linear Quadratic Regulator

no code implementations 11 Jan 2019 Qi Cai, Mingyi Hong, Yongxin Chen, Zhaoran Wang

We study the global convergence of generative adversarial imitation learning for linear quadratic regulators, which is posed as minimax optimization.

Imitation Learning reinforcement-learning +1
