Search Results for author: Chongxuan Li

Found 53 papers, 40 papers with code

Understanding and Stabilizing GANs' Training Dynamics Using Control Theory

no code implementations ICML 2020 Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang

There are existing efforts that model the training dynamics of GANs in the parameter space but the analysis cannot directly motivate practically effective stabilizing methods.

CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model

no code implementations 8 Mar 2024 Zhengyi Wang, Yikai Wang, Yifei Chen, Chendong Xiang, Shuo Chen, Dajiang Yu, Chongxuan Li, Hang Su, Jun Zhu

In this work, we present the Convolutional Reconstruction Model (CRM), a high-fidelity feed-forward single image-to-3D generative model.

Image to 3D

Graph Diffusion Policy Optimization

1 code implementation 26 Feb 2024 Yijing Liu, Chao Du, Tianyu Pang, Chongxuan Li, Wei Chen, Min Lin

Recent research has made significant progress in optimizing diffusion models for specific downstream objectives, which is an important pursuit in fields such as graph generation for drug design.

Graph Generation

The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing

no code implementations 2 Nov 2023 Shen Nie, Hanzhong Allan Guo, Cheng Lu, Yuhao Zhou, Chenyu Zheng, Chongxuan Li

We present a unified probabilistic formulation for diffusion-based image editing, where a latent variable is edited in a task-specific manner and generally deviates from the corresponding marginal distribution induced by the original stochastic or ordinary differential equation (SDE or ODE).

Image-to-Image Translation

BayesDiff: Estimating Pixel-wise Uncertainty in Diffusion via Bayesian Inference

1 code implementation 17 Oct 2023 Siqi Kou, Lei Gan, Dequan Wang, Chongxuan Li, Zhijie Deng

In particular, we derive a novel uncertainty iteration principle to characterize the uncertainty dynamics in diffusion, and leverage the last-layer Laplace approximation for efficient Bayesian inference.

Bayesian Inference Image Generation
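
The last-layer Laplace approximation mentioned in this snippet can be sketched generically. The following is a hedged illustration, not the paper's implementation: freeze everything except the final linear head, approximate the posterior over its weights with a Gaussian whose precision is a diagonal empirical-Fisher estimate, and sample from that posterior to obtain predictive uncertainty. All function and variable names are illustrative.

```python
import torch

def fit_last_layer_laplace(last_layer, feature_loader, loss_fn, prior_precision=1.0):
    """Diagonal Laplace posterior over the weights of a final linear layer.

    feature_loader yields (penultimate_features, targets); the rest of the
    network is treated as a fixed feature extractor. Generic sketch only.
    """
    precision = {name: torch.full_like(p, prior_precision)
                 for name, p in last_layer.named_parameters()}
    for features, targets in feature_loader:
        last_layer.zero_grad()
        loss = loss_fn(last_layer(features), targets)
        loss.backward()
        for name, p in last_layer.named_parameters():
            # empirical-Fisher diagonal: accumulate squared gradients
            precision[name] += p.grad.detach() ** 2
    # Posterior ~ N(p_MAP, 1 / precision) elementwise; sampling weights from it
    # propagates uncertainty through the last layer at prediction time.
    return precision
```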

On Memorization in Diffusion Models

2 code implementations 4 Oct 2023 Xiangming Gu, Chao Du, Tianyu Pang, Chongxuan Li, Min Lin, Ye Wang

Looking into this, we first observe that memorization behaviors tend to occur on smaller-sized datasets, which motivates our definition of effective model memorization (EMM), a metric measuring the maximum size of training data at which a learned diffusion model approximates its theoretical optimum.

Denoising Memorization

Inversion-by-Inversion: Exemplar-based Sketch-to-Photo Synthesis via Stochastic Differential Equations without Training

1 code implementation 15 Aug 2023 XiMing Xing, Chuang Wang, Haitao Zhou, Zhihao Hu, Chongxuan Li, Dong Xu, Qian Yu

In the full-control inversion process, we propose an appearance-energy function to control the color and texture of the final generated photo. Importantly, our Inversion-by-Inversion pipeline is training-free and can accept different types of exemplars for color and texture control.

Image Generation

MissDiff: Training Diffusion Models on Tabular Data with Missing Values

no code implementations 2 Jul 2023 Yidong Ouyang, Liyan Xie, Chongxuan Li, Guang Cheng

The diffusion model has shown remarkable performance in modeling data distributions and synthesizing data.

Denoising

ControlVideo: Conditional Control for One-shot Text-driven Video Editing and Beyond

1 code implementation 26 May 2023 Min Zhao, Rongzhen Wang, Fan Bao, Chongxuan Li, Jun Zhu

This paper presents \emph{ControlVideo} for text-driven video editing -- generating a video that aligns with a given text while preserving the structure of the source video.

Text-to-Video Editing Video Editing

On Evaluating Adversarial Robustness of Large Vision-Language Models

1 code implementation NeurIPS 2023 Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, Min Lin

Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented performance in response generation, especially with visual inputs, enabling more creative and adaptable interaction than large language models such as ChatGPT.

Adversarial Robustness multimodal generation +1

ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation

2 code implementations NeurIPS 2023 Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu

In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., $7.5$).

3D Generation Text to 3D
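
For context on the CFG weight mentioned in the snippet above, here is a minimal sketch of standard classifier-free guidance; this is background, not the paper's variational score distillation objective, and the names are illustrative.

```python
import torch

def classifier_free_guidance(eps_uncond: torch.Tensor,
                             eps_cond: torch.Tensor,
                             w: float = 7.5) -> torch.Tensor:
    """Combine the unconditional and conditional noise predictions of a
    diffusion model. w = 7.5 is the common default weight referenced above;
    larger w trades sample diversity for prompt fidelity."""
    return eps_uncond + w * (eps_cond - eps_uncond)
```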

Towards Understanding Generalization of Macro-AUC in Multi-label Learning

1 code implementation 9 May 2023 Guoqiang Wu, Chongxuan Li, Yilong Yin

We theoretically identify a critical factor of the dataset affecting the generalization bounds: \emph{the label-wise class imbalance}.

Generalization Bounds Multi-Label Learning

Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning

3 code implementations 25 Apr 2023 Cheng Lu, Huayu Chen, Jianfei Chen, Hang Su, Chongxuan Li, Jun Zhu

The main challenge for this setting is that the intermediate guidance during the diffusion sampling procedure, which is jointly defined by the sampling distribution and the energy function, is unknown and is hard to estimate.

D4RL Image Generation +1

A Closer Look at Parameter-Efficient Tuning in Diffusion Models

1 code implementation 31 Mar 2023 Chendong Xiang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu

Large-scale diffusion models like Stable Diffusion are powerful and find various real-world applications, while customizing such models by fine-tuning is both memory- and time-inefficient.

Efficient Diffusion Personalization Position

One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale

3 code implementations 12 Mar 2023 Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu

Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model -- perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality.

Text-to-Image Generation

Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels

2 code implementations NeurIPS 2023 Zebin You, Yong Zhong, Fan Bao, Jiacheng Sun, Chongxuan Li, Jun Zhu

In an effort to further advance semi-supervised generative and classification tasks, we propose a simple yet effective training strategy called dual pseudo training (DPT), built upon strong semi-supervised learners and diffusion models.

Classification

Revisiting Discriminative vs. Generative Classifiers: Theory and Implications

1 code implementation 5 Feb 2023 Chenyu Zheng, Guoqiang Wu, Fan Bao, Yue Cao, Chongxuan Li, Jun Zhu

Theoretically, the paper considers the surrogate loss instead of the zero-one loss in analyses and generalizes the classical results from binary cases to multiclass ones.

Few-Shot Learning Image Classification +1

Why Are Conditional Generative Models Better Than Unconditional Ones?

no code implementations 1 Dec 2022 Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu

Extensive empirical evidence demonstrates that conditional generative models are easier to train and perform better than unconditional ones by exploiting the labels of data.

Diffusion Denoising Process for Perceptron Bias in Out-of-distribution Detection

1 code implementation 21 Nov 2022 Luping Liu, Yi Ren, Xize Cheng, Rongjie Huang, Chongxuan Li, Zhou Zhao

In this paper, we introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.

Denoising Out-of-Distribution Detection +1

DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models

1 code implementation 2 Nov 2022 Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu

The commonly-used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps for high-quality samples.

Text-to-Image Generation
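
A minimal, hedged usage sketch (not the authors' reference code) of swapping the default sampler of a text-to-image pipeline for the multistep DPM-Solver++ scheduler shipped with Hugging Face diffusers, so that roughly 20 steps replace the 100 to 250 DDIM steps mentioned above. The model ID, prompt, and step count are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load a Stable Diffusion pipeline (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default scheduler with the DPM-Solver++ multistep solver.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++"
)

# Guided sampling in ~20 steps instead of 100-250.
image = pipe("a photo of an astronaut riding a horse",
             num_inference_steps=20, guidance_scale=7.5).images[0]
```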

Equivariant Energy-Guided SDE for Inverse Molecular Design

2 code implementations 30 Sep 2022 Fan Bao, Min Zhao, Zhongkai Hao, Peiyao Li, Chongxuan Li, Jun Zhu

Inverse molecular design is critical in material science and drug discovery, where the generated molecules should satisfy certain desirable properties.

3D Molecule Generation Drug Discovery

All are Worth Words: A ViT Backbone for Diffusion Models

3 code implementations CVPR 2023 Fan Bao, Shen Nie, Kaiwen Xue, Yue Cao, Chongxuan Li, Hang Su, Jun Zhu

We evaluate U-ViT in unconditional and class-conditional image generation, as well as text-to-image generation tasks, where U-ViT is comparable if not superior to a CNN-based U-Net of a similar size.

Conditional Image Generation Text-to-Image Generation

Deep Generative Modeling on Limited Data with Regularization by Nontransferable Pre-trained Models

1 code implementation 30 Aug 2022 Yong Zhong, Hongtao Liu, Xiaodong Liu, Fan Bao, Weiran Shen, Chongxuan Li

Deep generative models (DGMs) are data-eager because learning a complex model on limited data suffers from a large variance and easily overfits.

EGSDE: Unpaired Image-to-Image Translation via Energy-Guided Stochastic Differential Equations

1 code implementation 14 Jul 2022 Min Zhao, Fan Bao, Chongxuan Li, Jun Zhu

Further, we provide an alternative explanation of the EGSDE as a product of experts, where each of the three experts (corresponding to the SDE and two feature extractors) solely contributes to faithfulness or realism.

Image-to-Image Translation Translation

Fast Lossless Neural Compression with Integer-Only Discrete Flows

1 code implementation 17 Jun 2022 Siyu Wang, Jianfei Chen, Chongxuan Li, Jun Zhu, Bo Zhang

In this work, we propose Integer-only Discrete Flows (IODF), an efficient neural compressor with integer-only arithmetic.

Quantization

Maximum Likelihood Training for Score-Based Diffusion ODEs by High-Order Denoising Score Matching

1 code implementation 16 Jun 2022 Cheng Lu, Kaiwen Zheng, Fan Bao, Jianfei Chen, Chongxuan Li, Jun Zhu

To fill up this gap, we show that the negative likelihood of the ODE can be bounded by controlling the first, second, and third-order score matching errors; and we further present a novel high-order denoising score matching method to enable maximum likelihood training of score-based diffusion ODEs.

Denoising
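
As background for the snippet above, here is a minimal sketch of the standard first-order denoising score matching objective that the paper's high-order method builds on; the function and variable names are illustrative, and this is not the authors' code or their high-order estimator.

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma):
    """First-order denoising score matching: train score_net so that
    score_net(x + sigma * eps, sigma) approximates -eps / sigma for
    Gaussian noise eps (the score of the perturbed data distribution)."""
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    target = -eps / sigma
    pred = score_net(x_noisy, sigma)
    # Sum the squared error over all non-batch dimensions, then average.
    return ((pred - target) ** 2).sum(dim=tuple(range(1, x.dim()))).mean()
```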

Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models

1 code implementation 15 Jun 2022 Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu, Bo Zhang

Thus, the generation performance on a subset of timesteps is crucial, which is greatly influenced by the covariance design in DPMs.

Computational Efficiency

Memory Replay with Data Compression for Continual Learning

1 code implementation ICLR 2022 Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu

In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase their amount that can be stored in the memory buffer.

Autonomous Driving Class Incremental Learning +5
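
The storage trade-off described above can be illustrated with a small, hedged sketch (not the authors' code): keeping JPEG-compressed bytes instead of raw images lets a fixed byte budget hold many more old samples in the replay buffer. The class name and quality setting are illustrative.

```python
import io
from PIL import Image

class CompressedReplayBuffer:
    """Replay buffer that stores JPEG bytes rather than raw tensors."""

    def __init__(self, byte_budget: int, quality: int = 75):
        self.byte_budget = byte_budget
        self.quality = quality      # JPEG quality controls quality-vs-count trade-off
        self.items = []             # list of (jpeg_bytes, label)
        self.used = 0

    def add(self, img: Image.Image, label: int) -> bool:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=self.quality)
        data = buf.getvalue()
        if self.used + len(data) > self.byte_budget:
            return False            # buffer full under this byte budget
        self.items.append((data, label))
        self.used += len(data)
        return True

    def sample(self, idx: int):
        data, label = self.items[idx]
        return Image.open(io.BytesIO(data)), label
```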

Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models

2 code implementations ICLR 2022 Fan Bao, Chongxuan Li, Jun Zhu, Bo Zhang

In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function.

Stability and Generalization of Bilevel Programming in Hyperparameter Optimization

1 code implementation NeurIPS 2021 Fan Bao, Guoqiang Wu, Chongxuan Li, Jun Zhu, Bo Zhang

Our results can explain some mysterious behaviours of the bilevel programming in practice, for instance, overfitting to the validation set.

Hyperparameter Optimization

Rethinking and Reweighting the Univariate Losses for Multi-Label Ranking: Consistency and Generalization

no code implementations NeurIPS 2021 Guoqiang Wu, Chongxuan Li, Kun Xu, Jun Zhu

Our results show that learning algorithms with the consistent univariate loss have an error bound of $O(c)$ ($c$ is the number of labels), while algorithms with the inconsistent pairwise loss depend on $O(\sqrt{c})$ as shown in prior work.

Computational Efficiency Multi-Label Classification

MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering

1 code implementation ICLR 2021 Tsung Wei Tsai, Chongxuan Li, Jun Zhu

We present Mixture of Contrastive Experts (MiCE), a unified probabilistic clustering framework that simultaneously exploits the discriminative representations learned by contrastive learning and the semantic structures captured by a latent mixture model.

Clustering Contrastive Learning +1

Implicit Normalizing Flows

1 code implementation ICLR 2021 Cheng Lu, Jianfei Chen, Chongxuan Li, Qiuhao Wang, Jun Zhu

Through theoretical analysis, we show that the function space of ImpFlow is strictly richer than that of ResFlows.

Relaxed Conditional Image Transfer for Semi-supervised Domain Adaptation

no code implementations 5 Jan 2021 Qijun Luo, Zhili Liu, Lanqing Hong, Chongxuan Li, Kuo Yang, Liyuan Wang, Fengwei Zhou, Guilin Li, Zhenguo Li, Jun Zhu

Semi-supervised domain adaptation (SSDA), which aims to learn models in a partially labeled target domain with the assistance of the fully labeled source domain, has attracted increasing attention in recent years.

Domain Adaptation Semi-supervised Domain Adaptation

Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models

1 code implementation NeurIPS Workshop ICBINB 2020 Fan Bao, Kun Xu, Chongxuan Li, Lanqing Hong, Jun Zhu, Bo Zhang

The learning and evaluation of energy-based latent variable models (EBLVMs) without any structural assumptions are highly challenging, because the true posteriors and the partition functions in such models are generally intractable.

Bi-level Score Matching for Learning Energy-based Latent Variable Models

1 code implementation NeurIPS 2020 Fan Bao, Chongxuan Li, Kun Xu, Hang Su, Jun Zhu, Bo Zhang

This paper presents a bi-level score matching (BiSM) method to learn EBLVMs with general structures by reformulating SM as a bi-level optimization problem.

Rolling Shutter Correction Stochastic Optimization

Efficient Learning of Generative Models via Finite-Difference Score Matching

1 code implementation NeurIPS 2020 Tianyu Pang, Kun Xu, Chongxuan Li, Yang Song, Stefano Ermon, Jun Zhu

Several machine learning applications involve the optimization of higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive with respect to memory and computation even with automatic differentiation.
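
As a hedged illustration of the general idea, an exact higher-order derivative can be replaced by a finite-difference estimate that needs only forward evaluations, avoiding nested automatic differentiation. This is the textbook central-difference directional derivative, not necessarily the estimator used in the paper.

```python
import torch

def directional_grad_fd(f, x, v, eps=1e-3):
    """Approximate the directional derivative grad(f)(x) . v, i.e.
    (d/dt) f(x + t v) at t = 0, via a central difference instead of autograd."""
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)
```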

Triple Generative Adversarial Networks

1 code implementation 20 Dec 2019 Chongxuan Li, Kun Xu, Jiashuo Liu, Jun Zhu, Bo Zhang

It is formulated as a three-player minimax game consisting of a generator, a classifier and a discriminator, and therefore is referred to as Triple Generative Adversarial Network (Triple-GAN).

Classification Conditional Image Generation +4

Understanding and Stabilizing GANs' Training Dynamics with Control Theory

1 code implementation 29 Sep 2019 Kun Xu, Chongxuan Li, Jun Zhu, Bo Zhang

There are existing efforts that model the training dynamics of GANs in the parameter space but the analysis cannot directly motivate practically effective stabilizing methods.

Ranked #37 on Image Generation on CIFAR-10 (Inception score metric)

Image Generation L2 Regularization

Countering Noisy Labels By Learning From Auxiliary Clean Labels

no code implementations 23 May 2019 Tsung Wei Tsai, Chongxuan Li, Jun Zhu

We consider the learning from noisy labels (NL) problem which emerges in many real-world applications.

To Relieve Your Headache of Training an MRF, Take AdVIL

no code implementations ICLR 2020 Chongxuan Li, Chao Du, Kun Xu, Max Welling, Jun Zhu, Bo Zhang

We propose a black-box algorithm called {\it Adversarial Variational Inference and Learning} (AdVIL) to perform inference and learning on a general Markov random field (MRF).

Variational Inference

Graphical Generative Adversarial Networks

1 code implementation NeurIPS 2018 Chongxuan Li, Max Welling, Jun Zhu, Bo Zhang

We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model structured data.

Population Matching Discrepancy and Applications in Deep Learning

no code implementations NeurIPS 2017 Jianfei Chen, Chongxuan Li, Yizhong Ru, Jun Zhu

In this paper, we propose population matching discrepancy (PMD) for estimating the distribution distance based on samples, as well as an algorithm to learn the parameters of the distributions using PMD as an objective.

Domain Adaptation

Triple Generative Adversarial Nets

1 code implementation NeurIPS 2017 Chongxuan Li, Kun Xu, Jun Zhu, Bo Zhang

Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL).

Image Generation

Max-Margin Deep Generative Models for (Semi-)Supervised Learning

1 code implementation 22 Nov 2016 Chongxuan Li, Jun Zhu, Bo Zhang

Deep generative models (DGMs) are effective on learning multilayered representations of complex data and performing inference of input data by exploring the generative ability.

Missing Labels

Towards Better Analysis of Deep Convolutional Neural Networks

no code implementations 24 Apr 2016 Mengchen Liu, Jiaxin Shi, Zhen Li, Chongxuan Li, Jun Zhu, Shixia Liu

Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification.

Image Classification

Learning to Generate with Memory

1 code implementation 24 Feb 2016 Chongxuan Li, Jun Zhu, Bo Zhang

Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data.

Density Estimation Image Generation +2

Max-margin Deep Generative Models

2 code implementations NeurIPS 2015 Chongxuan Li, Jun Zhu, Tianlin Shi, Bo Zhang

Deep generative models (DGMs) are effective on learning multilayered representations of complex data and performing inference of input data by exploring the generative ability.
