Search Results for author: Weijian Luo

Found 8 papers, 2 papers with code

Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness

no code implementations • 28 Oct 2023 • Boya Zhang, Weijian Luo, Zhihua Zhang

Based on our findings, we propose Purify++, a new diffusion purification algorithm that achieves state-of-the-art purification performance against several adversarial attacks.
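For intuition, the recipe that diffusion purification builds on fits in a few lines: diffuse the (possibly adversarial) input forward for a short time, then run the reverse diffusion back to a clean sample. The sketch below is a minimal illustration under a toy VP schedule with beta(t) = 1; the `score_model(x, t)` callable, the schedule, and the step sizes are all assumptions, and Purify++'s actual contributions (stronger diffusion models and explicit control of randomness in both passes) are not captured here.

```python
import torch

def purify(x, score_model, t_star=0.3, n_steps=30):
    """Diffuse x forward to time t_star under a toy VP schedule, then denoise."""
    alpha_bar = torch.exp(torch.tensor(-t_star))          # alpha_bar(t) = e^{-t} for beta = 1
    x_t = alpha_bar.sqrt() * x + (1.0 - alpha_bar).sqrt() * torch.randn_like(x)
    # Reverse-time Euler-Maruyama on the VP reverse SDE
    dt = t_star / n_steps
    t = t_star
    for _ in range(n_steps):
        score = score_model(x_t, t)
        x_t = x_t + (0.5 * x_t + score) * dt + dt ** 0.5 * torch.randn_like(x_t)
        t -= dt
    return x_t

# Toy usage with a stand-in score model (a real setup would load a pretrained one)
dummy_score = lambda x, t: -x                             # exact score of a standard Gaussian
x_purified = purify(torch.randn(4, 3, 32, 32), dummy_score)
```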

SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models

1 code implementation • NeurIPS 2023 • Shuchen Xue, Mingyang Yi, Weijian Luo, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, Zhi-Ming Ma

Based on our analysis, we propose SA-Solver, an improved and efficient stochastic Adams method for solving diffusion SDEs to generate high-quality data.

Image Generation
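The defining trick of an Adams-type (linear multistep) stochastic solver is to reuse past drift evaluations so each step costs only one new model call. The sketch below uses plain 2-step Adams-Bashforth weights with an Euler-Maruyama noise term on a toy SDE; SA-Solver's actual predictor-corrector coefficients and noise-scale analysis differ, and the hypothetical `drift(x, t)` stands in for the reverse-SDE drift built from a pretrained score model.

```python
import torch

def sa_like_step(x, t, dt, drift, prev_drift):
    """One multistep update: 2-step Adams-Bashforth drift + Gaussian noise."""
    d = drift(x, t)
    x_next = x + dt * (1.5 * d - 0.5 * prev_drift)           # reuse the last drift eval
    x_next = x_next + abs(dt) ** 0.5 * torch.randn_like(x)   # unit-diffusion noise term
    return x_next, d

# Toy usage: integrate dx = -x dt + dw from t=1 down to t=0
drift = lambda x, t: -x
x, t, dt = torch.randn(8, 2), 1.0, -0.05
prev = drift(x, t)                                           # bootstrap the history with Euler
for _ in range(20):
    x, prev = sa_like_step(x, t, dt, drift, prev)
    t += dt
```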

Training Energy-Based Models with Diffusion Contrastive Divergences

no code implementations • 4 Jul 2023 • Weijian Luo, Hao Jiang, Tianyang Hu, Jiacheng Sun, Zhenguo Li, Zhihua Zhang

In image generation experiments, the proposed DCD is able to train an energy-based model to generate the CelebA $32\times 32$ dataset with quality comparable to existing EBMs.

Image Denoising · Image Generation
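For context, DCD generalizes classic contrastive divergence (CD), whose basic form is easy to sketch: lower the energy on data and raise it on approximate model samples drawn by a short Langevin chain. The code below is that classic baseline, not the paper's diffusion-based divergences; `energy_net` is a hypothetical module mapping a batch to per-sample energies.

```python
import torch

def langevin_negatives(energy_net, x, n_steps=20, step=0.01):
    """Draw approximate model samples via unadjusted Langevin dynamics."""
    x = x.detach().requires_grad_(True)
    for _ in range(n_steps):
        grad, = torch.autograd.grad(energy_net(x).sum(), x)
        x = x - step * grad + (2 * step) ** 0.5 * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()

def cd_loss(energy_net, x_data):
    """Classic CD objective: push energy down on data, up on model samples."""
    x_neg = langevin_negatives(energy_net, torch.randn_like(x_data))
    return energy_net(x_data).mean() - energy_net(x_neg).mean()
```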

Entropy-based Training Methods for Scalable Neural Implicit Sampler

no code implementations • 8 Jun 2023 • Weijian Luo, Boya Zhang, Zhihua Zhang

These benchmarks include sampling from 2D targets, Bayesian inference, and sampling from high-dimensional energy-based models (EBMs).

Bayesian Inference
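The underlying objective is compact: train a pushforward sampler g(z) to minimize KL(q||p) = E[U(g(z))] - H(q) for a target density proportional to exp(-U), where the entropy H(q) is the hard part for implicit samplers. The toy sketch below substitutes a crude nearest-neighbor entropy surrogate just to make the objective concrete; the paper's entropy-based training methods estimate this term far more scalably, so every detail here is an illustrative assumption.

```python
import torch

def nn_entropy_surrogate(x):
    # Kozachenko-Leonenko-style term: mean log distance to the nearest neighbor
    d = torch.cdist(x, x)
    d = d + torch.eye(len(x)) * 1e9               # mask self-distances
    return torch.log(d.min(dim=1).values + 1e-12).mean()

def sampler_loss(g, U, n=256, z_dim=2):
    x = g(torch.randn(n, z_dim))                  # implicit sampler: pushforward of noise
    return U(x).mean() - nn_entropy_surrogate(x)  # ~ KL(q || exp(-U)/Z) up to constants

# Toy usage: fit a small net to sample a standard Gaussian, U(x) = |x|^2 / 2
g = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(g.parameters(), lr=1e-3)
for _ in range(100):
    loss = sampler_loss(g, lambda x: 0.5 * (x ** 2).sum(dim=1))
    opt.zero_grad(); loss.backward(); opt.step()
```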

Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models

1 code implementation • NeurIPS 2023 • Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, Zhihua Zhang

To demonstrate the effectiveness and universality of Diff-Instruct, we consider two scenarios: distilling pre-trained diffusion models and refining existing GAN models.
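The distillation gradient used by methods in this family can be sketched compactly: diffuse a one-step generator's samples and push them along the difference between an auxiliary score model (fit to the generator's own samples) and the teacher's score. The code below is a rough, unweighted version of that idea; `teacher_score` and `fake_score` are hypothetical callables, the noise schedule is a toy one, and the paper's Integral KL weighting over time is omitted.

```python
import torch

def generator_loss(g, z, teacher_score, fake_score):
    x0 = g(z)                                   # one-step generation
    t = torch.rand(())                          # random diffusion time in (0, 1)
    a = torch.exp(-0.5 * t)                     # toy VP schedule: mean coefficient
    x_t = a * x0 + (1 - a ** 2).sqrt() * torch.randn_like(x0)
    # Gradient direction at x_t: (fake score - teacher score), with stop-grad,
    # where fake_score is alternately refit on the generator's own samples.
    grad = (fake_score(x_t, t) - teacher_score(x_t, t)).detach()
    return (grad * x_t).sum()                   # d/dtheta yields grad . dx_t/dtheta
```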

A Comprehensive Survey on Knowledge Distillation of Diffusion Models

no code implementations • 9 Apr 2023 • Weijian Luo

Our objective is to provide a comprehensible overview of the modern approaches for distilling DMs, starting with an introduction to DMs and a discussion of the challenges involved in distilling them into neural vector fields.

Knowledge Distillation

ABM: an automatic supervised feature engineering method for loss based models based on group and fused lasso

no code implementations • 22 Sep 2020 • Weijian Luo, Yongxian Long

A vital step in solving classification or regression problems is to apply feature engineering and variable selection to the data before feeding it into a model. One of the most popular feature engineering methods is to discretize a continuous variable at a set of cutting points, a procedure referred to as binning. Good cutting points matter for model performance, because a well-chosen binning can discard noisy variance within a continuous range while preserving useful level information through ordered encodings. However, to the best of our knowledge, cutting points are mostly selected using a researcher's domain knowledge or naive methods such as equal-width or equal-frequency cutting. In this paper we propose an end-to-end supervised cutting-point selection method based on the group and fused lasso, which also performs automatic variable selection. We name our method ABM (automatic binning machine).

Feature Engineering · Variable Selection
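The mechanism behind this style of supervised binning is straightforward to sketch: one-hot encode many fine-grained candidate bins, then let a fused-lasso penalty |w_j - w_{j-1}| merge adjacent bins (recovering cut points) while a group-lasso penalty on the feature's whole weight vector can drop the feature entirely. The toy below optimizes the non-smooth penalties with plain subgradient descent for brevity; a faithful implementation would use proximal operators, and all hyperparameters here are illustrative.

```python
import torch

n, n_bins = 1000, 20
x = torch.rand(n)                                 # one continuous feature
y = (x > 0.5).float() + 0.1 * torch.randn(n)      # true signal: a single cut at 0.5

# Fine-grained equal-width candidate bins, one-hot encoded
bins = torch.bucketize(x, torch.linspace(0, 1, n_bins + 1)[1:-1])
onehot = torch.nn.functional.one_hot(bins, n_bins).float()

w = torch.zeros(n_bins, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
lam_fuse, lam_group = 0.01, 0.001
for _ in range(500):
    pred = onehot @ w
    loss = ((pred - y) ** 2).mean() \
         + lam_fuse * (w[1:] - w[:-1]).abs().sum() \
         + lam_group * w.norm()
    opt.zero_grad(); loss.backward(); opt.step()
# Adjacent bins with (near-)equal weights have effectively been merged,
# so the remaining jumps in w mark the learned cut points.
```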
