no code implementations • 27 Nov 2023 • Yuxuan Duan, Jianfu Zhang, Liqing Zhang
Dataset distillation (DD) is an emerging research area that aims to alleviate the heavy computational load of training models on large datasets.
1 code implementation • 19 Aug 2023 • Bo Zhang, Yuxuan Duan, Jun Lan, Yan Hong, Huijia Zhu, Weiqiang Wang, Li Niu
To address these challenges, we propose a controllable image composition method that unifies four tasks in one diffusion model: image blending, image harmonization, view synthesis, and generative composition.
1 code implementation • 11 May 2023 • Yuxuan Duan, Li Niu, Yan Hong, Liqing Zhang
In this work, we introduce WeditGAN, which realizes model transfer by editing the intermediate latent codes $w$ in StyleGANs with learned constant offsets ($\Delta w$), discovering and constructing target latent spaces by simply relocating the distribution of the source latent spaces.
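The core idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the latent dimension, the random stand-ins for StyleGAN latent codes, and the offset values are all illustrative assumptions; the point is only that adding one learned constant offset to every sample relocates the whole source latent distribution.

```python
import numpy as np

# Hypothetical sketch of latent-space relocation, assuming StyleGAN-style
# intermediate latent codes w of dimension 512. In WeditGAN the offset
# delta_w is learned; here it is a fixed vector for illustration.
rng = np.random.default_rng(0)
latent_dim = 512

# Stand-ins for source-domain latent codes w produced by the mapping network.
w_source = rng.normal(loc=0.0, scale=1.0, size=(1000, latent_dim))

# A single constant offset shared by all samples (learned in the real method).
delta_w = rng.normal(loc=0.5, scale=0.1, size=(latent_dim,))

# Model transfer amounts to shifting every source code by the same offset.
w_target = w_source + delta_w

# The distribution keeps its shape; only its mean moves by exactly delta_w.
mean_shift = w_target.mean(axis=0) - w_source.mean(axis=0)
print(np.allclose(mean_shift, delta_w))
```

Because the offset is constant across samples, the target latent space is simply the source distribution translated by $\Delta w$, which is why the method needs no per-image optimization at transfer time.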
no code implementations • 30 Mar 2023 • Chuer Yu, Xuhong Zhang, Yuxuan Duan, Senbo Yan, Zonghui Wang, Yang Xiang, Shouling Ji, Wenzhi Chen
We then visualize the identity loss between the test image and the reference image using the image differences of the aligned pairs, and design a custom metric to quantify this loss.
no code implementations • 23 Mar 2023 • Yuxuan Duan, Xuhong Zhang, Chuer Yu, Zonghui Wang, Shouling Ji, Wenzhi Chen
We capture this property through the confusion of a face identification model, measuring confusion via the maximum value of the model's output probability distribution.
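A minimal sketch of that measurement, assuming the identification model exposes raw logits over identity classes (the logit values and function names below are illustrative, not from the paper's code): a confident model concentrates probability mass on one identity, so the maximum softmax probability is high; a confused model spreads mass across identities, so the maximum drops toward uniform.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def max_probability(logits):
    """Maximum value of the output probability distribution: high when the
    model commits to one identity, near 1/num_classes when it is confused."""
    return softmax(logits).max()

confident_logits = np.array([8.0, 0.1, 0.2, 0.1])  # one identity dominates
confused_logits = np.array([1.0, 1.1, 0.9, 1.0])   # mass spread across identities

print(max_probability(confident_logits) > max_probability(confused_logits))
```

Under this reading, a lower maximum probability indicates greater confusion, with 1/num_classes (a uniform distribution) as the floor.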
1 code implementation • 4 Mar 2023 • Yuxuan Duan, Yan Hong, Li Niu, Liqing Zhang
First, we train a data-efficient StyleGAN2 on defect-free images as the backbone.