Single-Image Generation
8 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Improved Techniques for Training Single-Image GANs
Recently there has been interest in the potential of learning generative models from a single image, as opposed to from a large dataset.
PetsGAN: Rethinking Priors for Single Image Generation
Moreover, we apply our method to other image-manipulation tasks (e.g., style transfer, harmonization), and the results further demonstrate the effectiveness and efficiency of our method.
Meta Internal Learning
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
A Patch-Based Algorithm for Diverse and High Fidelity Single Image Generation
However, Shaham et al. [1] recently proposed the SinGAN method, which achieves this generation using a single image example.
SinDDM: A Single Image Denoising Diffusion Model
Here, we introduce a framework for training a DDM on a single image.
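The core of any DDM is a forward process that gradually corrupts an image with Gaussian noise under a variance schedule. As a minimal NumPy sketch of that noising step (the schedule and function names here are illustrative, not SinDDM's actual implementation):

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule; alpha_bar[t] is the cumulative signal fraction at step t."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0): scaled clean image plus Gaussian noise."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((32, 32, 3))        # stand-in for (a crop of) the single training image
alpha_bar = make_schedule()
xt, eps = q_sample(x0, 500, alpha_bar, rng)  # noisy version of x0 at step t=500
```

In the single-image setting, the denoiser is trained on noisy versions (and crops/scales) of this one image rather than a large dataset; the noising step itself is unchanged.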
Towards Smooth Video Composition
Video generation requires synthesizing consistent and persistent frames with dynamic content over time.
AutoDiffusion: Training-Free Optimization of Time Steps and Architectures for Automated Diffusion Model Acceleration
Therefore, we propose to search for the optimal time-step sequence and compressed model architecture in a unified framework, achieving effective image generation for diffusion models without any further training.
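Training-free search over time-step sequences can be illustrated with a simple random search: sample candidate k-step subsets of the full schedule and keep the one a quality proxy rates best. This is a hedged sketch only; the `toy_score` proxy and function names are hypothetical stand-ins, and AutoDiffusion itself uses a more sophisticated search and scoring metric.

```python
import random

def search_timesteps(T, k, score_fn, n_trials=200, seed=0):
    """Training-free search: sample candidate k-step subsets of range(T)
    and keep the one the proxy score rates best (higher is better)."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        cand = sorted(rng.sample(range(T), k))
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

def toy_score(steps):
    """Toy proxy favoring evenly spaced steps; a real system would
    score samples generated with this schedule (e.g., an FID-like metric)."""
    gaps = [b - a for a, b in zip(steps, steps[1:])]
    return -max(gaps) if gaps else 0.0

best, score = search_timesteps(T=1000, k=10, score_fn=toy_score)
```

Because no gradient updates are involved, each candidate costs only a few sampling runs to evaluate, which is what makes such a search "training-free".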
Latent Feature and Attention Dual Erasure Attack against Multi-View Diffusion Models for 3D Assets Protection
This paper is the first to address the intellectual property infringement issue arising from MVDMs.