Appearance Transfer
12 papers with code • 0 benchmarks • 0 datasets
The task of transferring the visual appearance of one object to another while preserving the target's underlying structure.
Benchmarks
These leaderboards are used to track progress in Appearance Transfer. No evaluation results have been reported for this task yet.
Most implemented papers
Unsupervised Part-Based Disentangling of Object Shape and Appearance
Large intra-class variation results from changes in multiple object characteristics; this work disentangles two of them, object shape and appearance, without supervision.
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
In this paper, we propose to use a 3D body mesh recovery module to disentangle pose and shape, which models not only joint locations and rotations but also the personalized body shape.
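As a rough illustration of this pose/shape split, here is a minimal sketch built on the real `smplx` body-model library; the exact parameter split (shape betas kept from the source, axis-angle pose taken from a reference) is our assumption about how such a mesh recovery module is typically used, not the paper's code.

```python
# Minimal pose/shape disentangling sketch with an SMPL-style body model.
# Assumes SMPL model files are available under "models/"; the split below
# (source betas + reference pose) is an illustrative assumption.
import torch
import smplx

body_model = smplx.create(model_path="models", model_type="smpl")

def swap_pose_keep_shape(src_betas, ref_pose):
    """Re-pose the source identity: keep its shape (betas, shape (B, 10))
    and take the reference pose (axis-angle, shape (B, 72))."""
    out = body_model(
        betas=src_betas,
        global_orient=ref_pose[:, :3],   # root rotation
        body_pose=ref_pose[:, 3:],       # remaining 23 joint rotations
    )
    return out.vertices  # mesh that can drive a warping/rendering stage
```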
Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis
Also, we build a new dataset, the iPER dataset, for evaluating human motion imitation, appearance transfer, and novel view synthesis.
Neural Crossbreed: Neural Based Image Metamorphosis
We propose Neural Crossbreed, a feed-forward neural network that can learn a semantic change of input images in a latent space to create the morphing effect.
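A minimal sketch of the underlying idea, morphing by blending in a learned latent space; `encoder` and `decoder` are hypothetical stand-ins for pretrained modules, and Neural Crossbreed itself learns the semantic change with a feed-forward network rather than this naive linear blend.

```python
# Latent-space morphing sketch: encode both images, interpolate the codes,
# decode each blend into a frame of the morph sequence.
import torch

def morph(encoder, decoder, img_a, img_b, steps=8):
    za, zb = encoder(img_a), encoder(img_b)
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * za + t * zb   # linear blend in latent space
        frames.append(decoder(z))
    return frames
```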
Few-Shot Human Motion Transfer by Personalized Geometry and Texture Modeling
We present a new method for few-shot human motion transfer that achieves realistic human image generation with only a small number of appearance inputs.
Splicing ViT Features for Semantic Appearance Transfer
Specifically, our goal is to generate an image in which objects in a source structure image are "painted" with the visual appearance of their semantically related objects in a target appearance image.
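The published method splices features of a pretrained DINO-ViT, capturing structure through the self-similarity of keys and appearance through the global [CLS] token. A hedged sketch of such an objective follows; `vit_keys` and `vit_cls` are hypothetical helpers for extracting those features, and the unweighted loss sum is a simplification.

```python
# Sketch of a structure-plus-appearance objective over ViT features.
# vit_keys(img) -> (tokens, dim) keys of a deep self-attention layer;
# vit_cls(img) -> [CLS] token of the last layer. Both are assumed helpers.
import torch
import torch.nn.functional as F

def self_similarity(keys):            # keys: (tokens, dim)
    k = F.normalize(keys, dim=-1)
    return k @ k.t()                  # cosine self-similarity of patch tokens

def splicing_loss(gen, structure_img, appearance_img, vit_keys, vit_cls):
    l_structure = F.mse_loss(self_similarity(vit_keys(gen)),
                             self_similarity(vit_keys(structure_img)))
    l_appearance = F.mse_loss(vit_cls(gen), vit_cls(appearance_img))
    return l_structure + l_appearance
```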
Representation Learning for Visual Object Tracking by Masked Appearance Transfer
For the template, however, the decoder is made to reconstruct the target appearance within the search region.
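A hedged sketch of this masked-reconstruction setup; the MAE-style random masking and the `encoder.tokenize` helper are assumptions about the pipeline, not the paper's exact architecture.

```python
# Masked appearance transfer sketch for tracking: template and (mostly
# masked) search tokens are encoded jointly, then decoded for reconstruction.
import torch

def random_mask(tokens, ratio):
    """Keep a random subset of tokens (MAE-style, shared across the batch)."""
    n = tokens.shape[1]
    keep = torch.randperm(n)[: int(n * (1 - ratio))]
    return tokens[:, keep], keep

def masked_appearance_transfer(encoder, decoder, template, search, ratio=0.75):
    t_tok = encoder.tokenize(template)       # template patch tokens
    s_tok = encoder.tokenize(search)         # search-region patch tokens
    s_vis, keep = random_mask(s_tok, ratio)  # drop most search tokens
    latent = encoder(torch.cat([t_tok, s_vis], dim=1))
    recon = decoder(latent)
    # Search tokens learn to reconstruct the search image, while template
    # tokens learn to reconstruct the target's appearance inside the search
    # region, tying the two representations together.
    return recon, keep
```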
Seeing is not Believing: An Identity Hider for Human Vision Privacy Protection
Concretely, the identity hider benefits from two specially designed modules; the first, a virtual face generation module, generates a virtual face with a new appearance by manipulating the latent space of StyleGAN2.
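A minimal sketch of that latent manipulation, assuming a pretrained StyleGAN2 `generator` exposing a W-space `synthesis` network; the `identity_direction` editing vector and its strength are hypothetical.

```python
# W-space editing sketch: shift the latent away from the real identity
# along a learned direction, then synthesize the new-appearance face.
import torch

def virtual_face(generator, w, identity_direction, strength=3.0):
    """w: W-space latent(s) of the real face; identity_direction: an
    assumed editing vector that changes identity-related attributes."""
    w_edit = w + strength * identity_direction
    return generator.synthesis(w_edit)
```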
Fine-grained Appearance Transfer with Diffusion Models
A pivotal aspect of our approach is the strategic use of the predicted $x_0$ space of diffusion models, operating within the latent space of the diffusion process.
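For reference, the predicted $x_0$ comes from the standard DDPM identity $\hat{x}_0 = \left(x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)\right) / \sqrt{\bar{\alpha}_t}$; a one-function sketch, where `alpha_bar_t` is the cumulative noise-schedule product $\bar{\alpha}_t$ as a tensor:

```python
# Recover the clean-sample estimate x0_hat from the noisy latent x_t and
# the model's noise prediction eps_theta(x_t, t) (standard DDPM identity).
import torch

def predict_x0(x_t, eps_pred, alpha_bar_t):
    return (x_t - torch.sqrt(1.0 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_bar_t)
```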
Unified Diffusion-Based Rigid and Non-Rigid Editing with Text and Image Guidance
Existing text-to-image editing methods tend to excel at either rigid or non-rigid editing but struggle to combine both, producing outputs that are misaligned with the provided text prompts.