Style Transfer

650 papers with code • 2 benchmarks • 17 datasets

Style Transfer is a technique in computer vision and graphics that involves generating a new image by combining the content of one image with the style of another image. The goal of style transfer is to create an image that preserves the content of the original image while applying the visual style of another image.

(Image credit: A Neural Algorithm of Artistic Style)
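For readers who want to try the optimization-based approach from the credited paper, here is a minimal sketch: it matches VGG-19 content features and Gram-matrix style statistics by directly optimizing the pixels of the output image. The layer indices, loss weights, and step count are illustrative assumptions, not the paper's exact settings.

```python
# Minimal optimization-based style transfer in the spirit of Gatys et al.
# Layer choices, weights, and step count are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # only the output image is optimized

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 ... conv5_1 (assumed choice)
CONTENT_LAYER = 21                 # conv4_2 (assumed choice)

def features(x):
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(f):
    # Gram matrix of feature maps: channel-by-channel correlations.
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content_img, style_img, steps=300, style_weight=1e6):
    target_style = [gram(s) for s in features(style_img)[0]]
    target_content = features(content_img)[1]
    image = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([image], lr=0.02)
    for _ in range(steps):
        style_feats, content_feat = features(image)
        loss = F.mse_loss(content_feat, target_content)
        loss = loss + style_weight * sum(
            F.mse_loss(gram(s), t) for s, t in zip(style_feats, target_style))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return image.detach()
```

Inputs are expected as normalized (1, 3, H, W) tensors on `device`; feed-forward style-transfer methods replace this per-image optimization with a trained network.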

Latest papers with no code

MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy

no code yet • 3 Apr 2024

We demonstrate that mesh stylization is a promising approach for creating realistic simulations for downstream tasks such as training and preoperative planning.

MultiParaDetox: Extending Text Detoxification with Parallel Data to New Languages

no code yet • 2 Apr 2024

Text detoxification is a textual style transfer (TST) task in which a text is paraphrased from a toxic surface form (e.g., featuring rude words) into a neutral register.
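As a rough illustration of how such a TST system is applied at inference time, the sketch below paraphrases a toxic input with a sequence-to-sequence model. The checkpoint name is a hypothetical placeholder; substitute any detoxification model trained on parallel data.

```python
# Sketch of text detoxification framed as seq2seq style transfer.
# "your-org/detox-paraphraser" is a hypothetical placeholder checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "your-org/detox-paraphraser"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def detoxify(text: str) -> str:
    # Paraphrase the toxic surface form into a neutral register.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```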

DiffStyler: Diffusion-based Localized Image Style Transfer

no code yet • 27 Mar 2024

Image style transfer aims to imbue digital imagery with the distinctive attributes of style targets, such as colors, brushstrokes, and shapes, while preserving the semantic integrity of the content.

Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D

no code yet • 27 Mar 2024

In recent years, there has been an explosion of 2D vision models for numerous tasks such as semantic segmentation, style transfer or scene editing, enabled by large-scale 2D image datasets.

AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks

no code yet • 21 Mar 2024

In the second stage, AnyV2V can plug in any existing image-to-video model to perform DDIM inversion and intermediate feature injection, maintaining appearance and motion consistency with the source video.
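DDIM inversion here means running the deterministic DDIM sampler in reverse to recover a noise latent from which denoising reconstructs the source. A schematic version of that loop (not AnyV2V's actual code) looks like the following, assuming a noise predictor `eps_model` and a cumulative alpha schedule `alphas_bar`:

```python
# Schematic DDIM inversion: walk a clean latent forward through the
# deterministic DDIM update so denoising from the result reconstructs it.
# `eps_model` and `alphas_bar` are assumed inputs, not a specific API.
import torch

@torch.no_grad()
def ddim_invert(x0, eps_model, alphas_bar, num_steps):
    x = x0
    timesteps = torch.linspace(0, len(alphas_bar) - 1, num_steps).long()
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_cur, a_next = alphas_bar[t_cur], alphas_bar[t_next]
        eps = eps_model(x, t_cur)  # predicted noise at the current step
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # inverted latent at the final timestep
```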

Implicit Style-Content Separation using B-LoRA

no code yet • 21 Mar 2024

In this paper, we introduce B-LoRA, a method that leverages LoRA (Low-Rank Adaptation) to implicitly separate the style and content components of a single image, facilitating various image stylization tasks.
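As background, LoRA augments a frozen pretrained weight with a trainable low-rank update. The sketch below shows the mechanism only; how B-LoRA places and splits these adapters to separate style from content is specific to the paper and not reproduced here.

```python
# Minimal LoRA layer: frozen base weight plus a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weight stays frozen
        # Low-rank factors: update = B @ A, rank << min(in, out)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```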

Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking

no code yet • 21 Mar 2024

In Virtual Reality (VR), adversarial attacks remain a significant security threat.

Enhancing Fingerprint Image Synthesis with GANs, Diffusion Models, and Style Transfer Techniques

no code yet • 20 Mar 2024

A comparable WGAN-GP model achieved a slightly higher FID but performed better in the uniqueness assessment, with a slightly lower FAR when matched against the training data, indicating greater creativity.
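For reference, FID (Fréchet Inception Distance) compares feature statistics of real and generated image sets; a hedged example using torchmetrics (the paper's exact evaluation pipeline is not specified) is shown below.

```python
# Example FID computation with torchmetrics; random tensors stand in for
# real and generated fingerprint images (uint8, shape (N, 3, H, W)).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())  # lower FID means closer feature distributions
```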

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

no code yet • 18 Mar 2024

Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.
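The region-extraction step can be sketched with the `segment_anything` package; the checkpoint path below is an assumption, and the mask-tracking stage across frames is omitted.

```python
# Sketch of SAM-based region extraction on a single frame; tracking the
# regions through the video stream (as LocalStyleFool does) is not shown.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # path assumed
mask_generator = SamAutomaticMaskGenerator(sam)

frame = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(frame)  # list of dicts with "segmentation"
regions = [m["segmentation"] for m in masks]  # boolean HxW region masks
```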

LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model

no code yet • 18 Mar 2024

Specifically, an inter-layer attention module is designed to encourage information exchange and learning between layers, while a text-guided intra-layer attention module incorporates layer-specific prompts to direct content generation for each layer.
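As a rough picture of what an inter-layer attention module might look like, the sketch below lets tokens from all layers attend to one another; it is a generic guess at the shape of such a module, not LayerDiff's actual implementation.

```python
# Illustrative inter-layer attention: flatten the layer and token axes so
# tokens from every image layer can exchange information via attention.
import torch
import torch.nn as nn

class InterLayerAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, num_layers, tokens, dim)
        b, l, t, d = x.shape
        seq = self.norm(x.reshape(b, l * t, d))
        out, _ = self.attn(seq, seq, seq)
        return (x.reshape(b, l * t, d) + out).reshape(b, l, t, d)  # residual
```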