Style Transfer
650 papers with code • 2 benchmarks • 17 datasets
Style Transfer is a computer vision and graphics technique that generates a new image by combining the content of one image with the visual style of another. The goal is an output that preserves the semantic content of the content image while adopting the stylistic appearance (e.g., colors, textures, brushstrokes) of the style image.
(Image credit: A Neural Algorithm of Artistic Style)
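The paper credited above (Gatys et al.) casts style transfer as optimization: a copy of the content image is updated by gradient descent so that its deep VGG features match those of the content image while the Gram matrices of its features match those of the style image. Below is a minimal PyTorch sketch of this optimization-based formulation, assuming torchvision ≥ 0.13 and that content_img and style_img are ImageNet-normalized (1, 3, H, W) tensors; the layer indices, step count, and loss weight are illustrative choices, not prescribed values.

```python
# Minimal sketch of optimization-based neural style transfer (Gatys et al.).
# Assumes torch and torchvision >= 0.13; hyperparameters are illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2 in torchvision's layer indexing
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1
LAST = max(CONTENT_LAYERS | STYLE_LAYERS)

def extract(x):
    """Collect feature maps at the chosen content and style layers."""
    content, style = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = x
        if i == LAST:
            break
    return content, style

def gram(f):
    """Gram matrix of a (1, C, H, W) feature map, normalized by its size."""
    _, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_transfer(content_img, style_img, steps=300, style_weight=1e6):
    """Optimize a copy of the content image to match style statistics."""
    with torch.no_grad():
        target_content, _ = extract(content_img)
        _, style_feats = extract(style_img)
        target_grams = {i: gram(f) for i, f in style_feats.items()}
    out = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([out], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        content, style = extract(out)
        c_loss = sum(F.mse_loss(content[i], target_content[i]) for i in CONTENT_LAYERS)
        s_loss = sum(F.mse_loss(gram(style[i]), target_grams[i]) for i in STYLE_LAYERS)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return out.detach()
```

The style weight controls the trade-off between fidelity to the content image and how strongly the style statistics are imposed; many of the papers listed below replace this per-image optimization with a feed-forward or diffusion-based model.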
Latest papers with no code
Improved Object-Based Style Transfer with Single Deep Network
This research paper proposes a novel methodology for image-to-image style transfer on objects utilizing a single deep convolutional neural network.
Tuning-Free Adaptive Style Incorporation for Structure-Consistent Text-Driven Style Transfer
In this work, we propose a novel solution to the text-driven style transfer task, namely, Adaptive Style Incorporation (ASI), to achieve fine-grained feature-level style incorporation.
Stylizing Sparse-View 3D Scenes with Hierarchical Neural Representation
In this paper, we consider the stylization of sparse-view scenes in terms of disentangling content semantics and style textures.
StylizedGS: Controllable Stylization for 3D Gaussian Splatting
With the rapid development of XR, 3D generation and editing are becoming increasingly important, and stylization is a key tool for editing 3D appearance.
Mitigating analytical variability in fMRI results with style transfer
We propose a novel approach to improve the reproducibility of neuroimaging results by converting statistic maps across different functional MRI pipelines.
MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy
We demonstrate that mesh stylization is a promising approach for creating realistic simulations for downstream tasks such as training and preoperative planning.
MultiParaDetox: Extending Text Detoxification with Parallel Data to New Languages
Text detoxification is a textual style transfer (TST) task in which a text is paraphrased from a toxic surface form, e.g. one featuring rude words, into a neutral register.
DiffStyler: Diffusion-based Localized Image Style Transfer
Image style transfer aims to imbue digital imagery with the distinctive attributes of style targets, such as colors, brushstrokes, and shapes, while preserving the semantic integrity of the content.
Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D
In recent years, there has been an explosion of 2D vision models for numerous tasks such as semantic segmentation, style transfer or scene editing, enabled by large-scale 2D image datasets.
AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks
In the second stage, AnyV2V can plug in any existing image-to-video model to perform DDIM inversion and intermediate feature injection, maintaining appearance and motion consistency with the source video.
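DDIM inversion, mentioned above, deterministically maps a clean latent back along the diffusion trajectory by running the DDIM update in reverse, so that regenerating from the inverted noise approximately reproduces the source. Below is a minimal sketch of a single inversion step; this is the generic DDIM update, not AnyV2V's specific implementation, and the argument names are illustrative.

```python
import torch

def ddim_invert_step(x_t, eps, alpha_bar_t, alpha_bar_next):
    """One deterministic DDIM inversion step (t -> t+1, toward more noise).

    x_t:            current latent at timestep t (tensor)
    eps:            noise predicted by the diffusion model at timestep t
    alpha_bar_t:    cumulative alpha-product at timestep t (scalar tensor)
    alpha_bar_next: cumulative alpha-product at the next, noisier timestep
    """
    # Reconstruct the model's current estimate of the clean latent x0.
    x0_pred = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
    # Re-noise that estimate to the next timestep along the deterministic path.
    return alpha_bar_next.sqrt() * x0_pred + (1 - alpha_bar_next).sqrt() * eps
```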