Image Stylization
23 papers with code • 0 benchmarks • 0 datasets
Image stylization is a task that involves transforming an input image into a new image that has a different style, while preserving the content of the original image. The goal of image stylization is to create visually appealing images with a specific style or aesthetic, such as impressionism, cubism, or surrealism. It can also be used to make images more visually appealing for specific applications, such as social media or advertising.
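A common family of approaches here is neural style transfer, which restyles an image by matching the feature statistics of a style image (often via Gram matrices) while keeping the content of the input. As a minimal, hedged sketch of that idea — using randomly generated feature maps in place of a real network's activations, not any specific paper's implementation:

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation (Gram) matrix of a (C, H, W) feature map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(generated_feats, style_feats):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(generated_feats)
    g_sty = gram_matrix(style_feats)
    return float(np.mean((g_gen - g_sty) ** 2))

# Stand-in feature maps; in practice these would come from a CNN.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))

# Identical features give zero style loss; the optimizer would
# push the generated image's statistics toward the style image's.
assert style_loss(feats, feats) == 0.0
```

In a full pipeline, this style loss is combined with a content loss on deeper-layer activations and minimized over the pixels of the generated image.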
Most implemented papers
TileGAN: Synthesis of Large-Scale Non-Homogeneous Textures
We tackle the problem of texture synthesis in the setting where many input images are given and a large-scale output is required.
Stylization-Based Architecture for Fast Deep Exemplar Colorization
Exemplar-based colorization aims to add colors to a grayscale image guided by a content-related reference image.
Foreground-Aware Stylization and Consensus Pseudo-Labeling for Domain Adaptation of First-Person Hand Segmentation
We validated our method on domain adaptation of hand segmentation from real and simulation images.
JoJoGAN: One Shot Face Stylization
The paired dataset is then used to fine-tune a StyleGAN.
Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization
Despite the impressive results of image-guided style transfer methods, text-driven image stylization has recently been proposed to transfer a natural image into a stylized one according to a textual description of the target style provided by the user.
Interactive Control over Temporal Consistency while Stylizing Video Streams
For stylization tasks, however, consistency control is an essential requirement as a certain amount of flickering adds to the artistic look and feel.
A Large-scale Film Style Dataset for Learning Multi-frequency Driven Film Enhancement
In order to facilitate film-based image stylization research, we construct FilmSet, a large-scale and high-quality film style dataset.
ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
Diffusion models show promising generation capability for a variety of data.
Instant Neural Radiance Fields Stylization
Our approach models a neural radiance field based on neural graphics primitives, which use a hash table-based position encoder for position embedding.