Image Stylization
23 papers with code • 0 benchmarks • 0 datasets
Image stylization transforms an input image into a new image with a different style while preserving the content of the original. The goal is to create visually appealing images in a specific style or aesthetic, such as impressionism, cubism, or surrealism; it can also make images more attractive for specific applications, such as social media or advertising.
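A common way to formalize "different style, same content" is the classic neural style transfer objective: match the content image's feature activations while matching the style image's feature correlations (Gram matrices). The sketch below is a minimal NumPy illustration of that loss, not the method of any paper listed here; the feature maps, `alpha`, and `beta` are assumptions.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-wise correlations,
    which capture texture/style statistics independent of spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_content_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    """Weighted sum of a content loss (feature MSE against the content image)
    and a style loss (Gram-matrix MSE against the style image)."""
    content_loss = np.mean((gen_feats - content_feats) ** 2)
    style_loss = np.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)
    return alpha * content_loss + beta * style_loss
```

In practice the feature maps come from a pretrained network (e.g. VGG layers) and the generated image is optimized to minimize this loss; the weights `alpha`/`beta` trade off content preservation against stylization strength.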
Latest papers
One-Shot Structure-Aware Stylized Image Synthesis
While GAN-based models have been successful in image stylization tasks, they often struggle to preserve structure when stylizing a wide range of input images.
Pixel-Aware Stable Diffusion for Realistic Image Super-resolution and Personalized Stylization
Diffusion models have demonstrated impressive performance in various image generation, editing, enhancement and translation tasks.
Controlling Geometric Abstraction and Texture for Artistic Images
We present a novel method for the interactive control of geometric abstraction and texture in artistic images.
Instant Neural Radiance Fields Stylization
Our approach models a neural radiance field based on neural graphics primitives, which use a hash table-based position encoder for position embedding.
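The hash-table-based position encoder mentioned here follows the neural-graphics-primitives idea: map each 3D grid corner to a learned feature table via a spatial hash, then interpolate. Below is a simplified single-level sketch in NumPy (the real encoder uses many resolutions and trained tables); the prime constants are the commonly used spatial-hash primes, and everything else is an illustrative assumption.

```python
import numpy as np

PRIMES = (1, 2654435761, 805459861)  # spatial-hash primes (one per axis)

def hash_coords(coords, table_size):
    """Spatial hash of integer grid coordinates into a feature-table index
    (XOR of per-axis products, modulo the table size)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for i, p in enumerate(PRIMES):
        h ^= coords[..., i].astype(np.uint64) * np.uint64(p)
    return h % np.uint64(table_size)

def encode(positions, table, resolution):
    """Trilinearly interpolate hashed corner features for points in [0, 1]^3.

    positions: (N, 3) points; table: (T, F) feature table; resolution: grid size.
    """
    scaled = positions * (resolution - 1)
    base = np.floor(scaled).astype(np.int64)
    frac = scaled - base
    out = np.zeros((positions.shape[0], table.shape[1]))
    for corner in range(8):  # the 8 corners of the surrounding voxel
        offset = np.array([(corner >> d) & 1 for d in range(3)])
        w = np.prod(np.where(offset, frac, 1 - frac), axis=1, keepdims=True)
        out += w * table[hash_coords(base + offset, table.shape[0])]
    return out
```

Because lookups are O(1) and collisions are resolved implicitly by the downstream MLP, this encoding is what makes "instant" radiance-field training and stylization feasible.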
ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
Diffusion models show promising generation capability for a variety of data.
A Large-scale Film Style Dataset for Learning Multi-frequency Driven Film Enhancement
In order to facilitate film-based image stylization research, we construct FilmSet, a large-scale and high-quality film style dataset.
Interactive Control over Temporal Consistency while Stylizing Video Streams
For stylization tasks, however, consistency control is an essential requirement as a certain amount of flickering adds to the artistic look and feel.
EDICT: Exact Diffusion Inversion via Coupled Transformations
EDICT enables mathematically exact inversion of real and model-generated images by maintaining two coupled noise vectors which are used to invert each other in an alternating fashion.
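The exact-inversion trick is that each of the two coupled sequences is updated using only the *other* one, so every step can be undone in closed form, much like affine coupling layers in normalizing flows. The toy sketch below uses a stand-in function `f` in place of the diffusion model's noise prediction and hypothetical mixing coefficients; it illustrates the invertibility property, not EDICT's actual update equations.

```python
import numpy as np

def f(v):
    """Stand-in for the diffusion model's noise prediction (hypothetical)."""
    return np.tanh(v)

def couple_step(x, y, a=0.93, b=0.07):
    """One coupled update: each sequence is shifted using the other one."""
    x_new = a * x + b * f(y)
    y_new = a * y + b * f(x_new)
    return x_new, y_new

def couple_step_inverse(x_new, y_new, a=0.93, b=0.07):
    """Exact inverse: undo the two updates in reverse order."""
    y = (y_new - b * f(x_new)) / a
    x = (x_new - b * f(y)) / a
    return x, y
```

Running `couple_step` followed by `couple_step_inverse` recovers the inputs up to floating-point error, regardless of how nonlinear `f` is, which is why this construction supports exact inversion of real images.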
DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization
Although arbitrary image-guided style transfer methods already achieve impressive results, text-driven image stylization has recently been proposed to transfer a natural image into a stylized one according to a textual description of the target style provided by the user.
WISE: Whitebox Image Stylization by Example-based Learning
Image-based artistic rendering can synthesize a variety of expressive styles using algorithmic image filtering.
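As a tiny illustration of what "algorithmic image filtering" means here, the sketch below posterizes an intensity image by quantizing it to a few levels, one of the simplest whitebox stylization filters. It assumes a float image normalized to [0, 1] and is only a minimal example, not the WISE pipeline.

```python
import numpy as np

def posterize(img, levels=4):
    """Quantize intensities in a [0, 1] float image to `levels` bands,
    producing flat color regions reminiscent of poster art."""
    img = np.asarray(img, dtype=float)
    q = np.floor(img * levels) / levels
    return np.clip(q, 0.0, 1.0)
```

Whitebox approaches chain such interpretable filters (quantization, edge darkening, smoothing) with learnable parameters, so a style learned by example remains editable by the artist.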