Video Style Transfer
14 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in Video Style Transfer.
These and our qualitative results, ranging from small image patches to megapixel stylized images and videos, show that our approach better captures the subtle ways in which a style affects content.
Image style transfer models based on convolutional neural networks usually suffer from high temporal inconsistency when applied to videos.
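A common way to quantify this inconsistency is a warping loss: the previous stylized frame is warped to the current one with optical flow, and differences in non-occluded regions are penalized. Below is a minimal sketch, assuming PyTorch tensors, a backward flow in pixel units with x-displacement in channel 0, and an occlusion mask; all names and conventions are illustrative assumptions, not the formulation of any specific paper here.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(stylized_t, stylized_tm1, flow, occlusion_mask):
    """Warping loss often used to measure temporal inconsistency.

    stylized_t, stylized_tm1: stylized frames at t and t-1, shape (B, C, H, W).
    flow: backward optical flow from frame t to t-1, shape (B, 2, H, W),
          in pixels (channel 0 = x, channel 1 = y); assumed convention.
    occlusion_mask: (B, 1, H, W), 1 where the flow is reliable.
    """
    b, _, h, w = flow.shape
    # Build a pixel-coordinate grid and displace it by the flow.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(flow.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                            # (B, 2, H, W)
    # Normalize coordinates to [-1, 1] as grid_sample expects (x first).
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)             # (B, H, W, 2)
    warped_tm1 = F.grid_sample(stylized_tm1, grid, align_corners=True)
    # Penalize frame-to-frame differences only in non-occluded regions.
    return (occlusion_mask * (stylized_t - warped_tm1).abs()).mean()
```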
Finally, the content features are normalized so that they exhibit the same local feature statistics as the calculated per-point weighted style feature statistics.
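A minimal sketch of this kind of per-point weighted normalization, assuming the per-point weights come from attention between content and style features (in the spirit of attention-based style transfer); every shape and name here is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

def per_point_weighted_normalization(content_feat, style_feat):
    """Normalize content features to attention-weighted style statistics.

    content_feat: (B, C, Nc) flattened content features.
    style_feat:   (B, C, Ns) flattened style features.
    """
    # Attention between normalized content queries and style keys.
    q = F.normalize(content_feat, dim=1).transpose(1, 2)   # (B, Nc, C)
    k = F.normalize(style_feat, dim=1)                     # (B, C, Ns)
    attn = torch.softmax(torch.bmm(q, k), dim=-1)          # (B, Nc, Ns)
    v = style_feat.transpose(1, 2)                         # (B, Ns, C)
    # Per-point weighted style statistics: each content position gets
    # its own target mean and standard deviation.
    mean = torch.bmm(attn, v)                              # (B, Nc, C)
    var = torch.bmm(attn, v * v) - mean * mean
    std = torch.clamp(var, min=1e-8).sqrt()
    # Instance-normalize the content features, then rescale and shift
    # them to match the local style statistics computed above.
    c = content_feat.transpose(1, 2)                       # (B, Nc, C)
    c_norm = (c - c.mean(dim=1, keepdim=True)) / (c.std(dim=1, keepdim=True) + 1e-8)
    return (std * c_norm + mean).transpose(1, 2)           # (B, C, Nc)
```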
We present a method that decomposes, or "unwraps", an input video into a set of layered 2D atlases, each providing a unified representation of the appearance of an object (or background) over the video.
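Once a video is unwrapped this way, an edit made once on an atlas can be propagated to every frame through per-pixel atlas coordinates. The sketch below assumes such coordinates and a soft opacity map are already available (in the paper they come from learned mapping networks); the function and argument names are hypothetical.

```python
import torch
import torch.nn.functional as F

def apply_atlas_edit(edited_atlas, uv, alpha):
    """Map an edited 2D atlas back onto video frames.

    edited_atlas: (1, C, Ha, Wa) texture shared by all frames.
    uv:    (T, H, W, 2) per-pixel atlas coordinates in [-1, 1] for T frames.
    alpha: (T, 1, H, W) soft layer opacity.
    """
    t = uv.shape[0]
    atlas = edited_atlas.expand(t, -1, -1, -1)
    # Sample the shared atlas at each frame's coordinates; because every
    # frame reads from the same texture, an edit applied once to the atlas
    # appears consistently across the whole video.
    layer = F.grid_sample(atlas, uv, align_corners=True)   # (T, C, H, W)
    return alpha * layer
```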
In this article, we address the problem by jointly considering the intrinsic properties of stylization and temporal consistency.
Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as a fixed frame size, the requirement of face alignment, the loss of non-facial details, and temporal inconsistency.
The loss of content affinity, in both features and pixels, is a main problem leading to artifacts in photorealistic and video style transfer.
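Content affinity here refers to the pairwise similarity between positions in a feature map (or, for pixel affinity, in the RGB frame itself); preserving it means the stylized output should keep the content's internal similarity structure. A hedged sketch of such a feature-affinity term, with illustrative names and shapes, is below; it is not the loss of any specific paper on this page.

```python
import torch
import torch.nn.functional as F

def affinity(feat):
    """Pairwise cosine affinity between spatial positions.

    feat: (B, C, H, W) feature map; returns (B, H*W, H*W).
    The matrix is O((H*W)^2), so features are usually downsampled first.
    """
    b, c, h, w = feat.shape
    f = F.normalize(feat.reshape(b, c, h * w), dim=1)
    return torch.bmm(f.transpose(1, 2), f)

def affinity_preservation_loss(content_feat, stylized_feat):
    """Penalize changes to the content's internal similarity structure.

    Pixel affinity can be computed the same way by passing (downsampled)
    RGB frames instead of deep features.
    """
    return (affinity(content_feat) - affinity(stylized_feat)).abs().mean()
```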