4 papers with code • 0 benchmarks • 0 datasets
These results, together with our qualitative results ranging from small image patches to megapixel stylized images and videos, show that our approach better captures the subtle ways in which a style affects content.
Image style transfer models based on convolutional neural networks often suffer from severe temporal inconsistency (flicker) when applied to videos.
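Temporal inconsistency in stylized video is commonly quantified by warping the previous stylized frame to the current one with the optical flow and measuring the residual difference at non-occluded pixels. The sketch below illustrates this idea under simplifying assumptions (integer ground-truth flow, grayscale frames as nested lists); `warp` and `temporal_error` are illustrative names, not functions from any of the papers listed here.

```python
# Hypothetical sketch of a warped-difference temporal-inconsistency measure.
# Assumes integer per-pixel flow and grayscale frames given as 2-D lists.

def warp(frame, flow):
    """Warp a frame forward by an integer optical flow.

    flow[y][x] = (dy, dx) is the motion of pixel (y, x) from the previous
    frame to the current frame; pixels that leave the image are left as
    None (treated as occluded/unknown).
    """
    h, w = len(frame), len(frame[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = frame[y][x]
    return out

def temporal_error(prev_stylized, curr_stylized, flow):
    """Mean squared difference between the current stylized frame and the
    flow-warped previous stylized frame, skipping occluded pixels."""
    warped = warp(prev_stylized, flow)
    diffs = [
        (warped[y][x] - curr_stylized[y][x]) ** 2
        for y in range(len(warped))
        for x in range(len(warped[0]))
        if warped[y][x] is not None
    ]
    return sum(diffs) / len(diffs) if diffs else 0.0

# Toy 2x2 example: the scene shifts one pixel to the right; a temporally
# consistent stylization shifts with it, so the error at tracked pixels is 0.
prev_s = [[1.0, 2.0], [3.0, 4.0]]
flow = [[(0, 1), (0, 1)], [(0, 1), (0, 1)]]  # every pixel moves right
curr_s = [[0.0, 1.0], [0.0, 3.0]]            # left column is newly revealed
print(temporal_error(prev_s, curr_s, flow))  # 0.0
```

A high value of this error reads visually as flicker: the stylized texture changes between frames even where the underlying content merely moved.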
We present the Creative Flow+ Dataset, the first diverse multi-style artistic video dataset richly labeled with per-pixel optical flow, occlusions, correspondences, segmentation labels, normals, and depth.
In this article, we address this problem by jointly considering the intrinsic properties of stylization and temporal consistency.