Video Style Transfer

14 papers with code • 0 benchmarks • 0 datasets

Latest papers with no code

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

no code yet • 18 Mar 2024

Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.

ColoristaNet for Photorealistic Video Style Transfer

no code yet • 19 Dec 2022

The style removal network removes the original image styles, and the style restoration network recovers image styles in a supervised manner.

Stylizing 3D Scene via Implicit Representation and HyperNetwork

no code yet • 27 May 2021

Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance fields model, and a hypernetwork to transfer the style information into the scene representation.

Real-time Localized Photorealistic Video Style Transfer

no code yet • 20 Oct 2020

We present a novel algorithm for transferring artistic styles of semantically meaningful local regions of an image onto local regions of a target video while preserving its photorealism.

Arbitrary Video Style Transfer via Multi-Channel Correlation

no code yet • 17 Sep 2020

To this end, we propose the Multi-Channel Correlation network (MCCNet), which can be trained to fuse the exemplar style features and input content features for efficient style transfer while naturally maintaining the coherence of input videos.
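The exact MCCNet formulation is not given here; as a rough illustration of the idea of weighting a style-content fusion by per-channel correlation, a minimal NumPy sketch (all function and variable names are hypothetical) might look like:

```python
import numpy as np

def channel_correlation_fuse(content, style):
    """Hypothetical sketch: reweight each style channel by its cosine
    correlation with the matching content channel, then fuse.
    content, style: (C, H*W) feature maps flattened over spatial positions."""
    c_norm = content / (np.linalg.norm(content, axis=1, keepdims=True) + 1e-8)
    s_norm = style / (np.linalg.norm(style, axis=1, keepdims=True) + 1e-8)
    # Per-channel cosine similarity between content and style features, shape (C, 1)
    corr = np.sum(c_norm * s_norm, axis=1, keepdims=True)
    # Channels whose style features align with the content contribute more
    return content + corr * style
```

Because the fusion weights depend only on the two feature maps, the same transform can be applied frame by frame without introducing extra temporal state.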

Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer

no code yet • ECCV 2020

This paper proposes to learn a lightweight video style transfer network via a knowledge distillation paradigm.
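The abstract does not spell out the training objective, but a distillation setup for video style transfer typically combines a term matching the teacher's stylized output with a temporal term that penalizes deviation from the flow-warped previous output. A minimal NumPy sketch under those assumptions (function names and weights are hypothetical, and the nearest-neighbour warp is a toy stand-in for bilinear sampling):

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp a frame (H, W, C) by an optical flow field (H, W, 2)
    using nearest-neighbour sampling."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def distillation_loss(student_out, teacher_out, prev_student_out, flow,
                      alpha=1.0, beta=0.5):
    """Toy objective: imitate the teacher's stylized frame while staying
    temporally consistent with the flow-warped previous student output."""
    distill = np.mean((student_out - teacher_out) ** 2)
    temporal = np.mean((student_out - warp(prev_student_out, flow)) ** 2)
    return alpha * distill + beta * temporal
```

At test time only the lightweight student runs; the teacher and the optical flow are needed during training alone, which is what makes the distilled network cheap to deploy.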

MVStylizer: An Efficient Edge-Assisted Video Photorealistic Style Transfer System for Mobile Phones

no code yet • 24 May 2020

Instead of performing stylization frame by frame, only key frames in the original video are processed by a pre-trained deep neural network (DNN) on edge servers, while the remaining intermediate frames are generated by our designed optical-flow-based frame interpolation algorithm on mobile phones.

HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks

no code yet • 10 Dec 2019

To combine the benefits of image and video models, we propose an image-to-video model transfer method called Hyperconsistency (HyperCon) that transforms any well-trained image model into a temporally consistent video model without fine-tuning.

Learning Linear Transformations for Fast Image and Video Style Transfer

no code yet • CVPR 2019

Given an arbitrary pair of images, a universal style transfer method extracts the style from a reference image and synthesizes an output that preserves the content of the other image.
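The paper learns its transformation matrix with a network; the classical closed-form linear transform it builds on (the whitening-coloring transform over flattened features) can be sketched in NumPy as follows. This is a generic illustration, not the paper's learned method:

```python
import numpy as np

def whitening_coloring(content, style, eps=1e-5):
    """Closed-form linear style transfer: whiten the content features,
    then colour them with the style feature covariance.
    content, style: (C, N) feature maps flattened over spatial positions."""
    def cov_power(x, power):
        # Matrix power of the (regularized) channel covariance via eigendecomposition
        x = x - x.mean(axis=1, keepdims=True)
        cov = x @ x.T / (x.shape[1] - 1) + eps * np.eye(x.shape[0])
        vals, vecs = np.linalg.eigh(cov)
        return vecs @ np.diag(vals ** power) @ vecs.T

    c_centered = content - content.mean(axis=1, keepdims=True)
    whitened = cov_power(content, -0.5) @ c_centered   # decorrelate content channels
    colored = cov_power(style, 0.5) @ whitened         # impose style covariance
    return colored + style.mean(axis=1, keepdims=True)
```

Because the whole transfer is one matrix multiply per feature map, replacing the eigendecomposition with a learned linear layer (as the paper does) makes the transform fast enough for video.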

A Flexible Convolutional Solver for Fast Style Transfers

no code yet • CVPR 2019

We propose a new flexible deep convolutional neural network (convnet) to perform fast neural style transfers.