Video Style Transfer
14 papers with code • 0 benchmarks • 0 datasets
Latest papers with no code
LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model
Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.
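A minimal sketch of the region-tracking idea: assuming per-frame masks have already been produced by SAM (the SAM call itself is omitted), regions are matched across consecutive frames by mask overlap so that each region keeps a stable identity through the video. The IoU matching and its threshold are illustrative stand-ins for the paper's tracking procedure.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def propagate_region_ids(prev_masks: dict, cur_masks: list, thresh: float = 0.3) -> dict:
    """Give each current-frame mask the ID of the best-overlapping previous-frame
    region, so a stylized region stays 'the same region' from frame to frame;
    unmatched masks receive fresh IDs."""
    next_id = max(prev_masks, default=-1) + 1
    out = {}
    for m in cur_masks:
        best_id, best_iou = None, 0.0
        for rid, pm in prev_masks.items():
            score = iou(m, pm)
            if score > best_iou:
                best_id, best_iou = rid, score
        if best_id is not None and best_iou >= thresh:
            out[best_id] = m
        else:
            out[next_id] = m
            next_id += 1
    return out

# toy usage: one square region that shifts by a pixel keeps its ID
frame0 = {0: np.zeros((4, 4), bool)}
frame0[0][:2, :2] = True
frame1_masks = [np.roll(frame0[0], 1, axis=1)]
tracked = propagate_region_ids(frame0, frame1_masks)   # region keeps ID 0
```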
ColoristaNet for Photorealistic Video Style Transfer
The style removal network removes the original image styles, and the style restoration network recovers image styles in a supervised manner.
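A hedged sketch of the two-stage pipeline the abstract describes, with placeholder layer stacks standing in for the actual ColoristaNet architecture: a removal network strips the input's style into a structure representation, and a restoration network re-applies a reference style on top of it.

```python
import torch
import torch.nn as nn

class StyleRemoval(nn.Module):
    """Placeholder encoder that strips the frame's colour style down to a
    structure-preserving feature map (stand-in for the paper's removal net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class StyleRestoration(nn.Module):
    """Placeholder decoder that re-applies a reference style to the de-styled
    features; in the paper this stage is trained with supervision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(32 + 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, feats, style_img):
        style = nn.functional.interpolate(style_img, size=feats.shape[-2:])
        return self.net(torch.cat([feats, style], dim=1))

content = torch.rand(1, 3, 256, 256)   # a video frame
style = torch.rand(1, 3, 256, 256)     # the photorealistic style reference
stylized = StyleRestoration()(StyleRemoval()(content), style)
```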
Stylizing 3D Scene via Implicit Representation and HyperNetwork
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance fields model, and a hypernetwork to transfer the style information into the scene representation.
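A toy illustration of the hypernetwork component, under the assumption that the radiance field's colour branch is a small MLP whose weights are generated from a style embedding; all layer sizes and names here are invented for the example.

```python
import torch
import torch.nn as nn

class ColorHead(nn.Module):
    """Tiny stand-in for the colour branch of a radiance field: maps a per-point
    feature to RGB using externally supplied weights."""
    def forward(self, feat, w1, b1, w2, b2):
        h = torch.relu(feat @ w1.T + b1)
        return torch.sigmoid(h @ w2.T + b2)

class StyleHyperNetwork(nn.Module):
    """Maps a style embedding to the parameters of ColorHead, so one scene
    representation can be re-coloured per style without retraining the scene."""
    def __init__(self, style_dim=64, feat_dim=128, hidden=64):
        super().__init__()
        self.feat_dim, self.hidden = feat_dim, hidden
        n_params = hidden * feat_dim + hidden + 3 * hidden + 3
        self.mlp = nn.Sequential(nn.Linear(style_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_params))
    def forward(self, z):
        p = self.mlp(z)
        i = 0
        w1 = p[i:i + self.hidden * self.feat_dim].view(self.hidden, self.feat_dim)
        i += self.hidden * self.feat_dim
        b1 = p[i:i + self.hidden]; i += self.hidden
        w2 = p[i:i + 3 * self.hidden].view(3, self.hidden); i += 3 * self.hidden
        b2 = p[i:i + 3]
        return w1, b1, w2, b2

style_z = torch.rand(64)            # embedding of the style image (assumed)
point_feat = torch.rand(1024, 128)  # per-point scene features (assumed)
rgb = ColorHead()(point_feat, *StyleHyperNetwork()(style_z))
```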
Real-time Localized Photorealistic Video Style Transfer
We present a novel algorithm for transferring artistic styles of semantically meaningful local regions of an image onto local regions of a target video while preserving its photorealism.
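A minimal sketch of what "local regions onto local regions" amounts to at the compositing level: a stylization function is applied only inside a semantic mask, and the photorealistic original is kept everywhere else. Both `style_fn` and the mask source are assumed inputs, not the paper's actual pipeline.

```python
import torch

def regional_stylize(frame, style_fn, region_mask):
    """Apply a stylization callable only inside a boolean region mask and keep
    the untouched photorealistic frame outside it."""
    mask = region_mask.float().unsqueeze(0)          # (1, H, W), 1 inside the region
    return mask * style_fn(frame) + (1 - mask) * frame

frame = torch.rand(3, 256, 256)
region_mask = torch.zeros(256, 256, dtype=torch.bool)
region_mask[64:192, 64:192] = True                   # toy region, e.g. a shirt or the sky
styled = regional_stylize(frame, lambda f: f.flip(0), region_mask)
```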
Arbitrary Video Style Transfer via Multi-Channel Correlation
Towards this end, we propose the Multi-Channel Correlation network (MCCNet), which can be trained to fuse exemplar style features and input content features for efficient style transfer while naturally maintaining the coherence of input videos.
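An illustrative (not the paper's exact) channel-correlation fusion: a channel-by-channel covariance of the style features is turned into mixing weights that recombine the content feature channels, which is the flavour of fusion the abstract points at.

```python
import torch

def channel_correlation_fusion(content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    """Re-weight content features with a channel-correlation matrix computed from
    the style features. Inputs are (B, C, H, W) feature maps from a shared encoder."""
    b, c, h, w = content.shape
    s = style.flatten(2)                                 # (B, C, Hs*Ws)
    s = s - s.mean(dim=2, keepdim=True)                  # zero-mean per channel
    corr = torch.bmm(s, s.transpose(1, 2)) / s.shape[2]  # (B, C, C) channel covariance
    corr = torch.softmax(corr, dim=-1)                   # normalise rows into mixing weights
    x = content.flatten(2)                               # (B, C, H*W)
    fused = torch.bmm(corr, x)                           # mix content channels per style
    return fused.view(b, c, h, w)

content_feat = torch.rand(1, 256, 32, 32)
style_feat = torch.rand(1, 256, 32, 32)
out = channel_correlation_fusion(content_feat, style_feat)
```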
Optical Flow Distillation: Towards Efficient and Stable Video Style Transfer
This paper proposes to learn a lightweight video style transfer network via a knowledge distillation paradigm.
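A hedged sketch of the two losses such a setup implies: imitate the heavy teacher's stylized frame, and keep consecutive student outputs consistent under optical flow. The flow is assumed to be precomputed, and the warp helper and equal loss weights are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a frame (B, C, H, W) with optical flow (B, 2, H, W) given in
    pixels, using bilinear sampling."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame)   # pixel coordinate grid
    coords = grid.unsqueeze(0) + flow                        # sampling locations
    coords_x = 2 * coords[:, 0] / (w - 1) - 1                # normalise to [-1, 1]
    coords_y = 2 * coords[:, 1] / (h - 1) - 1
    norm = torch.stack((coords_x, coords_y), dim=-1)         # (B, H, W, 2)
    return F.grid_sample(frame, norm, align_corners=True)

def distillation_loss(student_prev, student_cur, teacher_cur, flow_prev_to_cur):
    """Imitation term against the teacher plus a temporal term that penalises the
    student's current output deviating from its warped previous output."""
    imitation = F.l1_loss(student_cur, teacher_cur)
    temporal = F.l1_loss(student_cur, warp(student_prev, flow_prev_to_cur))
    return imitation + temporal

teacher = lambda f: f            # placeholder for the heavy teacher network
student = lambda f: f * 0.99     # placeholder for the lightweight student
prev, cur = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
loss = distillation_loss(student(prev), student(cur), teacher(cur), torch.zeros(1, 2, 64, 64))
```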
MVStylizer: An Efficient Edge-Assisted Video Photorealistic Style Transfer System for Mobile Phones
Instead of performing stylization frame by frame, only key frames in the original video are processed by a pre-trained deep neural network (DNN) on edge servers, while the remaining intermediate stylized frames are generated by our designed optical-flow-based frame interpolation algorithm on mobile phones.
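A system-level sketch of the split described above: keyframes go through a placeholder edge-server stylizer, and the frames in between are filled by a placeholder interpolation callable standing in for the paper's optical-flow-based algorithm. `key_interval` and both callables are assumptions made for illustration.

```python
import torch

def stylize_video(frames, edge_stylize, interpolate, key_interval=8):
    """Stylize only every key_interval-th frame with the (remote) DNN and fill the
    rest by interpolating between the two nearest stylized keyframes."""
    stylized = [None] * len(frames)
    keys = list(range(0, len(frames), key_interval))
    if keys[-1] != len(frames) - 1:
        keys.append(len(frames) - 1)
    for k in keys:                                   # expensive path: edge server
        stylized[k] = edge_stylize(frames[k])
    for a, b in zip(keys, keys[1:]):                 # cheap path: on-device interpolation
        for t in range(a + 1, b):
            alpha = (t - a) / (b - a)
            stylized[t] = interpolate(stylized[a], stylized[b], frames[t], alpha)
    return stylized

# toy stand-ins so the sketch runs end to end
frames = [torch.rand(3, 240, 320) for _ in range(20)]
edge_stylize = lambda f: f.flip(0)                         # placeholder for the server DNN
interpolate = lambda s0, s1, f, a: (1 - a) * s0 + a * s1   # placeholder for flow-based interp
out = stylize_video(frames, edge_stylize, interpolate)
```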
HyperCon: Image-To-Video Model Transfer for Video-To-Video Translation Tasks
To combine the benefits of image and video models, we propose an image-to-video model transfer method called Hyperconsistency (HyperCon) that transforms any well-trained image model into a temporally consistent video model without fine-tuning.
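A hedged sketch of one way to read this: run the off-the-shelf image model on each frame independently, then average each output with its temporal neighbours to damp flicker. HyperCon additionally uses frame-rate interpolation and flow-based alignment, which are omitted here, so this only gestures at the mechanism rather than reproducing it.

```python
import torch

def translate_then_aggregate(frames, image_model, radius=2):
    """Apply an image-to-image model per frame, then smooth each translated frame
    by averaging it with its temporal neighbourhood (naive aggregation)."""
    translated = [image_model(f) for f in frames]          # per-frame image model
    out = []
    for t in range(len(translated)):
        lo, hi = max(0, t - radius), min(len(translated), t + radius + 1)
        window = torch.stack(translated[lo:hi])             # temporal neighbourhood
        out.append(window.mean(dim=0))                      # consistency by averaging
    return out

frames = [torch.rand(3, 128, 128) for _ in range(10)]
image_model = lambda f: f * 0.8 + 0.1                       # placeholder image model
video_out = translate_then_aggregate(frames, image_model)
```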
Learning Linear Transformations for Fast Image and Video Style Transfer
Given a random pair of images, a universal style transfer method extracts the feel from a reference image to synthesize an output based on the look of a content image.
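As context for the "learned linear transformation", here is the analytic whitening/colouring transform it replaces: a single matrix, computed from content and style feature covariances, maps the content features toward the style statistics. The paper learns this matrix with a small network instead of computing it in closed form, so treat this as the baseline it improves on rather than the proposed method.

```python
import torch

def linear_style_transform(content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    """Closed-form whitening/colouring on encoder features: whiten the content
    covariance, then colour with the style covariance. Inputs are (C, H, W)
    feature maps from the same pretrained encoder."""
    c, h, w = content.shape
    x = content.reshape(c, -1)
    s = style.reshape(c, -1)
    x_mean, s_mean = x.mean(1, keepdim=True), s.mean(1, keepdim=True)
    x, s = x - x_mean, s - s_mean
    cov_x = x @ x.T / (x.shape[1] - 1) + 1e-5 * torch.eye(c)
    cov_s = s @ s.T / (s.shape[1] - 1) + 1e-5 * torch.eye(c)
    ex, vx = torch.linalg.eigh(cov_x)          # whitening basis of content
    es, vs = torch.linalg.eigh(cov_s)          # colouring basis of style
    whiten = vx @ torch.diag(ex.clamp_min(1e-8).rsqrt()) @ vx.T
    colour = vs @ torch.diag(es.clamp_min(1e-8).sqrt()) @ vs.T
    out = colour @ (whiten @ x) + s_mean       # one linear map plus the style mean
    return out.reshape(c, h, w)

content_feat = torch.rand(64, 32, 32)
style_feat = torch.rand(64, 40, 40)
stylized_feat = linear_style_transform(content_feat, style_feat)
```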
A Flexible Convolutional Solver for Fast Style Transfers
We propose a new flexible deep convolutional neural network (convnet) to perform fast neural style transfers.