Video-to-Video Synthesis
8 papers with code • 2 benchmarks • 1 dataset
Latest papers with no code
MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy
We demonstrate that mesh stylization is a promising approach for creating realistic simulations for downstream tasks such as training and preoperative planning.
Translation-based Video-to-Video Synthesis
Translation-based Video Synthesis (TVS) has emerged as a vital research area in computer vision, aiming to transform videos between distinct domains while preserving both temporal continuity and the underlying content.
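As a rough formalization (our notation, not drawn from any single paper): a mapping G takes a source-domain clip to the target domain, trained with a per-frame matching term plus a temporal term that penalizes flicker between consecutive outputs.

```latex
% Sketch of a translation-based video synthesis objective (our notation;
% G, \mathcal{W}, and \lambda are illustrative, not from a specific paper).
\[
  \tilde{y}_{1:T} = G(x_{1:T}), \qquad
  \min_G \; \mathcal{L}_{\mathrm{match}}\bigl(\tilde{y}_{1:T},\, y_{1:T}\bigr)
  \;+\; \lambda \sum_{t=2}^{T}
  \bigl\lVert \tilde{y}_t - \mathcal{W}_{t-1 \to t}(\tilde{y}_{t-1}) \bigr\rVert_1
\]
% \mathcal{W}_{t-1 \to t} warps the previous output along estimated optical
% flow, so the second term enforces the temporal continuity mentioned above.
```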
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis
This enables our model to perform video synthesis by editing the first frame with any prevalent I2I model and then propagating the edits to successive frames.
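As an illustration of that propagate-from-the-first-frame idea, here is a minimal sketch (not the authors' implementation, which additionally conditions a generative model so that imperfect flow can be tolerated). `edit_image` stands in for any off-the-shelf I2I editor and `estimate_flow` for any optical-flow estimator; both are hypothetical placeholders.

```python
import numpy as np
import cv2

def propagate_first_frame_edit(frames, edit_image, estimate_flow):
    """Edit frame 0 with an I2I model, then warp that edit forward."""
    edited = [edit_image(frames[0])]
    for t in range(1, len(frames)):
        # Flow from frame t back to frame t-1: where each output pixel
        # should be sampled from in the previously edited frame.
        flow = estimate_flow(frames[t], frames[t - 1])
        h, w = flow.shape[:2]
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        # Warp the previous edited frame along the flow.
        edited.append(cv2.remap(edited[-1], map_x, map_y, cv2.INTER_LINEAR))
    return edited
```

Pure warping like this accumulates errors wherever the flow is wrong, which is exactly the imperfection the paper's model is designed to tame.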
Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis
In this paper, we introduce Fairy, a minimalist yet robust adaptation of image-editing diffusion models, enhancing them for video editing applications.
Unsupervised Action Localization Crop in Video Retargeting for 3D ConvNets
To corroborate the effectiveness of the proposed method, we evaluate on the video classification task, comparing our dynamic cropping technique with random cropping on three benchmark datasets.
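The comparison is easy to picture with a toy sketch. The frame-difference heuristic below is our illustrative stand-in for unsupervised action localization, not the paper's exact method:

```python
import numpy as np

def random_crop(video, size):
    """video: (T, H, W, C) array; returns a random (T, size, size, C) crop."""
    _, h, w, _ = video.shape
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    return video[:, y:y + size, x:x + size, :]

def motion_centered_crop(video, size):
    """Crop around the region with the highest motion energy."""
    # Per-pixel motion energy from consecutive frame differences.
    energy = np.abs(np.diff(video.astype(np.float32), axis=0)).sum(axis=(0, 3))
    cy, cx = np.unravel_index(np.argmax(energy), energy.shape)
    _, h, w, _ = video.shape
    y = int(np.clip(cy - size // 2, 0, h - size))
    x = int(np.clip(cx - size // 2, 0, w - size))
    return video[:, y:y + size, x:x + size, :]
```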
World-Consistent Video-to-Video Synthesis
Existing approaches struggle to stay consistent with previously generated frames because they lack knowledge of the 3D world being rendered and generate each frame based only on the past few frames.
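One way to give the generator that knowledge, sketched below under our own assumptions (`unproject`, `render_guidance`, and `generator` are hypothetical placeholders), is to accumulate everything generated so far into a persistent 3D point cloud and re-render it from each new viewpoint as extra conditioning:

```python
def synthesize_world_consistent(semantic_maps, depths, cameras,
                                unproject, render_guidance, generator):
    """Condition each new frame on a re-rendering of the world so far."""
    point_cloud = []  # colored 3D points accumulated from all past frames
    outputs = []
    for seg, depth, cam in zip(semantic_maps, depths, cameras):
        # Show the generator what the already-synthesized world looks
        # like from the current viewpoint.
        guidance = render_guidance(point_cloud, cam)
        frame = generator(seg, guidance)
        outputs.append(frame)
        # Lift the new frame into 3D and add it to the persistent world.
        point_cloud.extend(unproject(frame, depth, cam))
    return outputs
```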
ReenactNet: Real-time Full Head Reenactment
Video-to-video synthesis is a challenging problem that aims to learn a translation function from a sequence of semantic maps to a photo-realistic video depicting the characteristics of a driving video.
Learning Joint Wasserstein Auto-Encoders for Joint Distribution Matching
We study the joint distribution matching problem, which aims to learn bidirectional mappings that match the joint distributions of two domains.
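Roughly (our notation, not the paper's exact formulation): with mappings G: X → Y and F: Y → X, the objective is to make the two induced joint distributions over (x, y) pairs agree, e.g. under a Wasserstein distance W:

```latex
\[
  \min_{G,\,F} \; W\!\bigl( P_{(x,\; G(x))},\; P_{(F(y),\; y)} \bigr),
  \qquad x \sim P_X, \; y \sim P_Y
\]
```

Matching the joints, rather than the two marginals separately, is what couples the forward and backward mappings and pushes them toward mutual consistency.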