About

Learning a mapping function from an input source video to an output video.

(Image credit: vid2vid)


Greatest papers with code

Video-to-Video Synthesis

NeurIPS 2018 NVIDIA/vid2vid

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video.

SEMANTIC SEGMENTATION VIDEO PREDICTION VIDEO-TO-VIDEO SYNTHESIS
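The core idea above — a learned mapping that turns each source frame (e.g. a segmentation mask) into an output frame while conditioning on previously generated frames for temporal consistency — can be sketched as a toy. The real vid2vid model is a conditional GAN; the fixed random linear "generator" and all shapes below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned generator: a fixed random linear map from a
# flattened (source frame, previous output) pair to an output frame.
H = W = 8                                   # tiny frame resolution for the sketch
D = H * W
G = rng.standard_normal((D, 2 * D)) * 0.01  # hypothetical "generator" weights

def synthesize(source_frames):
    """Map a source video (e.g. segmentation masks) to an output video,
    conditioning each frame on the previously generated frame so the
    output is temporally coherent rather than per-frame independent."""
    prev = np.zeros(D)
    out = []
    for s in source_frames:
        x = np.concatenate([s.ravel(), prev])
        frame = np.tanh(G @ x)              # bounded output frame
        out.append(frame.reshape(H, W))
        prev = frame
    return np.stack(out)

source = rng.integers(0, 5, size=(4, H, W)).astype(float)  # 4 mask frames
video = synthesize(source)
print(video.shape)  # (4, 8, 8)
```

The sequential conditioning on `prev` is the point of the sketch: dropping it would reduce the problem to independent image-to-image translation per frame.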

Few-shot Video-to-Video Synthesis

NeurIPS 2019 NVlabs/few-shot-vid2vid

To address the limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time.

VIDEO-TO-VIDEO SYNTHESIS
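The few-shot idea — deriving subject-specific generator parameters from a handful of example images at test time, rather than retraining per subject — can be illustrated with a toy "weight generator". Everything here (the linear embedding, the shapes, the function names) is a hypothetical sketch, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 16                                  # flattened frame size (toy)
# Hypothetical weight generator: maps the mean embedding of a few example
# images of the target to per-subject generator weights, echoing the idea
# of modulating the synthesis network at test time instead of retraining.
E = rng.standard_normal((D * D, D)) * 0.01

def adapt(example_images):
    """Produce subject-specific generator weights from a few examples."""
    emb = np.mean([img.ravel() for img in example_images], axis=0)
    return (E @ emb).reshape(D, D)

def synthesize_frame(weights, source_frame):
    """Apply the adapted generator to one source frame."""
    return np.tanh(weights @ source_frame.ravel())

examples = [rng.standard_normal((4, 4)) for _ in range(3)]  # the "few shots"
W_subj = adapt(examples)
out = synthesize_frame(W_subj, rng.standard_normal((4, 4)))
print(out.shape)  # (16,)
```

A new subject only requires calling `adapt` on its example images; the shared weight generator `E` is what would be trained across many subjects.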

Deep Blind Video Decaptioning by Temporal Aggregation and Recurrence

CVPR 2019 shwoo93/video_decaptioning

Blind video decaptioning is a problem of automatically removing text overlays and inpainting the occluded parts in videos without any input masks.

VIDEO DENOISING VIDEO INPAINTING VIDEO-TO-VIDEO SYNTHESIS
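The temporal-aggregation intuition behind mask-free decaptioning — an overlay that occludes only some frames deviates from the per-pixel temporal statistics of the clip, so it can be located and filled without an input mask — can be shown with a static-scene toy. The paper's method is a learned encoder-decoder with recurrence; the median-based detection below is only an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Static scene repeated over T frames, with a "caption" burned into
# frames 1-2 only, so the per-pixel temporal median recovers the scene.
T, H, W = 5, 6, 6
clean = np.tile(rng.uniform(0, 1, size=(H, W)), (T, 1, 1))
video = clean.copy()
video[1:3, 2, 1:5] = 2.0                    # text overlay in frames 1-2

median = np.median(video, axis=0)           # temporal aggregation
mask = np.abs(video - median) > 0.5         # detect occluded pixels (no input mask)
restored = np.where(mask, median[None], video)

print(np.allclose(restored, clean))  # True
```

On real footage the scene moves and overlays persist longer, which is why the paper replaces the median with a learned, recurrent aggregation over frames.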

GANs in computer vision ebook

ebook 2020 The-AI-Summer/GANs-in-Computer-Vision

We hope that this series will provide you with a broad overview of the field, so that you will not need to read all the literature yourself, regardless of your background in GANs.

CONDITIONAL IMAGE GENERATION IMAGE-TO-IMAGE TRANSLATION VIDEO GENERATION VIDEO-TO-VIDEO SYNTHESIS

Compositional Video Synthesis with Action Graphs

27 Jun 2020 roeiherz/AG2Video

Videos of actions are complex signals, containing rich compositional structure.

VIDEO GENERATION VIDEO PREDICTION VIDEO-TO-VIDEO SYNTHESIS