Unconditional Video Generation
8 papers with code • 1 benchmark • 1 dataset
Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101.
We propose a new object-centric video prediction algorithm based on the deep latent particle (DLP) representation.
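To make the object-centric idea concrete, here is a minimal sketch of a predictor operating on a set of latent particles, assuming each particle carries an (x, y) position plus a small appearance vector. The `ParticlePredictor` name, the transformer-based interaction module, and all sizes are illustrative assumptions, not the paper's DLP implementation.

```python
# Minimal sketch of object-centric prediction over a set of latent
# particles. Each particle = (x, y) position + appearance features.
# All names and sizes are hypothetical, not the paper's DLP code.
import torch
import torch.nn as nn

class ParticlePredictor(nn.Module):
    def __init__(self, appearance_dim=16, hidden_dim=64):
        super().__init__()
        particle_dim = 2 + appearance_dim  # (x, y) + appearance vector
        # Self-attention lets particles attend to one another, so
        # object interactions can inform the next-step prediction.
        layer = nn.TransformerEncoderLayer(
            d_model=particle_dim, nhead=2, dim_feedforward=hidden_dim,
            batch_first=True)
        self.interaction = nn.TransformerEncoder(layer, num_layers=2)
        self.delta = nn.Linear(particle_dim, particle_dim)

    def forward(self, particles):
        # particles: (batch, n_particles, 2 + appearance_dim)
        ctx = self.interaction(particles)
        return particles + self.delta(ctx)  # residual next-step update

particles = torch.randn(4, 8, 18)       # 4 frames, 8 particles each
next_particles = ParticlePredictor()(particles)
print(next_particles.shape)             # torch.Size([4, 8, 18])
```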
Generative Adversarial Networks have recently shown promise for video generation, building on the success of image generation while also addressing a new challenge: time.
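As an illustration of how a GAN generator can treat time jointly with space, the sketch below decodes a single latent code into a short clip with 3D transposed convolutions, so the temporal axis is generated alongside the spatial axes. The architecture and layer sizes are assumptions for demonstration; no specific paper's generator is reproduced.

```python
# Minimal sketch of a video GAN generator: one latent code is decoded
# into a short clip with 3D transposed convolutions. Layer sizes are
# illustrative assumptions, not any particular paper's architecture.
import torch
import torch.nn as nn

class VideoGenerator(nn.Module):
    def __init__(self, z_dim=100, base_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            # z -> (C, T=4, H=4, W=4)
            nn.ConvTranspose3d(z_dim, base_channels * 4, kernel_size=4),
            nn.BatchNorm3d(base_channels * 4), nn.ReLU(inplace=True),
            # each block doubles time and space together: 4 -> 8 -> 16 -> 32
            nn.ConvTranspose3d(base_channels * 4, base_channels * 2, 4, stride=2, padding=1),
            nn.BatchNorm3d(base_channels * 2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.BatchNorm3d(base_channels), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(base_channels, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, z):
        # z: (batch, z_dim) -> video: (batch, 3, T=32, H=32, W=32)
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

video = VideoGenerator()(torch.randn(2, 100))
print(video.shape)  # torch.Size([2, 3, 32, 32, 32])
```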
Large-scale datasets have played an indispensable role in the recent success of face generation and editing, and have significantly facilitated advances in emerging research fields.
We present MotionVideoGAN, a novel video generator that synthesizes videos based on the motion space learned by pre-trained image-pair generators.
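A hedged sketch of the general motion-space idea: starting from one latent code of a pre-trained image generator, repeatedly stepping along learned motion directions yields latents for successive frames. The `synthesize_clip` helper, the stand-in generator, and the random directions below are hypothetical, not MotionVideoGAN's actual components.

```python
# Sketch of latent "motion space" traversal: small moves along learned
# motion directions produce latents for successive frames. Everything
# here is a stand-in, not MotionVideoGAN's implementation.
import torch

def synthesize_clip(generator, z0, motion_dirs, step=0.1, n_frames=16):
    """Walk the latent space along motion directions to build a clip."""
    z, frames = z0, []
    for t in range(n_frames):
        frames.append(generator(z))
        # one motion direction per step (cycled deterministically here)
        z = z + step * motion_dirs[t % len(motion_dirs)]
    return torch.stack(frames, dim=1)  # (batch, n_frames, ...)

toy_gen = torch.nn.Linear(8, 3)  # stand-in for a pre-trained generator
dirs = [torch.randn(8) for _ in range(2)]  # "learned" directions (random here)
clip = synthesize_clip(toy_gen, torch.randn(4, 8), dirs)
print(clip.shape)  # torch.Size([4, 16, 3])
```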
We construct a local-global context guidance strategy that captures a multi-perceptual embedding of the past fragment to improve the consistency of future predictions.
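One plausible reading of such a strategy is sketched below: a "local" feature summarizes the most recent frame while a "global" feature pools over the whole past fragment, and the two are fused into a single conditioning vector for the predictor. The `LocalGlobalContext` module and its sizes are assumptions, not the paper's code.

```python
# Sketch of a local-global context idea: a local feature from the most
# recent frame plus a global feature pooled over the whole past fragment,
# fused into one conditioning vector. An interpretation, not the paper's code.
import torch
import torch.nn as nn

class LocalGlobalContext(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.frame_enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, past_frames):
        # past_frames: (batch, T, 3, H, W)
        b, t = past_frames.shape[:2]
        feats = self.frame_enc(past_frames.flatten(0, 1)).view(b, t, -1)
        local_ctx = feats[:, -1]        # most recent frame only
        global_ctx = feats.mean(dim=1)  # pooled over the whole fragment
        return self.fuse(torch.cat([local_ctx, global_ctx], dim=-1))

ctx = LocalGlobalContext()(torch.randn(2, 8, 3, 32, 32))
print(ctx.shape)  # torch.Size([2, 128]), a conditioning vector
```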
In this paper, we introduce a novel motion generator design that uses a learning-based inversion network for GANs.
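For context, a learning-based inversion network is typically an encoder trained to map images back to latent codes such that a frozen, pre-trained generator reconstructs the input. The sketch below shows this generic setup with a toy stand-in generator; names and sizes are illustrative and not taken from the paper.

```python
# Generic sketch of learning-based GAN inversion: an encoder maps an
# image to a latent code, and a frozen pre-trained generator must
# reconstruct the input. The toy generator is a stand-in.
import torch
import torch.nn as nn

class Inverter(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim))

    def forward(self, image):
        return self.encoder(image)

def inversion_step(inverter, generator, images, optimizer):
    """One training step: encode, re-generate, minimize reconstruction error."""
    z = inverter(images)
    recon = generator(z)  # generator weights stay frozen
    loss = nn.functional.mse_loss(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

toy_generator = nn.Sequential(  # stand-in for a frozen pre-trained GAN
    nn.Linear(100, 3 * 16 * 16), nn.Tanh(), nn.Unflatten(1, (3, 16, 16)))
for p in toy_generator.parameters():
    p.requires_grad_(False)

inverter = Inverter()
opt = torch.optim.Adam(inverter.parameters(), lr=1e-4)
images = torch.randn(8, 3, 16, 16).clamp(-1, 1)
print(inversion_step(inverter, toy_generator, images, opt))
```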