Latent Neural Differential Equations for Video Generation

7 Nov 2020 · Cade Gordon, Natalie Parde

Generative Adversarial Networks have recently shown promise for video generation, building on the success of image generation while also addressing a new challenge: time. Although time was analyzed in some early work, the literature has not kept pace with developments in temporal modeling. We study the use of Neural Differential Equations to model the temporal dynamics of video generation. The Neural Differential Equation paradigm offers several theoretical strengths, including the first continuous representation of time in video generation. To assess these effects, we investigate how changes to the temporal model affect generated video quality. Our results support the use of Neural Differential Equations as a simple drop-in replacement for older temporal generators. While keeping run times similar and decreasing parameter count, we produce a new state-of-the-art model in 64$\times$64 pixel unconditional video generation, with an Inception Score of 15.20.
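To make the core idea concrete, below is a minimal PyTorch sketch of a latent Neural ODE temporal generator: a fixed-step Runge-Kutta integrator evolves a latent state z(t) under learned dynamics, and each intermediate state is decoded into a video frame. This illustrates the general approach of replacing an RNN-style temporal generator with a continuous-time ODE; the module names (`ODEFunc`, `LatentODEVideoGenerator`), the RK4 solver choice, and all sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a latent Neural ODE temporal generator for video.
# Assumptions (not from the paper): module names, RK4 fixed-step solver,
# latent_dim=128, 16 frames, a toy linear frame decoder at 64x64 resolution.
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Parameterizes the latent dynamics dz/dt = f_theta(z)."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, latent_dim),
            nn.Tanh(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, t: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)  # dynamics here are time-invariant; t is unused


def rk4_step(f, t, z, dt):
    """One classical fourth-order Runge-Kutta step for dz/dt = f(t, z)."""
    k1 = f(t, z)
    k2 = f(t + dt / 2, z + dt / 2 * k1)
    k3 = f(t + dt / 2, z + dt / 2 * k2)
    k4 = f(t + dt, z + dt * k3)
    return z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)


class LatentODEVideoGenerator(nn.Module):
    """Integrates z(t) from a sampled z0 and decodes each state to a frame."""

    def __init__(self, latent_dim: int = 128, n_frames: int = 16):
        super().__init__()
        self.n_frames = n_frames
        self.ode_func = ODEFunc(latent_dim)
        # Stand-in image generator: maps a latent code to a 64x64 RGB frame.
        self.frame_decoder = nn.Sequential(
            nn.Linear(latent_dim, 3 * 64 * 64),
            nn.Tanh(),
        )

    def forward(self, z0: torch.Tensor) -> torch.Tensor:
        zs, z = [z0], z0
        dt = 1.0 / (self.n_frames - 1)  # continuous time on [0, 1]
        for i in range(self.n_frames - 1):
            z = rk4_step(self.ode_func, torch.tensor(i * dt), z, dt)
            zs.append(z)
        frames = [self.frame_decoder(z).view(-1, 3, 64, 64) for z in zs]
        return torch.stack(frames, dim=1)  # (batch, time, 3, 64, 64)


if __name__ == "__main__":
    gen = LatentODEVideoGenerator()
    z0 = torch.randn(4, 128)  # one latent starting point per video
    video = gen(z0)
    print(video.shape)  # torch.Size([4, 16, 3, 64, 64])
```

Because the solver evaluates the same learned dynamics at every step, time is represented continuously: sampling intermediate time points yields frames between the original grid, which discrete RNN-style temporal generators cannot do.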


Datasets

UCF-101
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Video Generation | UCF-101 (16 frames, 128x128, Unconditional) | TGANv2-ODE | Inception Score | 21.02 | #6 |
| Video Generation | UCF-101 (16 frames, 64x64, Unconditional) | TGAN-ODE | Inception Score | 15.20 | #2 |
| Video Generation | UCF-101 (16 frames, 64x64, Unconditional) | TGAN-ODE | FID | 26512 | #3 |
| Video Generation | UCF-101 (16 frames, Unconditional, Single GPU) | TGANv2-ODE | Inception Score | 21.02 | #3 |
