VideoPoet: A Large Language Model for Zero-Shot Video Generation
We present VideoPoet, a language model capable of synthesizing high-quality video with matching audio from a wide variety of conditioning signals. VideoPoet employs a decoder-only transformer architecture that processes multimodal inputs, including images, videos, text, and audio. Its training protocol follows that of large language models (LLMs) and consists of two stages: pretraining and task-specific adaptation. During pretraining, VideoPoet incorporates a mixture of multimodal generative objectives within an autoregressive transformer framework. The pretrained LLM serves as a foundation that can be adapted to a range of video generation tasks. We present empirical results demonstrating the model's state-of-the-art capabilities in zero-shot video generation, in particular its ability to generate high-fidelity motion. Project page: http://sites.research.google/videopoet/
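To make the architecture description concrete, below is a minimal sketch of the core idea: a decoder-only transformer over a shared vocabulary of discrete tokens from all modalities, trained with next-token prediction. This is not the authors' implementation; the class name `MultimodalLM`, the vocabulary size, and all dimensions are illustrative placeholders, and the tokenizers that map video and audio into discrete tokens are assumed to exist and are not shown.

```python
# Minimal sketch (assumptions, not the authors' code): a decoder-only
# transformer that treats text/image/video/audio as one stream of discrete
# tokens and is trained autoregressively with next-token cross-entropy.
import torch
import torch.nn as nn

class MultimodalLM(nn.Module):  # hypothetical name
    def __init__(self, vocab_size=65_536, d_model=512, n_heads=8,
                 n_layers=6, max_len=2048):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)   # shared token table
        self.pos = nn.Embedding(max_len, d_model)      # learned positions
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, 4 * d_model,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)     # next-token logits

    def forward(self, ids):
        B, T = ids.shape
        x = self.tok(ids) + self.pos(torch.arange(T, device=ids.device))
        # Causal mask makes this a decoder-only (autoregressive) model.
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(ids.device)
        return self.head(self.blocks(x, mask=mask))

# Pretraining objective: predict token t+1 from tokens <= t over the mixed
# stream (conditioning tokens followed by target video/audio tokens).
model = MultimodalLM()
ids = torch.randint(0, 65_536, (2, 128))  # placeholder token ids
logits = model(ids[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))
```

Under this reading, task-specific adaptation would reuse the same weights and simply change the composition of the conditioning tokens, which is what lets one pretrained model serve many video generation tasks.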
Results
Task | Dataset | Model | Metric | Value | Global Rank
---|---|---|---|---|---
Text-to-Video Generation | MSR-VTT | VideoPoet | CLIPSIM | 0.3123 | #2
Text-to-Video Generation | MSR-VTT | VideoPoet | FVD | 213 | #4
Video Generation | UCF-101 | VideoPoet (text-conditional) | Inception Score | 38.44 | #21
Video Generation | UCF-101 | VideoPoet (text-conditional) | FVD16 | 355 | #28
Text-to-Video Generation | UCF-101 | VideoPoet | FVD16 | 355 | #6