no code implementations • 5 Feb 2025 • Danah Yatim, Rafail Fridman, Omer Bar-Tal, Tali Dekel
We present a method for augmenting real-world videos with newly generated dynamic content.
1 code implementation • CVPR 2024 • Danah Yatim, Rafail Fridman, Omer Bar-Tal, Yoni Kasten, Tali Dekel
This loss guides the generation process to preserve the overall motion of the input video while adhering to the target object's shape and fine-grained motion traits.
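For illustration only, the snippet below is a minimal sketch of what a motion-preserving guidance term of this kind could look like; the feature extractor, tensor shapes, spatial pooling, and frame-difference formulation are assumptions made for the example and are not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of a motion-preservation guidance loss (not the authors'
# exact formulation). Assumes per-frame diffusion features of shape
# (frames, channels, H, W) extracted from both the input and generated videos.

def motion_guidance_loss(src_feats: torch.Tensor, gen_feats: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of the generated video's coarse motion from the source.

    src_feats, gen_feats: (F, C, H, W) features of the input video and the
    currently generated video. Comparing spatially pooled features constrains
    only the global motion trajectory, leaving room for shape changes.
    """
    # Spatial pooling keeps a per-frame summary of where/what is moving.
    src_motion = src_feats.mean(dim=(-2, -1))   # (F, C)
    gen_motion = gen_feats.mean(dim=(-2, -1))   # (F, C)

    # Match frame-to-frame differences rather than absolute features,
    # so the loss tracks motion rather than static appearance.
    src_delta = src_motion[1:] - src_motion[:-1]
    gen_delta = gen_motion[1:] - gen_motion[:-1]
    return F.mse_loss(gen_delta, src_delta)
```

In a guidance-based setup, the gradient of such a loss with respect to the diffusion latent could be used to steer each denoising step, classifier-guidance style; again, this is a generic sketch rather than the method described in the paper.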
no code implementations • NeurIPS 2023 • Rafail Fridman, Amit Abecasis, Yoni Kasten, Tali Dekel
We present a method for text-driven perpetual view generation -- synthesizing long-term videos of various scenes solely from an input text prompt describing the scene and camera poses.
1 code implementation • 5 Apr 2022 • Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, Tali Dekel
Given an input image or video and a target text prompt, our goal is to edit the appearance of existing objects (e.g., an object's texture) or augment the scene with visual effects (e.g., smoke, fire) in a semantically meaningful manner.