Search Results for author: Rafail Fridman

Found 3 papers, 1 paper with code

Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer

no code implementations • 28 Nov 2023 • Danah Yatim, Rafail Fridman, Omer Bar-Tal, Yoni Kasten, Tali Dekel

This loss guides the generation process to preserve the overall motion of the input video while complying with the target object in terms of shape and fine-grained motion traits.
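
The listing does not spell out the loss, but a guidance objective of this kind can be sketched as matching pairwise differences of per-frame diffusion features between the input video and the video being generated, so that motion structure is constrained while appearance is left free. The tensor shapes, the pooling into per-frame features, and the function name below are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def motion_guidance_loss(src_feats: torch.Tensor,
                         gen_feats: torch.Tensor) -> torch.Tensor:
    """Hypothetical pairwise feature-difference loss between two videos.

    src_feats / gen_feats: [T, D] per-frame diffusion features (assumed to
    be spatially pooled) for the input and the currently generated video.
    Matching differences between frame pairs, rather than the features
    themselves, targets motion structure instead of per-frame appearance.
    """
    src_diff = src_feats.unsqueeze(0) - src_feats.unsqueeze(1)  # [T, T, D]
    gen_diff = gen_feats.unsqueeze(0) - gen_feats.unsqueeze(1)  # [T, T, D]
    return F.mse_loss(gen_diff, src_diff)
```

In a zero-shot setting, the gradient of such a loss with respect to the noisy latent could steer each denoising step, classifier-guidance style, without any fine-tuning.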

SceneScape: Text-Driven Consistent Scene Generation

no code implementations • NeurIPS 2023 • Rafail Fridman, Amit Abecasis, Yoni Kasten, Tali Dekel

We present a method for text-driven perpetual view generation -- synthesizing long-term videos of various scenes solely from an input text prompt describing the scene and camera poses.

Depth Estimation • Depth Prediction • +3
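
The entry only names the tasks, but perpetual view generation is commonly structured as an online render-inpaint-fuse loop over a growing 3D scene, which also explains the depth-related tags above. The sketch below shows that control flow; every callable name and signature is a placeholder assumption, not SceneScape's actual API.

```python
def perpetual_view_generation(prompt, poses, generate, inpaint,
                              predict_depth, render, fuse):
    """Hypothetical render-inpaint-fuse loop; all callables are stand-ins:
    generate      - text-to-image model producing the first frame
    inpaint       - text-conditioned inpainting model filling holes
    predict_depth - monocular depth predictor
    render        - renders the scene from a pose, returning the partial
                    image and a mask of disoccluded (empty) pixels
    fuse          - lifts an RGB-D frame into the unified scene geometry
    """
    first = generate(prompt)
    scene = fuse(None, first, predict_depth(first), poses[0])
    frames = [first]
    for pose in poses[1:]:
        partial, holes = render(scene, pose)   # moving the camera exposes holes
        rgb = inpaint(prompt, partial, holes)  # fill them with text-guided content
        scene = fuse(scene, rgb, predict_depth(rgb), pose)
        frames.append(rgb)
    return frames
```

Fusing each new frame back into a single scene representation is what keeps long videos geometrically consistent, rather than letting every frame be generated independently.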

Text2LIVE: Text-Driven Layered Image and Video Editing

1 code implementation • 5 Apr 2022 • Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, Tali Dekel

Given an input image or video and a target text prompt, our goal is to edit the appearance of existing objects (e.g., object's texture) or augment the scene with visual effects (e.g., smoke, fire) in a semantically meaningful manner.

Video Editing
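
As the title suggests, the edit is layered: rather than repainting the input, a generator predicts an RGBA edit layer that is alpha-composited over the untouched frame, keeping changes localized. A minimal sketch of the compositing step, with assumed tensor layouts, could look like this:

```python
import torch

def composite_edit_layer(image: torch.Tensor,
                         edit_rgba: torch.Tensor) -> torch.Tensor:
    """Alpha-composite a predicted RGBA edit layer over the input frame.

    image:     [3, H, W] input frame, values in [0, 1]
    edit_rgba: [4, H, W] predicted edit layer (RGB + opacity), e.g. a new
               texture or a smoke effect; this layout is an assumption.
    Original pixels survive wherever the opacity is low, so the edit
    stays confined to the regions the generator chooses to touch.
    """
    rgb, alpha = edit_rgba[:3], edit_rgba[3:4].clamp(0.0, 1.0)
    return alpha * rgb + (1.0 - alpha) * image
```

In a setup like this, the composite would then be scored against the target text with CLIP-based objectives to train the layer generator per input; that training loop is omitted here.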
