Search Results for author: Omer Bar-Tal

Found 8 papers, 4 papers with code

Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer

no code implementations • 28 Nov 2023 • Danah Yatim, Rafail Fridman, Omer Bar-Tal, Yoni Kasten, Tali Dekel

This loss guides the generation process to preserve the overall motion of the input video while complying with the target object in terms of shape and fine-grained motion traits.
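
The listing mentions a guidance loss but does not spell it out. As a rough, heavily hedged illustration (not the authors' actual objective), a motion-preserving guidance term could compare frame-to-frame changes of space-time diffusion features between the input and generated videos; the feature extractor, tensor shapes, and loss below are all assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def motion_guidance_loss(src_feats: torch.Tensor, gen_feats: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: encourage the generated video to follow the source
    video's overall motion by matching how its features change over time.

    src_feats, gen_feats: space-time diffusion features of shape
    (T, N, D) = (frames, tokens per frame, feature dim). These are assumed
    inputs; the actual features and objective in the paper may differ.
    """
    # Frame-to-frame feature differences as a crude proxy for motion.
    src_motion = src_feats[1:] - src_feats[:-1]   # (T-1, N, D)
    gen_motion = gen_feats[1:] - gen_feats[:-1]
    # Penalize deviation of the generated motion from the source motion.
    return F.mse_loss(gen_motion, src_motion)

# Usage (random tensors stand in for real diffusion features):
src = torch.randn(16, 256, 768)
gen = torch.randn(16, 256, 768, requires_grad=True)
loss = motion_guidance_loss(src, gen)
loss.backward()
```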

Disentangling Structure and Appearance in ViT Feature Space

no code implementations • 20 Nov 2023 • Narek Tumanyan, Omer Bar-Tal, Shir Amir, Shai Bagon, Tali Dekel

Specifically, our goal is to generate an image in which objects in a source structure image are "painted" with the visual appearance of their semantically related objects in a target appearance image.

Semantic Segmentation

TokenFlow: Consistent Diffusion Features for Consistent Video Editing

1 code implementation • 19 Jul 2023 • Michal Geyer, Omer Bar-Tal, Shai Bagon, Tali Dekel

In this work, we present a framework that harnesses the power of a text-to-image diffusion model for the task of text-driven video editing.

Video Editing

MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation

2 code implementations • 16 Feb 2023 • Omer Bar-Tal, Lior Yariv, Yaron Lipman, Tali Dekel

In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning.

Text-to-Image Generation
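
The entry above describes fusing several denoising paths from a single pre-trained model. A minimal sketch of that fusion step, assuming overlapping rectangular crops on a latent canvas and uniform weighting (the paper formulates a least-squares objective whose closed-form solution in this setting reduces to per-pixel averaging of the overlapping predictions); the function name, shapes, and crop layout are illustrative, not the paper's API.

```python
import torch

def fuse_diffusion_paths(canvas_shape, crops, crop_coords):
    """Hypothetical sketch of the fusion step: per-crop denoising results are
    reconciled into one canvas by averaging, per pixel, over all crops that
    cover that pixel.

    canvas_shape: (C, H, W) of the full latent/image canvas.
    crops:        list of tensors (C, h, w), each a denoised prediction for one crop.
    crop_coords:  list of (top, left) positions of the crops on the canvas.
    """
    canvas = torch.zeros(canvas_shape)
    counts = torch.zeros(canvas_shape)
    for crop, (top, left) in zip(crops, crop_coords):
        _, h, w = crop.shape
        canvas[:, top:top + h, left:left + w] += crop
        counts[:, top:top + h, left:left + w] += 1.0
    # Avoid division by zero for any uncovered pixels.
    return canvas / counts.clamp(min=1.0)

# Usage with dummy overlapping crops on a 3x64x64 canvas:
crops = [torch.randn(3, 48, 48) for _ in range(4)]
coords = [(0, 0), (0, 16), (16, 0), (16, 16)]
fused = fuse_diffusion_paths((3, 64, 64), crops, coords)
```

In practice this fusion would be applied at every denoising step, which is what keeps the per-crop generations consistent on the shared canvas.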

Text2LIVE: Text-Driven Layered Image and Video Editing

1 code implementation • 5 Apr 2022 • Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, Tali Dekel

Given an input image or video and a target text prompt, our goal is to edit the appearance of existing objects (e.g., an object's texture) or augment the scene with visual effects (e.g., smoke, fire) in a semantically meaningful manner.

Video Editing

Splicing ViT Features for Semantic Appearance Transfer

1 code implementation • CVPR 2022 • Narek Tumanyan, Omer Bar-Tal, Shai Bagon, Tali Dekel

Specifically, our goal is to generate an image in which objects in a source structure image are "painted" with the visual appearance of their semantically related objects in a target appearance image.

Image Generation • Style Transfer
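
The excerpt above states the goal (transferring appearance between semantically related objects) rather than the mechanism. As a hedged sketch of one way to split structure and appearance in ViT feature space, the objective below matches the generated image's key self-similarity to the structure image and its [CLS] token to the appearance image; feature extraction from a pretrained ViT (e.g. DINO) is assumed to happen elsewhere, and the names and weighting are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def self_similarity(keys: torch.Tensor) -> torch.Tensor:
    """Cosine self-similarity matrix of ViT key tokens; keys: (N, D) -> (N, N)."""
    keys = F.normalize(keys, dim=-1)
    return keys @ keys.T

def splice_losses(gen_keys, gen_cls, struct_keys, app_cls):
    """Hedged sketch of a structure/appearance objective in ViT feature space:
    structure is compared via key self-similarity, appearance via the
    global [CLS] token. All inputs are assumed precomputed ViT features."""
    structure_loss = F.mse_loss(self_similarity(gen_keys),
                                self_similarity(struct_keys))
    appearance_loss = F.mse_loss(gen_cls, app_cls)
    return structure_loss, appearance_loss

# Usage with random stand-ins for real ViT features (N tokens, dim D):
gen_keys, struct_keys = torch.randn(196, 384), torch.randn(196, 384)
gen_cls, app_cls = torch.randn(384), torch.randn(384)
s_loss, a_loss = splice_losses(gen_keys, gen_cls, struct_keys, app_cls)
```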
