Search Results for author: Omer Tov

Found 10 papers, 5 papers with code

TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space

no code implementations • 21 Jan 2025 • Daniel Garibi, Shahar Yadin, Roni Paiss, Omer Tov, Shiran Zada, Ariel Ephrat, Tomer Michaeli, Inbar Mosseri, Tali Dekel

Building on this insight, we devise an optimization-based framework that takes as input an image and a text description, and finds for each word a distinct direction in the modulation space.
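The per-word optimization described above can be sketched in a toy form. The squared-error objective, the learning rate, and the fixed linear map `W` standing in for the frozen text-to-image backbone are all illustrative assumptions of mine, not details from the paper:

```python
import numpy as np

def fit_word_directions(base_mod, target, n_words, W, steps=500, lr=0.05, seed=0):
    """Hypothetical sketch: learn one direction per word in the modulation
    space so that decoding the base code shifted by the summed directions
    reconstructs the target. A fixed linear map W stands in for the frozen
    generator; real usage would backpropagate through the diffusion model."""
    rng = np.random.default_rng(seed)
    dirs = 0.01 * rng.standard_normal((n_words, base_mod.size))
    for _ in range(steps):
        mod = base_mod + dirs.sum(axis=0)     # modulation shared by all words
        err = W @ mod - target                # reconstruction residual
        grad = W.T @ err                      # gradient w.r.t. the summed shift
        dirs -= lr * grad[None, :] / n_words  # split the update across words
    return dirs
```

Each returned row is a distinct direction associated with one word of the text description, and their sum reconstructs the input image in this toy setting.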

Still-Moving: Customized Video Generation without Customized Video Data

no code implementations • 11 Jul 2024 • Hila Chefer, Shiran Zada, Roni Paiss, Ariel Ephrat, Omer Tov, Michael Rubinstein, Lior Wolf, Tali Dekel, Tomer Michaeli, Inbar Mosseri

We assume access to a customized version of the T2I model, trained only on still image data (e.g., using DreamBooth or StyleDrop).

Video Generation

Inflation with Diffusion: Efficient Temporal Adaptation for Text-to-Video Super-Resolution

no code implementations • 18 Jan 2024 • Xin Yuan, Jinoo Baek, Keyang Xu, Omer Tov, Hongliang Fei

We propose an efficient diffusion-based text-to-video super-resolution (SR) tuning approach that leverages the readily learned capacity of a pixel-level image diffusion model to capture spatial information for video generation.

Video Generation • Video Super-Resolution

Teaching CLIP to Count to Ten

1 code implementation • ICCV 2023 • Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, Tali Dekel

Our counting loss is deployed over automatically created counterfactual examples, each consisting of an image and a caption containing an incorrect object count.

counterfactual • Image Retrieval • +4
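The counting objective above can be sketched as a two-way contrastive loss over the correct caption and its counterfactual. The cosine-similarity form and temperature value below are my assumptions, not details taken from the paper:

```python
import numpy as np

def counting_loss(img_emb, correct_emb, counterfactual_emb, temperature=0.07):
    """Hypothetical sketch: treat the caption with the correct object count
    as the positive and the counterfactual caption (incorrect count) as the
    negative, and apply a contrastive cross-entropy over their similarities."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    s_pos = cos(img_emb, correct_emb) / temperature
    s_neg = cos(img_emb, counterfactual_emb) / temperature
    # -log softmax probability assigned to the correct-count caption
    return float(np.log(np.exp(s_pos) + np.exp(s_neg)) - s_pos)
```

The loss approaches zero when the image embedding aligns with the correct-count caption and grows when it aligns with the counterfactual one.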

Imagic: Text-Based Real Image Editing with Diffusion Models

no code implementations • CVPR 2023 • Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, Michal Irani

In this paper, we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image.

Style Transfer

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN

no code implementations • 28 Feb 2022 • Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, Daniel Cohen-Or

Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks.

Image Generation

Self-Distilled StyleGAN: Towards Generation from Internet Photos

2 code implementations • 24 Feb 2022 • Ron Mokady, Michal Yarom, Omer Tov, Oran Lang, Daniel Cohen-Or, Tali Dekel, Michal Irani, Inbar Mosseri

To meet these challenges, we propose a StyleGAN-based self-distillation approach with two main components: (i) generative self-filtering of the dataset to eliminate outlier images and produce an adequate training set, and (ii) perceptual clustering of the generated images to detect the inherent data modalities, which are then employed to improve StyleGAN's "truncation trick" in the image synthesis process.

Image Generation
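The self-filtering component can be sketched in a minimal form. The use of k-means, a fixed keep ratio, and plain Euclidean distance in embedding space are illustrative assumptions of mine, not the paper's exact recipe:

```python
import numpy as np

def self_filter(embeddings, k=3, keep_ratio=0.8, iters=10, seed=0):
    """Hypothetical sketch of generative self-filtering: cluster perceptual
    embeddings of generated images with k-means and keep the samples
    closest to their assigned centroid, discarding presumed outliers."""
    embeddings = embeddings.astype(float)
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centroid, then update centroids
        d = np.linalg.norm(embeddings[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # keep the fraction of samples nearest to their centroid
    dist = np.linalg.norm(embeddings - centers[labels], axis=-1)
    keep = np.argsort(dist)[: int(len(embeddings) * keep_ratio)]
    return np.sort(keep)
```

The returned indices would form the filtered training set; the same cluster assignments could then serve as the modalities used for the improved truncation step.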

Designing an Encoder for StyleGAN Image Manipulation

8 code implementations • 4 Feb 2021 • Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or

We then suggest two principles for designing encoders in a manner that allows one to control the proximity of the inversions to regions that StyleGAN was originally trained on.

Image Manipulation
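One way to read the proximity principle is as a "base code plus regularized per-layer offsets" design. The names, shapes, and L2 penalty below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def assemble_latent(base, offsets, delta_weight=1e-2):
    """Hypothetical sketch: the encoder predicts one base code w plus small
    per-layer offsets; an L2 penalty on the offsets keeps the assembled
    inversion close to the regions StyleGAN was originally trained on."""
    codes = base[None, :] + offsets              # one code per StyleGAN layer
    reg = delta_weight * float((offsets ** 2).sum())
    return codes, reg
```

During training, `reg` would be added to the reconstruction objective, trading inversion accuracy against editability of the resulting latent codes.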
