Search Results for author: Or Patashnik

Found 20 papers, 12 papers with code

Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation

no code implementations • 25 Mar 2024 • Omer Dahary, Or Patashnik, Kfir Aberman, Daniel Cohen-Or

Text-to-image diffusion models have an unprecedented ability to generate diverse and high-quality images.

Denoising, Text-to-Image Generation

ReNoise: Real Image Inversion Through Iterative Noising

no code implementations • 21 Mar 2024 • Daniel Garibi, Or Patashnik, Andrey Voynov, Hadar Averbuch-Elor, Daniel Cohen-Or

However, applying these methods to real images necessitates the inversion of the images into the domain of the pretrained diffusion model.

Denoising, Image Manipulation
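
For context, "inversion" here means recovering a noise trajectory that the pretrained diffusion model can denoise back into the given real image. The sketch below shows a plain DDIM-style inversion loop with a stand-in noise predictor; it illustrates the general setup this line of work starts from, not the ReNoise algorithm itself.

```python
# Illustrative DDIM-style inversion loop (NOT the ReNoise algorithm).
# `eps_model` is a stand-in for a pretrained noise predictor; the scheduler
# constants and shapes are simplified assumptions for the sketch.
import torch

def ddim_invert(x0, eps_model, alphas_cumprod, num_steps=50):
    """Map a real image x0 to a latent x_T by running the sampler in reverse."""
    x = x0
    timesteps = torch.linspace(0, len(alphas_cumprod) - 1, num_steps).long()
    for t_prev, t in zip(timesteps[:-1], timesteps[1:]):
        a_prev, a_t = alphas_cumprod[t_prev], alphas_cumprod[t]
        eps = eps_model(x, t_prev)                          # predicted noise at the earlier step
        x0_pred = (x - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
        x = a_t.sqrt() * x0_pred + (1 - a_t).sqrt() * eps   # re-noise toward step t
    return x                                                # approximate x_T in the model's noise domain

# Toy usage with dummy components, just to show the call pattern.
eps_model = lambda x, t: torch.zeros_like(x)                # stand-in noise predictor
alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)
x0 = torch.randn(1, 3, 64, 64)                              # "real image" placeholder
x_T = ddim_invert(x0, eps_model, alphas_cumprod)
```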

Consolidating Attention Features for Multi-view Image Editing

no code implementations • 22 Feb 2024 • Or Patashnik, Rinon Gal, Daniel Cohen-Or, Jun-Yan Zhu, Fernando de la Torre

In this work, we focus on spatial control-based geometric manipulations and introduce a method to consolidate the editing process across various views.

CLiC: Concept Learning in Context

no code implementations • 28 Nov 2023 • Mehdi Safaee, Aryan Mikaeili, Or Patashnik, Daniel Cohen-Or, Ali Mahdavi-Amiri

This paper addresses the challenge of learning a local visual pattern of an object from one image, and generating images depicting objects with that pattern.

Object

Cross-Image Attention for Zero-Shot Appearance Transfer

no code implementations • 6 Nov 2023 • Yuval Alaluf, Daniel Garibi, Or Patashnik, Hadar Averbuch-Elor, Daniel Cohen-Or

Recent advancements in text-to-image generative models have demonstrated a remarkable ability to capture a deep semantic understanding of images.

Denoising

Noise-Free Score Distillation

no code implementations • 26 Oct 2023 • Oren Katzir, Or Patashnik, Daniel Cohen-Or, Dani Lischinski

Score Distillation Sampling (SDS) has emerged as the de facto approach for text-to-content generation in non-image domains.
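
For readers who have not seen SDS written out: in its standard form, introduced in DreamFusion (Poole et al., 2022), the parameters θ of the generated content x = g(θ) are updated with the gradient below (notation is the usual one, not taken from this paper's text).

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right],
\qquad
x_t = \sqrt{\bar{\alpha}_t}\, x + \sqrt{1-\bar{\alpha}_t}\,\epsilon .
```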

Localizing Object-level Shape Variations with Text-to-Image Diffusion Models

1 code implementation • ICCV 2023 • Or Patashnik, Daniel Garibi, Idan Azuri, Hadar Averbuch-Elor, Daniel Cohen-Or

In this paper, we present a technique to generate a collection of images that depicts variations in the shape of a specific object, enabling an object-level shape exploration process.

Denoising, Object, +1 more

An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

7 code implementations • 2 Aug 2022 • Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes.

Text-to-Image Generation
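
At a high level, textual inversion learns one new token embedding against the frozen text-to-image model's denoising loss, leaving every pretrained weight untouched. The sketch below keeps only that optimization structure; all modules are dummy stand-ins for the pretrained text encoder and diffusion U-Net, so it is illustrative rather than the released implementation.

```python
# Minimal sketch of the textual-inversion idea: learn one embedding vector for a
# placeholder token (e.g. "S*") while every pretrained weight stays frozen.
import torch
import torch.nn as nn

embed_dim = 768
v_star = nn.Parameter(torch.randn(embed_dim) * 0.01)   # the only trainable tensor

text_encoder = nn.Linear(embed_dim, embed_dim)          # stand-in for a frozen text encoder
denoiser = nn.Linear(2 * embed_dim, embed_dim)          # stand-in for a frozen diffusion U-Net
for p in list(text_encoder.parameters()) + list(denoiser.parameters()):
    p.requires_grad_(False)

opt = torch.optim.Adam([v_star], lr=5e-3)
for step in range(100):
    cond = text_encoder(v_star)                         # condition, e.g. "a photo of S*"
    latent = torch.randn(embed_dim)                     # stand-in for an encoded training image
    noise = torch.randn(embed_dim)
    pred = denoiser(torch.cat([latent + noise, cond]))  # toy conditional noise prediction
    loss = ((pred - noise) ** 2).mean()                 # standard epsilon-prediction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```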

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN

no code implementations • 28 Feb 2022 • Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, Daniel Cohen-Or

Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks.

Image Generation

FEAT: Face Editing with Attention

no code implementations • 6 Feb 2022 • Xianxu Hou, Linlin Shen, Or Patashnik, Daniel Cohen-Or, Hui Huang

In this paper, we build on the StyleGAN generator, and present a method that explicitly encourages face manipulation to focus on the intended regions by incorporating learned attention maps.

Disentanglement
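
The excerpt above describes confining an edit to the intended region with a learned attention map. A minimal way to picture that (names and shapes are assumptions, not the paper's architecture) is blending edited and original features through the map:

```python
# Toy illustration of attention-masked editing: the learned map `attn` in [0, 1]
# decides where edited features replace the original ones. This mirrors the
# described idea only; it is not FEAT's actual architecture.
import torch

def masked_edit(original_feats, edited_feats, attn):
    """Blend features so the edit only affects regions where attn is high."""
    return attn * edited_feats + (1.0 - attn) * original_feats

feats = torch.randn(1, 512, 16, 16)               # features from the unedited path
edited = feats + 0.5 * torch.randn_like(feats)    # features after a latent-space edit
attn = torch.sigmoid(torch.randn(1, 1, 16, 16))   # stand-in for a learned attention map
out = masked_edit(feats, edited, attn)            # edit is confined to high-attention regions
```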

Third Time's the Charm? Image and Video Editing with StyleGAN3

1 code implementation • 31 Jan 2022 • Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, Daniel Cohen-Or

In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery.

Disentanglement, Image Generation, +1 more

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

3 code implementations • 2 Aug 2021 • Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or

Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image?

Domain Adaptation, Image Manipulation

StyleFusion: A Generative Model for Disentangling Spatial Segments

1 code implementation • 15 Jul 2021 • Omer Kafri, Or Patashnik, Yuval Alaluf, Daniel Cohen-Or

Inserting the resulting style code into a pre-trained StyleGAN generator results in a single harmonized image in which each semantic region is controlled by one of the input latent codes.

Disentanglement

ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement

2 code implementations • ICCV 2021 • Yuval Alaluf, Or Patashnik, Daniel Cohen-Or

Instead of directly predicting the latent code of a given real image using a single pass, the encoder is tasked with predicting a residual with respect to the current estimate of the inverted latent code in a self-correcting manner.

Image Generation, Real-to-Cartoon translation
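
The excerpt above spells out the mechanism: the encoder repeatedly predicts a residual update to its current latent estimate instead of inverting the image in one shot. Below is a structural sketch of that loop, with dummy stand-ins for the pretrained generator and encoder (assumptions for illustration, not the released code).

```python
# Iterative residual refinement in the ReStyle style: start from an average latent,
# feed the target image together with the current reconstruction to the encoder,
# and add the predicted residual. Encoder and generator are dummy stand-ins.
import torch
import torch.nn as nn

latent_dim, img_dim = 512, 3 * 64 * 64
generator = nn.Linear(latent_dim, img_dim)           # stand-in for a pretrained StyleGAN
encoder = nn.Linear(2 * img_dim, latent_dim)         # stand-in for the residual encoder

def invert(target_img, w_avg, num_steps=5):
    w = w_avg.clone()
    for _ in range(num_steps):
        recon = generator(w)                          # current reconstruction G(w)
        delta = encoder(torch.cat([target_img, recon], dim=-1))
        w = w + delta                                 # self-correcting residual update
    return w

target = torch.randn(1, img_dim)                      # "real image" placeholder
w_avg = torch.zeros(1, latent_dim)                    # average latent as the initial estimate
w_inv = invert(target, w_avg)
```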

StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

5 code implementations • ICCV 2021 • Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski

Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images.

Image Manipulation
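
StyleCLIP pairs a pretrained StyleGAN with CLIP; its simplest variant optimizes a latent code so the generated image matches a text prompt under CLIP while staying close to the starting latent. The sketch below keeps that structure with dummy stand-ins for the generator and the CLIP similarity so it runs as-is; the weights and the omitted identity term are simplifications, not the paper's settings.

```python
# Sketch of CLIP-guided latent optimization in the StyleCLIP spirit: minimize a
# text-image dissimilarity plus a penalty for drifting far from the source latent.
import torch
import torch.nn as nn

latent_dim = 512
generator = nn.Linear(latent_dim, 3 * 32 * 32)        # stand-in for a pretrained StyleGAN
text_feat = torch.randn(3 * 32 * 32)                  # stand-in for a CLIP text embedding

def clip_dissimilarity(image_flat, text_feat):
    # Stand-in for 1 - cosine similarity between CLIP image and text embeddings.
    return 1 - torch.nn.functional.cosine_similarity(image_flat, text_feat, dim=-1).mean()

w_source = torch.randn(1, latent_dim)                 # latent of the image being edited
w = w_source.clone().requires_grad_(True)
opt = torch.optim.Adam([w], lr=0.1)

for step in range(50):
    img = generator(w)
    loss = clip_dissimilarity(img, text_feat) + 0.8 * ((w - w_source) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
# (The full method additionally uses an identity-preservation loss for faces.)
```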

Designing an Encoder for StyleGAN Image Manipulation

7 code implementations • 4 Feb 2021 • Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or

We then suggest two principles for designing encoders in a manner that allows one to control the proximity of the inversions to regions that StyleGAN was originally trained on.

Image Manipulation

Only a Matter of Style: Age Transformation Using a Style-Based Regression Model

2 code implementations • 4 Feb 2021 • Yuval Alaluf, Or Patashnik, Daniel Cohen-Or

In this formulation, our method approaches the continuous aging process as a regression task between the input age and desired target age, providing fine-grained control over the generated image.

Face Age Editing, Image Manipulation, +2 more
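
The excerpt frames aging as a regression from the input age to a desired target age. Purely as an illustration of that framing (module names and the concatenation-based conditioning are assumptions, not the paper's components): the encoder sees the image plus the requested age, and an age predictor on the output supplies a regression loss toward that target.

```python
# Toy illustration of age-as-regression conditioning; all modules are stand-ins
# for pretrained networks, and the conditioning scheme is an assumption.
import torch
import torch.nn as nn

img_dim, latent_dim = 3 * 64 * 64, 512
encoder = nn.Linear(img_dim + 1, latent_dim)       # image + normalized target age
generator = nn.Linear(latent_dim, img_dim)         # stand-in for a pretrained StyleGAN
age_predictor = nn.Linear(img_dim, 1)              # stand-in for a pretrained age estimator

image = torch.randn(1, img_dim)
target_age = torch.tensor([[0.7]])                 # e.g. 70 years, scaled to [0, 1]

w = encoder(torch.cat([image, target_age], dim=-1))
aged = generator(w)
age_loss = (age_predictor(aged) - target_age).pow(2).mean()   # regression toward the target age
```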

BalaGAN: Image Translation Between Imbalanced Domains via Cross-Modal Transfer

1 code implementation • 5 Oct 2020 • Or Patashnik, Dov Danon, Hao Zhang, Daniel Cohen-Or

State-of-the-art image-to-image translation methods tend to struggle in an imbalanced domain setting, where one image domain lacks richness and diversity.

Image-to-Image Translation, Style Transfer, +1 more
