Search Results for author: Yuval Alaluf

Found 17 papers, 12 papers with code

MyVLM: Personalizing VLMs for User-Specific Queries

no code implementations · 21 Mar 2024 · Yuval Alaluf, Elad Richardson, Sergey Tulyakov, Kfir Aberman, Daniel Cohen-Or

To effectively recognize a variety of user-specific concepts, we augment the VLM with external concept heads that function as toggles for the model, enabling the VLM to identify the presence of specific target concepts in a given image.

Image Captioning · Language Modelling +2
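The concept-head mechanism described above can be sketched as a set of lightweight binary probes that gate the query: each head checks whether its user-specific concept appears in the image embedding, and firing heads toggle a concept token into the prompt. The linear-probe form, the threshold, and the `<name>` token format below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def concept_head_fires(image_emb, concept_vec, threshold=0.5):
    """A single concept head: a linear probe over the image embedding
    that signals whether its target concept is present (sketch)."""
    score = 1.0 / (1.0 + np.exp(-np.dot(image_emb, concept_vec)))
    return score > threshold

def augment_query(prompt, image_emb, heads):
    """Toggle each head; for every head that fires, prepend a concept
    identifier token so the VLM can refer to the user-specific concept."""
    active = [name for name, vec in heads.items()
              if concept_head_fires(image_emb, vec)]
    return (" ".join(f"<{n}>" for n in active) + " " + prompt) if active else prompt
```

For example, with `heads = {"my_dog": np.ones(4)}` and an embedding aligned with that vector, the query `"What is shown here?"` becomes `"<my_dog> What is shown here?"`; embeddings that activate no head leave the prompt unchanged.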

Breathing Life Into Sketches Using Text-to-Video Priors

no code implementations · 21 Nov 2023 · Rinon Gal, Yael Vinker, Yuval Alaluf, Amit H. Bermano, Daniel Cohen-Or, Ariel Shamir, Gal Chechik

A sketch is one of the most intuitive and versatile tools humans use to convey their ideas visually.

Cross-Image Attention for Zero-Shot Appearance Transfer

no code implementations · 6 Nov 2023 · Yuval Alaluf, Daniel Garibi, Or Patashnik, Hadar Averbuch-Elor, Daniel Cohen-Or

Recent advancements in text-to-image generative models have demonstrated a remarkable ability to capture a deep semantic understanding of images.

Denoising

A Neural Space-Time Representation for Text-to-Image Personalization

1 code implementation · 24 May 2023 · Yuval Alaluf, Elad Richardson, Gal Metzer, Daniel Cohen-Or

We observe that one can significantly improve the convergence and visual fidelity of the concept by introducing a textual bypass, where our neural mapper additionally outputs a residual that is added to the output of the text encoder.

Denoising
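The textual bypass above amounts to adding a residual from the neural mapper onto the text encoder's output for the concept token. A minimal sketch, in which normalizing the residual and the scale factor are assumptions made so the residual acts as a bounded correction:

```python
import numpy as np

def apply_textual_bypass(encoder_out, residual, alpha=0.2):
    """Add the mapper's residual to the text-encoder output for the
    concept token, rescaled relative to the encoder output's norm so
    the bypass nudges rather than overwrites the embedding (sketch)."""
    unit = residual / np.linalg.norm(residual)
    return encoder_out + alpha * np.linalg.norm(encoder_out) * unit
```

With `encoder_out = [3, 4]` (norm 5) and `residual = [0, 2]`, the bypass adds `0.2 * 5` along the unit residual, giving `[3, 5]`.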

TEXTure: Text-Guided Texturing of 3D Shapes

1 code implementation · 3 Feb 2023 · Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or

In this paper, we present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.

Image Generation · Text-Guided Generation

Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

2 code implementations · 31 Jan 2023 · Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or

Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt.

Generative Semantic Nursing

CLIPascene: Scene Sketching with Different Types and Levels of Abstraction

no code implementations · ICCV 2023 · Yael Vinker, Yuval Alaluf, Daniel Cohen-Or, Ariel Shamir

In this paper, we present a method for converting a given scene image into a sketch using different types and multiple levels of abstraction.

Disentanglement

An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

7 code implementations · 2 Aug 2022 · Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes.

Text-to-Image Generation

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN

no code implementations · 28 Feb 2022 · Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, Daniel Cohen-Or

Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks.

Image Generation

Third Time's the Charm? Image and Video Editing with StyleGAN3

1 code implementation · 31 Jan 2022 · Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, Daniel Cohen-Or

In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery.

Disentanglement · Image Generation +1

StyleFusion: A Generative Model for Disentangling Spatial Segments

1 code implementation · 15 Jul 2021 · Omer Kafri, Or Patashnik, Yuval Alaluf, Daniel Cohen-Or

Inserting the resulting style code into a pre-trained StyleGAN generator results in a single harmonized image in which each semantic region is controlled by one of the input latent codes.

Disentanglement

ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement

2 code implementations · ICCV 2021 · Yuval Alaluf, Or Patashnik, Daniel Cohen-Or

Instead of directly predicting the latent code of a given real image using a single pass, the encoder is tasked with predicting a residual with respect to the current estimate of the inverted latent code in a self-correcting manner.

Image Generation · Real-to-Cartoon Translation
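The self-correcting scheme described above can be sketched with a toy linear "generator": starting from an average latent, the encoder repeatedly predicts a residual with respect to the current inversion rather than the latent itself. The linear generator and half-step encoder below stand in for the real networks and are purely illustrative:

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 3.0]])   # toy linear "generator" weights
generator = lambda w: A @ w
# toy "encoder": predicts a residual from (target image, current recon)
encoder = lambda target, recon: 0.5 * np.linalg.solve(A, target - recon)

def restyle_invert(target, w_avg, n_steps=5):
    """Iterative refinement: w_{k+1} = w_k + E(x, G(w_k)), starting
    from the average latent w_avg (sketch of the ReStyle loop)."""
    w = w_avg.copy()
    for _ in range(n_steps):
        w = w + encoder(target, generator(w))
    return w
```

In this toy setup each step halves the remaining error, so inverting `target = generator([1, 2])` from `w_avg = 0` converges geometrically toward `[1, 2]`, mirroring how the real encoder's residual predictions shrink across refinement passes.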

Designing an Encoder for StyleGAN Image Manipulation

8 code implementations · 4 Feb 2021 · Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or

We then suggest two principles for designing encoders in a manner that allows one to control the proximity of the inversions to regions that StyleGAN was originally trained on.

Image Manipulation

Only a Matter of Style: Age Transformation Using a Style-Based Regression Model

2 code implementations · 4 Feb 2021 · Yuval Alaluf, Or Patashnik, Daniel Cohen-Or

In this formulation, our method approaches the continuous aging process as a regression task between the input age and desired target age, providing fine-grained control over the generated image.

Face Age Editing · Image Manipulation +2
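A minimal sketch of the regression view above: treat the edit as proportional to the gap between input and target age along a latent "age direction". The linear direction and the scale are illustrative assumptions; the paper's encoder maps the image and target age jointly rather than applying a fixed direction:

```python
import numpy as np

def age_edit(w, age_direction, input_age, target_age, scale=0.01):
    """Move the latent along an age direction in proportion to the
    requested age change, giving continuous fine-grained control
    over the generated age (sketch)."""
    return w + scale * (target_age - input_age) * age_direction
```

Because the shift scales continuously with `target_age - input_age`, intermediate target ages yield intermediate edits, and a target equal to the input age leaves the latent untouched.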
