Search Results for author: Yuval Alaluf

Found 22 papers, 12 papers with code

Piece it Together: Part-Based Concepting with IP-Priors

no code implementations 13 Mar 2025 Elad Richardson, Kfir Goldberg, Yuval Alaluf, Daniel Cohen-Or

Advanced generative models excel at synthesizing images but often rely on text-based conditioning.

NeuralSVG: An Implicit Representation for Text-to-Vector Generation

no code implementations 7 Jan 2025 Sagi Polaczek, Yuval Alaluf, Elad Richardson, Yael Vinker, Daniel Cohen-Or

We additionally demonstrate that utilizing a neural representation provides an added benefit of inference-time control, enabling users to dynamically adapt the generated SVG based on user-provided inputs, all with a single learned representation.

Vector Graphics

InstantRestore: Single-Step Personalized Face Restoration with Shared-Image Attention

no code implementations 9 Dec 2024 Howard Zhang, Yuval Alaluf, Sizhuo Ma, Achuta Kadambi, Jian Wang, Kfir Aberman

Face image restoration aims to enhance degraded facial images while addressing challenges such as diverse degradation types, real-time processing demands, and, most crucially, the preservation of identity-specific features.

Image Restoration

ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation

no code implementations 2 Oct 2024 Rinon Gal, Adi Haviv, Yuval Alaluf, Amit H. Bermano, Daniel Cohen-Or, Gal Chechik

Both approaches lead to improved image quality when compared to monolithic models or generic, prompt-independent workflows.

Text-to-Image Generation

pOps: Photo-Inspired Diffusion Operators

no code implementations 3 Jun 2024 Elad Richardson, Yuval Alaluf, Ali Mahdavi-Amiri, Daniel Cohen-Or

To harness this potential, we introduce pOps, a framework that trains specific semantic operators directly on CLIP image embeddings.

Image Generation
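The pOps entry above describes operators trained directly on CLIP image embeddings. A minimal sketch of that idea follows; the residual linear map and all names here are illustrative stand-ins (the actual framework fine-tunes a diffusion-prior model per operator), shown only to make "operating in embedding space" concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semantic operator: a residual linear map acting purely on a
# CLIP image embedding. The real pOps operators are learned networks; this
# random matrix is a toy stand-in.
W = rng.standard_normal((512, 512)) * 0.01

def semantic_operator(clip_embedding):
    # transform the embedding, then re-normalize, since CLIP image
    # embeddings are typically compared on the unit sphere
    out = clip_embedding + clip_embedding @ W
    return out / np.linalg.norm(out)
```

The output is again a CLIP-space vector, so it can be chained through further operators or decoded by an image generator conditioned on CLIP embeddings.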

MyVLM: Personalizing VLMs for User-Specific Queries

no code implementations 21 Mar 2024 Yuval Alaluf, Elad Richardson, Sergey Tulyakov, Kfir Aberman, Daniel Cohen-Or

To effectively recognize a variety of user-specific concepts, we augment the VLM with external concept heads that function as toggles for the model, enabling the VLM to identify the presence of specific target concepts in a given image.

Image Captioning, Language Modelling +2
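The MyVLM excerpt describes external concept heads that act as toggles for the model. A toy sketch of that gating pattern, assuming a linear probe per concept (the class name, shapes, and threshold are all illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical concept head: a linear probe over image features that decides
# whether a learned, user-specific concept embedding is injected into the
# VLM's input sequence.
class ConceptHead:
    def __init__(self, dim=64):
        self.w = rng.standard_normal(dim)                  # probe weights
        self.concept_embedding = rng.standard_normal(dim)  # learned per concept

    def score(self, image_features):
        # sigmoid probability that this concept appears in the image
        return 1.0 / (1.0 + np.exp(-image_features @ self.w))

def augment_vlm_input(image_features, heads, base_tokens, threshold=0.5):
    """Append a concept embedding only when its head fires (the 'toggle')."""
    tokens = list(base_tokens)
    for head in heads:
        if head.score(image_features) > threshold:
            tokens.append(head.concept_embedding)
    return tokens
```

Keeping the heads external means new concepts can be added without retraining the frozen VLM itself.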

Cross-Image Attention for Zero-Shot Appearance Transfer

no code implementations 6 Nov 2023 Yuval Alaluf, Daniel Garibi, Or Patashnik, Hadar Averbuch-Elor, Daniel Cohen-Or

Recent advancements in text-to-image generative models have demonstrated a remarkable ability to capture a deep semantic understanding of images.

Appearance Transfer, Denoising

A Neural Space-Time Representation for Text-to-Image Personalization

1 code implementation 24 May 2023 Yuval Alaluf, Elad Richardson, Gal Metzer, Daniel Cohen-Or

We observe that one can significantly improve the convergence and visual fidelity of the concept by introducing a textual bypass, where our neural mapper additionally outputs a residual that is added to the output of the text encoder.

Denoising
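The excerpt above describes a neural mapper whose extra residual output bypasses the text encoder. A toy sketch of that wiring, with all weights and the mean-pooling "text encoder" as illustrative stand-ins for the frozen CLIP text encoder and the paper's actual mapper:

```python
import numpy as np

rng = np.random.default_rng(2)

W_tok = rng.standard_normal((4, 8)) * 0.5  # mapper branch -> token embedding
W_res = rng.standard_normal((4, 8)) * 0.5  # mapper branch -> bypass residual

def neural_mapper(timestep, unet_layer):
    # the mapper conditions on (denoising timestep, U-Net layer)
    feat = np.array([np.sin(timestep), np.cos(timestep), float(unet_layer), 1.0])
    token = np.tanh(feat @ W_tok)
    residual = 0.1 * np.tanh(feat @ W_res)  # the textual-bypass branch
    return token, residual

def text_encoder(token_embeddings):
    # stand-in for a frozen text encoder: mean-pool the token embeddings
    return np.mean(token_embeddings, axis=0)

def encode_with_bypass(prompt_tokens, timestep, unet_layer):
    token, residual = neural_mapper(timestep, unet_layer)
    conditioning = text_encoder(prompt_tokens + [token])
    return conditioning + residual  # residual skips the text encoder entirely
```

Because the residual is added after encoding, the concept can inject detail that the frozen text encoder would otherwise smooth away, which is the convergence and fidelity benefit the excerpt mentions.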

TEXTure: Text-Guided Texturing of 3D Shapes

2 code implementations 3 Feb 2023 Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or

In this paper, we present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.

Image Generation, Text-Guided Generation

Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

2 code implementations 31 Jan 2023 Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, Daniel Cohen-Or

Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt.

Generative Semantic Nursing

CLIPascene: Scene Sketching with Different Types and Levels of Abstraction

no code implementations ICCV 2023 Yael Vinker, Yuval Alaluf, Daniel Cohen-Or, Ariel Shamir

In this paper, we present a method for converting a given scene image into a sketch using different types and multiple levels of abstraction.

Disentanglement

An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

9 code implementations 2 Aug 2022 Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes.

Personalized Image Generation, Text-to-Image Generation

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN

no code implementations 28 Feb 2022 Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, Daniel Cohen-Or

Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks.

Image Generation

Third Time's the Charm? Image and Video Editing with StyleGAN3

1 code implementation 31 Jan 2022 Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, Daniel Cohen-Or

In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery.

Disentanglement, Image Generation +1

StyleFusion: A Generative Model for Disentangling Spatial Segments

1 code implementation 15 Jul 2021 Omer Kafri, Or Patashnik, Yuval Alaluf, Daniel Cohen-Or

Inserting the resulting style code into a pre-trained StyleGAN generator results in a single harmonized image in which each semantic region is controlled by one of the input latent codes.

Disentanglement
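The StyleFusion excerpt describes a single fused style code in which each semantic region is controlled by one input latent. A crude sketch of that outcome, assuming StyleGAN2's 18x512 W+ space; StyleFusion actually learns fusion networks, so this layer-wise copy is only an illustrative stand-in:

```python
import numpy as np

# Toy fusion: build one W+ style code in which a chosen subset of generator
# layers is taken from one latent code and the remaining layers from another,
# so each input code controls its own part of the image.
def fuse_latents(w_region, w_rest, region_layers, n_layers=18, dim=512):
    assert w_region.shape == w_rest.shape == (n_layers, dim)
    w_fused = w_rest.copy()
    w_fused[region_layers] = w_region[region_layers]  # region from w_region
    return w_fused
```

Feeding `w_fused` to a pre-trained generator then yields the single harmonized image the excerpt describes.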

ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement

2 code implementations ICCV 2021 Yuval Alaluf, Or Patashnik, Daniel Cohen-Or

Instead of directly predicting the latent code of a given real image using a single pass, the encoder is tasked with predicting a residual with respect to the current estimate of the inverted latent code in a self-correcting manner.

Image Generation, Real-to-Cartoon Translation
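The ReStyle excerpt describes predicting a residual with respect to the current latent estimate rather than the latent in one pass. The self-correcting loop can be sketched as follows, with toy linear stand-ins (random matrices, small dimensions) for StyleGAN and the ReStyle encoder:

```python
import numpy as np

rng = np.random.default_rng(3)

W_G = rng.standard_normal((8, 8)) * 0.1   # toy "generator": latent -> image
W_E = rng.standard_normal((16, 8)) * 0.1  # toy "encoder": [image, recon] -> residual

def generator(w):
    return W_G @ w

def encoder(image, reconstruction):
    # the encoder sees both the target image and the current reconstruction
    return np.concatenate([image, reconstruction]) @ W_E

def restyle_invert(image, w_init, n_steps=5):
    """Refine the latent estimate with residual updates instead of one pass."""
    w = w_init
    for _ in range(n_steps):
        reconstruction = generator(w)           # synthesize current estimate
        w = w + encoder(image, reconstruction)  # predict a residual, not w itself
    return w
```

A small number of such steps (the paper uses a handful) lets a fast feed-forward encoder approach the accuracy of slower optimization-based inversion.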

Designing an Encoder for StyleGAN Image Manipulation

8 code implementations 4 Feb 2021 Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, Daniel Cohen-Or

We then suggest two principles for designing encoders in a manner that allows one to control the proximity of the inversions to regions that StyleGAN was originally trained on.

Image Manipulation

Only a Matter of Style: Age Transformation Using a Style-Based Regression Model

2 code implementations 4 Feb 2021 Yuval Alaluf, Or Patashnik, Daniel Cohen-Or

In this formulation, our method approaches the continuous aging process as a regression task between the input age and desired target age, providing fine-grained control over the generated image.

Face Age Editing, Image Manipulation +2
