Search Results for author: Rinon Gal

Found 17 papers, 10 papers with code

An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

7 code implementations • 2 Aug 2022 • Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes.

Text-to-Image Generation

Stitch it in Time: GAN-Based Facial Editing of Real Videos

1 code implementation • 20 Jan 2022 • Rotem Tzaban, Ron Mokady, Rinon Gal, Amit H. Bermano, Daniel Cohen-Or

The ability of Generative Adversarial Networks to encode rich semantics within their latent space has been widely adopted for facial image editing.

Facial Editing

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

3 code implementations • 2 Aug 2021 • Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or

Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image?

Domain Adaptation • Image Manipulation

LARGE: Latent-Based Regression through GAN Semantics

1 code implementation • CVPR 2022 • Yotam Nitzan, Rinon Gal, Ofir Brenner, Daniel Cohen-Or

For modern generative frameworks, this semantic encoding manifests as smooth, linear directions which affect image attributes in a disentangled manner.

Attribute Regression

SWAGAN: A Style-based Wavelet-driven Generative Model

2 code implementations • 11 Feb 2021 • Rinon Gal, Dana Cohen, Amit Bermano, Daniel Cohen-Or

In recent years, considerable progress has been made in the visual quality of Generative Adversarial Networks (GANs).

Image Generation

"This is my unicorn, Fluffy": Personalizing frozen vision-language representations

2 code implementations • 4 Apr 2022 • Niv Cohen, Rinon Gal, Eli A. Meirom, Gal Chechik, Yuval Atzmon

We propose an architecture for solving PerVL that operates by extending the input vocabulary of a pretrained model with new word embeddings for the new personalized concepts.
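The vocabulary-extension idea above can be illustrated with a minimal sketch: a frozen pretrained embedding table gains one new learnable row for the personalized concept token. The vocabulary, the token name "<fluffy>", and all dimensions here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"a": 0, "photo": 1, "of": 2, "dog": 3}
embeddings = rng.normal(size=(len(vocab), 8))  # frozen pretrained embeddings


def add_concept_token(vocab, embeddings, token, init_from):
    """Append an embedding row for `token`, warm-started from a related word."""
    vocab = dict(vocab)
    vocab[token] = len(vocab)
    new_row = embeddings[vocab[init_from]].copy()
    return vocab, np.vstack([embeddings, new_row[None, :]])


vocab2, emb2 = add_concept_token(vocab, embeddings, "<fluffy>", "dog")
# Only the new row emb2[vocab2["<fluffy>"]] would be optimized;
# all other rows (and the rest of the model) stay frozen.
```

Warm-starting the new row from a semantically related word is a common choice in this line of work, since it places the concept in a sensible region of the embedding space before tuning.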

Image Retrieval • Retrieval • +5

Self-Conditioned Generative Adversarial Networks for Image Editing

1 code implementation • 8 Feb 2022 • Yunzhe Liu, Rinon Gal, Amit H. Bermano, Baoquan Chen, Daniel Cohen-Or

We compare our models to a wide range of latent editing methods, and show that by alleviating the bias they achieve finer semantic control and better identity preservation through a wider range of transformations.

Fairness

Training-Free Consistent Text-to-Image Generation

1 code implementation • 5 Feb 2024 • Yoad Tewel, Omri Kaduri, Rinon Gal, Yoni Kasten, Lior Wolf, Gal Chechik, Yuval Atzmon

Text-to-image models offer a new level of creative flexibility by allowing users to guide the image generation process through natural language.

Story Visualization • Text-to-Image Generation

MRGAN: Multi-Rooted 3D Shape Generation with Unsupervised Part Disentanglement

no code implementations • 25 Jul 2020 • Rinon Gal, Amit Bermano, Hao Zhang, Daniel Cohen-Or

Our network encourages disentangled generation of semantic parts via two key ingredients: a root-mixing training strategy which helps decorrelate the different branches to facilitate disentanglement, and a set of loss terms designed with part disentanglement and shape semantics in mind.

3D Shape Generation • Disentanglement

State-of-the-Art in the Architecture, Methods and Applications of StyleGAN

no code implementations • 28 Feb 2022 • Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, Daniel Cohen-Or

Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks.

Image Generation

Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models

no code implementations • 23 Feb 2023 • Rinon Gal, Moab Arar, Yuval Atzmon, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Specifically, we employ two components: First, an encoder that takes as input a single image of a target concept from a given domain, e.g., a specific face, and learns to map it into a word-embedding representing the concept.
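The first component described above can be sketched as a learned projection from image features into the text encoder's word-embedding space. The linear form, names, and dimensions are assumptions for illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
d_image, d_embed = 16, 8
W = rng.normal(scale=0.1, size=(d_embed, d_image))  # learned during tuning


def encode_concept(image_features):
    """Project features of one concept image to a word-embedding vector."""
    return W @ image_features


face_features = rng.normal(size=d_image)  # e.g., features of a specific face
concept_embedding = encode_concept(face_features)
# `concept_embedding` then stands in for a new token in the text prompt,
# so a single image suffices to personalize generation for that concept.
```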

Novel Concepts

Key-Locked Rank One Editing for Text-to-Image Personalization

no code implementations • 2 May 2023 • Yoad Tewel, Rinon Gal, Gal Chechik, Yuval Atzmon

The task of T2I personalization poses multiple hard challenges, such as maintaining high visual fidelity while allowing creative control, combining multiple personalized concepts in a single image, and keeping a small model size.

Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models

no code implementations • 13 Jul 2023 • Moab Arar, Rinon Gal, Yuval Atzmon, Gal Chechik, Daniel Cohen-Or, Ariel Shamir, Amit H. Bermano

Text-to-image (T2I) personalization allows users to guide the creative image generation process by combining their own visual concepts in natural language prompts.

Image Generation

Breathing Life Into Sketches Using Text-to-Video Priors

no code implementations • 21 Nov 2023 • Rinon Gal, Yael Vinker, Yuval Alaluf, Amit H. Bermano, Daniel Cohen-Or, Ariel Shamir, Gal Chechik

A sketch is one of the most intuitive and versatile tools humans use to convey their ideas visually.

Consolidating Attention Features for Multi-view Image Editing

no code implementations • 22 Feb 2024 • Or Patashnik, Rinon Gal, Daniel Cohen-Or, Jun-Yan Zhu, Fernando de la Torre

In this work, we focus on spatial control-based geometric manipulations and introduce a method to consolidate the editing process across various views.
