no code implementations • 4 Apr 2022 • Niv Cohen, Rinon Gal, Eli A. Meirom, Gal Chechik, Yuval Atzmon
We propose an architecture for solving PerVL that operates by extending the input vocabulary of a pretrained model with new word embeddings for the personalized concepts.
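To make the vocabulary-extension idea concrete, here is a minimal PyTorch sketch: a frozen embedding table is wrapped with a few trainable rows for the new concepts. The `PersonalizedVocab` class, the vocabulary size, and the id convention below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PersonalizedVocab(nn.Module):
    """Wraps a frozen embedding table and appends trainable rows
    for new personalized concepts (illustrative sketch)."""
    def __init__(self, frozen_embedding: nn.Embedding, num_new_concepts: int):
        super().__init__()
        self.frozen = frozen_embedding
        for p in self.frozen.parameters():
            p.requires_grad = False            # pretrained vocabulary stays fixed
        self.base_size = frozen_embedding.num_embeddings
        dim = frozen_embedding.embedding_dim
        # Only these new rows are optimized during personalization.
        self.new_rows = nn.Parameter(torch.randn(num_new_concepts, dim) * 0.01)

    def forward(self, token_ids: torch.LongTensor) -> torch.Tensor:
        is_new = token_ids >= self.base_size
        safe_ids = token_ids.clamp(max=self.base_size - 1)
        out = self.frozen(safe_ids)
        if is_new.any():
            out = torch.where(
                is_new.unsqueeze(-1),
                self.new_rows[(token_ids - self.base_size).clamp(min=0)],
                out,
            )
        return out

# Usage: ids at or above the base vocab size refer to personalized concepts.
vocab = PersonalizedVocab(nn.Embedding(49408, 512), num_new_concepts=1)
ids = torch.tensor([[101, 49408, 102]])        # 49408 = first new concept
emb = vocab(ids)                                # shape (1, 3, 512)
```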
no code implementations • 28 Feb 2022 • Amit H. Bermano, Rinon Gal, Yuval Alaluf, Ron Mokady, Yotam Nitzan, Omer Tov, Or Patashnik, Daniel Cohen-Or
Of these, StyleGAN offers a fascinating case study, owing to its remarkable visual quality and an ability to support a large array of downstream tasks.
no code implementations • 8 Feb 2022 • Yunzhe Liu, Rinon Gal, Amit H. Bermano, Baoquan Chen, Daniel Cohen-Or
We compare our models to a wide range of latent editing methods, and show that, by alleviating the bias, they achieve finer semantic control and better identity preservation across a wider range of transformations.
1 code implementation • 20 Jan 2022 • Rotem Tzaban, Ron Mokady, Rinon Gal, Amit H. Bermano, Daniel Cohen-Or
The ability of Generative Adversarial Networks to encode rich semantics within their latent space has been widely adopted for facial image editing.
1 code implementation • 30 Nov 2021 • Yuval Alaluf, Omer Tov, Ron Mokady, Rinon Gal, Amit H. Bermano
In this work, we introduce this approach into the realm of encoder-based inversion.
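As a rough illustration of encoder-based inversion in general (not this paper's specific method): a feed-forward encoder is trained to map real images to latent codes that a frozen generator reconstructs. The tiny `generator`/`encoder` modules and the plain pixel loss below are placeholder assumptions; real systems use StyleGAN and far richer losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for a pretrained GAN generator and an image encoder.
generator = nn.Sequential(nn.Linear(512, 3 * 64 * 64), nn.Tanh())
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))

for p in generator.parameters():
    p.requires_grad = False                    # the generator stays frozen

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
images = torch.rand(8, 3, 64, 64) * 2 - 1      # dummy batch in [-1, 1]

w = encoder(images)                            # image -> latent code
recon = generator(w).view(-1, 3, 64, 64)       # latent -> reconstruction
loss = F.mse_loss(recon, images)               # pixel reconstruction loss
loss.backward()
opt.step()
```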
2 code implementations • 2 Aug 2021 • Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or
Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image?
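One way such text-only, zero-shot generator training can work is with a directional loss in a joint image-text embedding space: shift a trainable copy of a pretrained generator so that its outputs move from a source-domain text toward a target-domain text. The sketch below is a generic illustration under that assumption; the linear `g_frozen`/`g_train` modules and the random "text embeddings" are placeholders for a real generator and a real joint encoder such as CLIP.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholder stand-ins; a real setup would use StyleGAN and CLIP encoders.
img_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))
emb_src = torch.randn(512)                     # stand-in: text embedding of "photo"
emb_tgt = torch.randn(512)                     # stand-in: text embedding of "sketch"

g_frozen = nn.Linear(64, 3 * 64 * 64)          # source-domain generator, kept fixed
g_train = nn.Linear(64, 3 * 64 * 64)           # trainable copy being adapted
g_train.load_state_dict(g_frozen.state_dict())
for p in g_frozen.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(g_train.parameters(), lr=2e-4)
dt = emb_tgt - emb_src                         # text direction between domains

z = torch.randn(8, 64)                         # only latents, no training images
img_src = g_frozen(z).view(-1, 3, 64, 64)
img_tgt = g_train(z).view(-1, 3, 64, 64)
di = img_encoder(img_tgt) - img_encoder(img_src)

# Align each image-space shift with the text direction (directional loss).
loss = (1 - F.cosine_similarity(di, dt.expand_as(di))).mean()
loss.backward()
opt.step()
```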
1 code implementation • 22 Jul 2021 • Yotam Nitzan, Rinon Gal, Ofir Brenner, Daniel Cohen-Or
For modern generative frameworks, this semantic encoding manifests as smooth, linear directions which affect image attributes in a disentangled manner.
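Concretely, a linear latent direction lets one edit a single attribute by adding a scaled vector to a latent code, with the edit strength varying smoothly with the scale. A minimal sketch, where the `generator` and the "smile" `direction` are hypothetical placeholders:

```python
import torch
import torch.nn as nn

generator = nn.Linear(512, 3 * 64 * 64)        # placeholder for a real GAN
direction = torch.randn(512)                   # hypothetical "smile" direction
direction = direction / direction.norm()

w = torch.randn(1, 512)                        # latent code of some image
for alpha in (-3.0, 0.0, 3.0):                 # walk along the direction
    edited = generator(w + alpha * direction)  # attribute varies smoothly
    print(alpha, edited.view(3, 64, 64).shape)
```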
2 code implementations • 11 Feb 2021 • Rinon Gal, Dana Cohen, Amit Bermano, Daniel Cohen-Or
In recent years, considerable progress has been made in the visual quality of Generative Adversarial Networks (GANs).
no code implementations • 25 Jul 2020 • Rinon Gal, Amit Bermano, Hao Zhang, Daniel Cohen-Or
Our network encourages disentangled generation of semantic parts via two key ingredients: a root-mixing training strategy that helps decorrelate the different branches to facilitate disentanglement, and a set of loss terms designed with part disentanglement and shape semantics in mind.
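A speculative sketch of what a root-mixing strategy could look like: per-part branches occasionally receive root codes swapped in from another sample, so no branch can rely on correlations with its siblings. The `branches` modules and the mixing rule here are assumptions for illustration, not the paper's actual network.

```python
import torch
import torch.nn as nn

num_parts, root_dim = 4, 64
# Hypothetical per-part branches, each decoding a root code into a part.
branches = nn.ModuleList(nn.Linear(root_dim, 128) for _ in range(num_parts))

ra = torch.randn(num_parts, root_dim)          # root codes, sample A
rb = torch.randn(num_parts, root_dim)          # root codes, sample B
swap = torch.rand(num_parts) < 0.5             # per-branch mixing mask
mixed = torch.where(swap.unsqueeze(1), rb, ra) # swap some branches' roots

# Each branch may see a root from a different sample, discouraging it
# from relying on correlations across parts.
parts = [branch(mixed[i]) for i, branch in enumerate(branches)]
```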