Search Results for author: Andrey Voynov

Found 15 papers, 8 papers with code

ReNoise: Real Image Inversion Through Iterative Noising

no code implementations 21 Mar 2024 Daniel Garibi, Or Patashnik, Andrey Voynov, Hadar Averbuch-Elor, Daniel Cohen-Or

However, applying these methods to real images necessitates the inversion of the images into the domain of the pretrained diffusion model.

Denoising, Image Manipulation
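
As a rough illustration of what "inversion through iterative noising" can look like, here is a hypothetical Python sketch: a DDIM-style inversion step whose noise estimate is refined by re-evaluating a toy noise predictor at the candidate next point. The model, schedule, and shapes are stand-ins, not the paper's actual implementation.

import torch
import torch.nn as nn

class ToyEps(nn.Module):
    """Toy stand-in for a pretrained diffusion UNet that predicts noise eps(x, t)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))

    def forward(self, x, t):
        t_emb = torch.full((x.shape[0], 1), float(t) / 1000.0)
        return self.net(torch.cat([x, t_emb], dim=-1))

@torch.no_grad()
def inversion_step(model, x_t, abar_t, abar_next, t, n_iters=3):
    """One DDIM-style inversion step; the noise estimate is iteratively refined
    by re-evaluating the predictor at the candidate next point (simplified)."""
    eps = model(x_t, t)
    for _ in range(n_iters):
        x0 = (x_t - (1 - abar_t).sqrt() * eps) / abar_t.sqrt()
        x_next = abar_next.sqrt() * x0 + (1 - abar_next).sqrt() * eps
        eps = model(x_next, t)
    return x_next

model = ToyEps()
abars = torch.linspace(0.99, 0.01, 10)  # toy cumulative-alpha schedule (noise increases)
x = torch.randn(2, 16)                  # toy "latents" standing in for encoded images
for i in range(len(abars) - 1):
    x = inversion_step(model, x, abars[i], abars[i + 1], t=i)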

Style Aligned Image Generation via Shared Attention

1 code implementation 4 Dec 2023 Amir Hertz, Andrey Voynov, Shlomi Fruchter, Daniel Cohen-Or

Large-scale Text-to-Image (T2I) models have rapidly gained prominence across creative fields, generating visually compelling outputs from textual prompts.

Image Generation
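
The shared-attention mechanism named in the title can be sketched in a few lines: every image in a batch attends to its own keys/values concatenated with those of a reference image, so the batch inherits a consistent style. The tensors below are random stand-ins for diffusion UNet attention features; this is an illustrative sketch, not the paper's implementation.

import torch

def shared_attention(q, k, v, k_ref, v_ref):
    """Each batch element attends to its own keys/values concatenated with those
    of a shared reference, so all outputs pick up a consistent style."""
    b = q.shape[0]
    k_all = torch.cat([k, k_ref.expand(b, -1, -1)], dim=1)
    v_all = torch.cat([v, v_ref.expand(b, -1, -1)], dim=1)
    attn = torch.softmax(q @ k_all.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v_all

b, n, d = 4, 8, 32
q, k, v = (torch.randn(b, n, d) for _ in range(3))
k_ref, v_ref = torch.randn(1, n, d), torch.randn(1, n, d)
print(shared_attention(q, k, v, k_ref, v_ref).shape)  # torch.Size([4, 8, 32])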

AnyLens: A Generative Diffusion Model with Any Rendering Lens

no code implementations 29 Nov 2023 Andrey Voynov, Amir Hertz, Moab Arar, Shlomi Fruchter, Daniel Cohen-Or

State-of-the-art diffusion models can generate highly realistic images from various conditioning signals, such as text, segmentation, and depth.

Text Segmentation

Concept Decomposition for Visual Exploration and Inspiration

no code implementations 29 May 2023 Yael Vinker, Andrey Voynov, Daniel Cohen-Or, Ariel Shamir

Each node in the tree represents a sub-concept using a learned vector embedding injected into the latent space of a pretrained text-to-image model.
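
A minimal, hypothetical sketch of the data structure described above: a binary tree whose nodes each own a learned embedding that would act as a pseudo-token in the conditioning space of a pretrained text-to-image model. The names and dimensions are assumptions, and the optimization of the embeddings is omitted.

import torch
import torch.nn as nn

class ConceptNode(nn.Module):
    """One node of a concept-decomposition tree: a learned embedding plus two
    optional sub-concept children."""
    def __init__(self, embed_dim=768, name="<concept>"):
        super().__init__()
        self.name = name
        self.embedding = nn.Parameter(torch.randn(embed_dim) * 0.02)
        self.left, self.right = None, None  # sub-concept children

    def split(self, left_name, right_name):
        dim = self.embedding.shape[0]
        self.left = ConceptNode(dim, left_name)
        self.right = ConceptNode(dim, right_name)
        return self.left, self.right

# Tiny two-level tree; in the paper, child embeddings would be optimized so that
# the parent concept is reconstructed from its sub-concepts.
root = ConceptNode(name="<teapot>")
a, b = root.split("<material>", "<shape>")
print([n.name for n in (root, a, b)], root.embedding.shape)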

P+: Extended Textual Conditioning in Text-to-Image Generation

no code implementations 16 Mar 2023 Andrey Voynov, Qinghao Chu, Daniel Cohen-Or, Kfir Aberman

Furthermore, we utilize the unique properties of this space to achieve previously unattainable results in object-style mixing using text-to-image models.

Denoising, Text-to-Image Generation
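
The "extended textual conditioning" idea can be illustrated with a toy model in which every cross-attention layer receives its own prompt embedding rather than one shared embedding; feeding an object prompt to some layers and a style prompt to others is what the object-style mixing above relies on. The modules below are hypothetical stand-ins, not the paper's code.

import torch
import torch.nn as nn

class ToyCrossAttnLayer(nn.Module):
    """Stand-in for one cross-attention block of a diffusion UNet."""
    def __init__(self, dim=32, txt_dim=64):
        super().__init__()
        self.to_q, self.to_k, self.to_v = nn.Linear(dim, dim), nn.Linear(txt_dim, dim), nn.Linear(txt_dim, dim)

    def forward(self, x, text_emb):
        q, k, v = self.to_q(x), self.to_k(text_emb), self.to_v(text_emb)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return x + attn @ v

class ToyUNetWithPerLayerPrompts(nn.Module):
    """Each layer is conditioned on its own prompt embedding (the extended space)."""
    def __init__(self, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(ToyCrossAttnLayer() for _ in range(n_layers))

    def forward(self, x, per_layer_text):
        assert len(per_layer_text) == len(self.layers)
        for layer, emb in zip(self.layers, per_layer_text):
            x = layer(x, emb)
        return x

model = ToyUNetWithPerLayerPrompts()
x = torch.randn(1, 16, 32)                            # toy latent tokens
prompts = [torch.randn(1, 8, 64) for _ in range(4)]   # one embedding per layer
print(model(x, prompts).shape)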

Sketch-Guided Text-to-Image Diffusion Models

no code implementations 24 Nov 2022 Andrey Voynov, Kfir Aberman, Daniel Cohen-Or

In this work, we introduce a universal approach to guide a pretrained text-to-image diffusion model with a spatial map from another domain (e.g., sketch) during inference time.

Denoising, Sketch-to-Image Translation
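
The inference-time guidance described above can be sketched as a classifier-guidance-style step: nudge the latent along the gradient of a loss that compares a predicted spatial map with the target sketch. The single conv layer below is a hypothetical stand-in for whatever network maps latent features to the spatial domain; it is a sketch of the idea, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: a conv layer predicts a spatial map from the latent,
# and the target sketch is random.
edge_predictor = nn.Conv2d(4, 1, kernel_size=3, padding=1)
target_sketch = (torch.rand(1, 1, 32, 32) > 0.9).float()

def sketch_guidance_step(latent, scale=1.0):
    """Move the latent so its predicted spatial map gets closer to the target sketch."""
    latent = latent.detach().requires_grad_(True)
    loss = F.mse_loss(torch.sigmoid(edge_predictor(latent)), target_sketch)
    grad, = torch.autograd.grad(loss, latent)
    return (latent - scale * grad).detach()

# During sampling, such a step would be interleaved with the usual denoising updates.
latent = torch.randn(1, 4, 32, 32)
latent = sketch_guidance_step(latent)
print(latent.shape)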

When, Why, and Which Pretrained GANs Are Useful?

1 code implementation ICLR 2022 Timofey Grigoryev, Andrey Voynov, Artem Babenko

Several methods have been proposed to finetune pretrained GANs on new datasets; this typically yields higher performance than training from scratch, especially in the limited-data regime.

Label-Efficient Semantic Segmentation with Diffusion Models

1 code implementation ICLR 2022 Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, Artem Babenko

Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance.

Denoising, Segmentation +2

On Self-Supervised Image Representations for GAN Evaluation

no code implementations ICLR 2021 Stanislav Morozov, Andrey Voynov, Artem Babenko

The embeddings from CNNs pretrained on ImageNet classification are the de facto standard image representations for assessing GANs via the FID, Precision, and Recall measures.

Contrastive Learning, General Classification
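
Since the paper studies which embeddings to plug into FID-style metrics, a short sketch of the Fréchet distance with swappable feature sets may help. The random arrays below stand in for features produced by any embedding network (ImageNet-pretrained or self-supervised); this is an illustrative sketch, not the paper's evaluation code.

import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fit to two feature sets (the FID formula).
    The choice of the network that produces feats_a/feats_b is exactly what the paper studies."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from the matrix square root
    return float(((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b - 2 * covmean))

# Toy usage with random "embeddings" standing in for real/generated image features.
rng = np.random.default_rng(0)
real = rng.normal(size=(500, 64))
fake = rng.normal(loc=0.1, size=(500, 64))
print(frechet_distance(real, fake))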

Navigating the GAN Parameter Space for Semantic Image Editing

2 code implementations CVPR 2021 Anton Cherepkov, Andrey Voynov, Artem Babenko

In contrast to existing works, which mostly operate by latent codes, we discover interpretable directions in the space of the generator parameters.

Image Restoration, Image-to-Image Translation +1
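
A hypothetical sketch of the core idea of editing in parameter space: instead of moving the latent code, the generator's weights are shifted along a direction, theta <- theta + alpha * d. The generator and direction below are random toys; in the paper such directions are discovered so that the shifts correspond to interpretable edits.

import torch
import torch.nn as nn

# Toy generator standing in for a pretrained GAN generator (hypothetical).
gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 16))

@torch.no_grad()
def shift_parameters(model, direction, alpha):
    """Apply an edit by moving the model's weights along a direction: theta += alpha * d."""
    for p, d in zip(model.parameters(), direction):
        p.add_(alpha * d)

z = torch.randn(1, 8)
before = gen(z)
direction = [torch.randn_like(p) for p in gen.parameters()]  # random stand-in direction
shift_parameters(gen, direction, alpha=0.05)
after = gen(z)
print((after - before).abs().mean())  # same latent code, different output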

Object Segmentation Without Labels with Large-Scale Generative Models

1 code implementation 8 Jun 2020 Andrey Voynov, Stanislav Morozov, Artem Babenko

The recent rise of unsupervised and self-supervised learning has dramatically reduced the dependency on labeled data, providing effective image representations for transfer to downstream vision tasks.

Image Classification, Object +5

RPGAN: GANs Interpretability via Random Routing

1 code implementation 23 Dec 2019 Andrey Voynov, Artem Babenko

In this paper, we introduce Random Path Generative Adversarial Network (RPGAN), an alternative design of GANs that can serve as a tool for generative model analysis.

Generative Adversarial Network, Image Generation +1

RPGAN: random paths as a latent space for GAN interpretability

1 code implementation 25 Sep 2019 Andrey Voynov, Artem Babenko

In this paper, we introduce Random Path Generative Adversarial Network (RPGAN), an alternative scheme of GANs that can serve as a tool for generative model analysis.

Generative Adversarial Network, Image Generation +1
