Search Results for author: Oren Katzir

Found 7 papers, 1 paper with code

Cross-Domain Cascaded Deep Translation

no code implementations · ECCV 2020 · Oren Katzir, Dani Lischinski, Daniel Cohen-Or

We mitigate this by descending the deep layers of a pre-trained network, where the deep features contain more semantics, and applying the translation between these deep features.

Image-to-Image Translation · Translation

Noise-Free Score Distillation

no code implementations · 26 Oct 2023 · Oren Katzir, Or Patashnik, Daniel Cohen-Or, Dani Lischinski

Score Distillation Sampling (SDS) has emerged as the de facto approach for text-to-content generation in non-image domains.
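
For context on what the paper revisits, below is a minimal sketch of a single standard SDS optimization step; `render`, `eps_pred`, and the weighting choice are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of one Score Distillation Sampling (SDS) step, the baseline
# that Noise-Free Score Distillation builds on. The diffusion denoiser
# `eps_pred` and the differentiable renderer `render` are placeholders.
import torch

def sds_step(params, render, eps_pred, prompt, alphas_cumprod, t):
    x = render(params)                                 # image produced from the optimized parameters
    noise = torch.randn_like(x)                        # eps ~ N(0, I)
    a_t = alphas_cumprod[t]
    x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * noise    # forward-diffuse the rendering
    with torch.no_grad():
        eps_hat = eps_pred(x_t, t, prompt)             # frozen denoiser's noise prediction
    w_t = 1 - a_t                                      # one common weighting choice
    grad = w_t * (eps_hat - noise)                     # SDS gradient w.r.t. the image
    # Backpropagate through the renderer only; the denoiser is treated as fixed.
    x.backward(gradient=grad)
```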

Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons

no code implementations · 3 Apr 2022 · Oren Katzir, Dani Lischinski, Daniel Cohen-Or

We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose.

Disentanglement · Translation
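
As background, here is a minimal sketch of a rotation-equivariant vector-neuron linear layer of the kind the paper builds on (Vector Neurons); the shapes, names, and equivariance check are illustrative, not the authors' implementation.

```python
# Minimal sketch of a vector-neuron linear layer: each feature channel is a
# 3D vector, and mixing happens only across channels, so rotating the input
# rotates the output identically. Shapes and names are illustrative.
import torch
import torch.nn as nn

class VNLinear(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels) * 0.02)

    def forward(self, x):
        # x: (batch, in_channels, 3, num_points) -> (batch, out_channels, 3, num_points)
        return torch.einsum("oc,bcdn->bodn", self.weight, x)

# Equivariance check: rotating the input rotates the output by the same matrix.
layer = VNLinear(8, 16)
x = torch.randn(2, 8, 3, 128)
R = torch.linalg.qr(torch.randn(3, 3)).Q              # a random orthogonal matrix
out_of_rotated = layer(torch.einsum("ij,bcjn->bcin", R, x))
rotated_output = torch.einsum("ij,bcjn->bcin", R, layer(x))
assert torch.allclose(out_of_rotated, rotated_output, atol=1e-5)
```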

Multi-level Latent Space Structuring for Generative Control

no code implementations · 11 Feb 2022 · Oren Katzir, Vicky Perepelook, Dani Lischinski, Daniel Cohen-Or

Truncation is widely used in generative models for improving the quality of the generated samples, at the expense of reducing their diversity.
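
For reference, a minimal sketch of the truncation trick mentioned above, assuming a StyleGAN-style generator with separate `mapping` and `synthesis` networks (placeholder names, not the paper's model):

```python
# Minimal sketch of the truncation trick: latent codes are pulled toward the
# mean latent, trading sample diversity for quality. `generator` and the
# latent dimension are illustrative placeholders.
import torch

def truncated_sample(generator, num_samples, latent_dim=512, psi=0.7, n_mean=10_000):
    with torch.no_grad():
        w_mean = generator.mapping(torch.randn(n_mean, latent_dim)).mean(dim=0)
        w = generator.mapping(torch.randn(num_samples, latent_dim))
        w_trunc = w_mean + psi * (w - w_mean)   # psi < 1 shrinks codes toward the mean
        return generator.synthesis(w_trunc)
```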

Cross-Domain Cascaded Deep Feature Translation

no code implementations · 4 Jun 2019 · Oren Katzir, Dani Lischinski, Daniel Cohen-Or

Our translation is performed in a cascaded, deep-to-shallow fashion along the deep feature hierarchy: we first translate between the deepest layers, which encode the higher-level semantic content of the image, and then proceed to translate the shallower layers, conditioned on the deeper ones.

Image-to-Image Translation · Translation
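
A minimal sketch of the cascaded deep-to-shallow scheme described above; the per-level translator modules and the feature list are placeholder assumptions, not the authors' architecture:

```python
# Minimal sketch of cascaded deep-to-shallow feature translation: the deepest
# features are translated first, and each shallower level is then translated
# conditioned on the already-translated deeper level.

def cascaded_translate(features, translators):
    """features: source-domain feature maps ordered shallow -> deep.
    translators: one module per level, each taking (features, deeper_translated)."""
    translated_deeper = None
    out = [None] * len(features)
    for level in reversed(range(len(features))):        # deepest level first
        out[level] = translators[level](features[level], translated_deeper)
        translated_deeper = out[level]                   # conditions the next (shallower) level
    return out                                           # translated features, shallow -> deep
```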

CompoNet: Learning to Generate the Unseen by Part Synthesis and Composition

1 code implementation · ICCV 2019 · Nadav Schor, Oren Katzir, Hao Zhang, Daniel Cohen-Or

Data-driven generative modeling has made remarkable progress by leveraging the power of deep neural networks.

DiDA: Disentangled Synthesis for Domain Adaptation

no code implementations · 21 May 2018 · Jinming Cao, Oren Katzir, Peng Jiang, Dani Lischinski, Danny Cohen-Or, Changhe Tu, Yangyan Li

The key idea is that by learning to separately extract both the common and the domain-specific features, one can synthesize more target domain data with supervision, thereby boosting the domain adaptation performance.

Disentanglement · Unsupervised Domain Adaptation
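
A minimal sketch of the recombination idea described above; the encoder and decoder names are illustrative placeholders, not the DiDA architecture:

```python
# Minimal sketch of synthesizing labeled target-like data: pair a labeled
# source image's common (task-relevant) code with a target image's
# domain-specific code, and decode a target-looking image that keeps the
# source label.

def synthesize_labeled_target(src_img, src_label, tgt_img, enc_common, enc_domain, decoder):
    c_src = enc_common(src_img)      # label-relevant content from the labeled source image
    d_tgt = enc_domain(tgt_img)      # appearance/style of the unlabeled target domain
    synth = decoder(c_src, d_tgt)    # target-looking image carrying the source label
    return synth, src_label          # extra supervised data for the target domain
```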
