no code implementations • ECCV 2020 • Oren Katzir, Dani Lischinski, Daniel Cohen-Or
We mitigate this by descending the deep layers of a pre-trained network, where the deep features contain more semantics, and applying the translation between these deep features.
no code implementations • 3 Apr 2022 • Oren Katzir, Dani Lischinski, Daniel Cohen-Or
We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose.
no code implementations • 11 Feb 2022 • Oren Katzir, Vicky Perepelook, Dani Lischinski, Daniel Cohen-Or
Truncation is widely used in generative models for improving the quality of the generated samples, at the expense of reducing their diversity.
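The truncation this entry refers to is typically implemented by pulling sampled latent codes toward their mean, trading diversity for sample quality. Below is a minimal sketch of that standard trick (not necessarily the exact procedure studied in the paper); the 512-dimensional latent space and the psi value are illustrative assumptions.

```python
import torch

def truncate_latents(w: torch.Tensor, w_mean: torch.Tensor, psi: float = 0.7) -> torch.Tensor:
    """Standard truncation trick: pull latent codes toward their mean.

    psi = 1.0 keeps the original codes (full diversity);
    psi = 0.0 collapses every code onto the mean (higher fidelity, no diversity).
    """
    return w_mean + psi * (w - w_mean)

# Illustrative usage with random stand-ins for a mapping network's output.
w = torch.randn(8, 512)                   # a batch of latent codes (assumed 512-dim)
w_mean = w.mean(dim=0, keepdim=True)      # in practice, a running mean over many samples
w_trunc = truncate_latents(w, w_mean, psi=0.7)
```

The paper cited above studies precisely this quality-diversity trade-off; the sketch only illustrates the mechanism being traded off.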
1 code implementation • 31 Jan 2022 • Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, Daniel Cohen-Or
In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery.
no code implementations • 25 Jan 2022 • Xingguang Yan, Liqiang Lin, Niloy J. Mitra, Dani Lischinski, Daniel Cohen-Or, Hui Huang
We present ShapeFormer, a transformer-based network that produces a distribution of object completions, conditioned on incomplete, and possibly noisy, point clouds.
1 code implementation • 29 Nov 2021 • Omri Avrahami, Dani Lischinski, Ohad Fried
Natural language offers a highly intuitive interface for image editing.
Text-Guided Image Editing
Zero-Shot Text-to-Image Generation
no code implementations • 29 Nov 2021 • Matan Levy, Rami Ben-Ari, Dani Lischinski
Chart question answering (CQA) is a task used for assessing chart comprehension, which is fundamentally different from understanding natural images.
Ranked #1 on Visual Question Answering on PlotQA-D1
1 code implementation • ICLR 2022 • Zongze Wu, Yotam Nitzan, Eli Shechtman, Dani Lischinski
Several works already utilize some basic properties of aligned StyleGAN models to perform image-to-image translation.
1 code implementation • ICCV 2021 • Jinming Cao, Hanchao Leng, Dani Lischinski, Danny Cohen-Or, Changhe Tu, Yangyan Li
This is because the learned weights that balance the shape and base components in ShapeConv become constants in the inference phase; they can therefore be fused into the following convolution, resulting in a network identical to one with vanilla convolutional layers (see the sketch below).
Ranked #3 on Semantic Segmentation on Stanford2D3D
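The fusion argued for in this entry is an instance of a general re-parameterization pattern: a multiplication by constants applied to a convolution's input can be folded into the convolution's weights, so the extra operation costs nothing at inference. The sketch below illustrates that generic pattern in PyTorch; the per-channel scale tensor merely stands in for ShapeConv's learned balancing weights and is an assumption for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_input_scale_into_conv(conv: nn.Conv2d, scale: torch.Tensor) -> nn.Conv2d:
    """Fold a per-channel constant scaling of the conv's *input* into its weights.

    Because conv(x * s) == conv_with_scaled_weights(x) when s is constant,
    the extra multiplication disappears at inference time.
    """
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=conv.bias is not None)
    fused.weight.copy_(conv.weight * scale.view(1, -1, 1, 1))
    if conv.bias is not None:
        fused.bias.copy_(conv.bias)
    return fused

# Quick numerical check of the equivalence.
conv = nn.Conv2d(4, 8, 3, padding=1)
scale = torch.rand(4)                     # stands in for the learned balancing weights
x = torch.randn(1, 4, 16, 16)
fused = fuse_input_scale_into_conv(conv, scale)
assert torch.allclose(conv(x * scale.view(1, -1, 1, 1)), fused(x), atol=1e-5)
```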
no code implementations • 7 Jun 2021 • Omri Avrahami, Dani Lischinski, Ohad Fried
In the second stage, we merge the rooted models by averaging their weights and fine-tuning them for each specific domain, using only data generated by the original trained models.
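Averaging the weights of models that share an architecture (as the rooted models described here do) is straightforward when their parameters are aligned key-by-key. Below is a minimal sketch in PyTorch, assuming two state dicts with identical keys and shapes; the per-domain fine-tuning step described in the entry is omitted.

```python
import torch

def average_state_dicts(sd_a: dict, sd_b: dict) -> dict:
    """Uniformly average two aligned state dicts (same keys, same tensor shapes)."""
    assert sd_a.keys() == sd_b.keys()
    return {k: (sd_a[k] + sd_b[k]) / 2 for k in sd_a}

# Illustrative usage with two small, identically shaped models.
model_a = torch.nn.Linear(16, 4)
model_b = torch.nn.Linear(16, 4)
merged = torch.nn.Linear(16, 4)
merged.load_state_dict(average_state_dicts(model_a.state_dict(), model_b.state_dict()))
```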
4 code implementations • ICCV 2021 • Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski
Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images (a minimal sketch of one such text-guided manipulation follows this entry).
Ranked #1 on Image Manipulation on 10-Monty-Hall (using extra training data)
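One widely used recipe in this line of work is to optimize a latent code so that the generated image matches a text prompt under CLIP. The sketch below shows that general recipe; the generator callable, the latent shape, and the simplified CLIP preprocessing are assumptions for illustration, and the identity-preservation and regularization terms typically used in practice are omitted.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

def text_guided_edit(generator, w_init, prompt, steps=200, lr=0.1, device="cuda"):
    """Optimize a latent code so the generated image agrees with a text prompt under CLIP.

    `generator` is an assumed callable mapping a latent code to an image in [-1, 1];
    the resizing below is a crude stand-in for CLIP's full preprocessing.
    """
    clip_model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        text_feat = F.normalize(
            clip_model.encode_text(clip.tokenize([prompt]).to(device)).float(), dim=-1)

    w = w_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = generator(w)                                  # (1, 3, H, W) in [-1, 1] (assumed)
        img = F.interpolate((img + 1) / 2, size=(224, 224),
                            mode="bilinear", align_corners=False)
        img_feat = F.normalize(clip_model.encode_image(img).float(), dim=-1)
        loss = 1 - (img_feat * text_feat).sum()             # cosine distance to the prompt
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```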
no code implementations • 26 Dec 2020 • Sarah Gingichashvili, Dani Lischinski
Edge-preserving filters play an essential role in some of the most basic tasks of computational photography, such as abstraction, tone mapping, detail enhancement, and texture removal.
5 code implementations • CVPR 2021 • Zongze Wu, Dani Lischinski, Eli Shechtman
Manipulation of visual attributes via these StyleSpace controls is shown to be more disentangled than manipulation via the controls proposed in previous works.
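The StyleSpace controls referred to here are individual channels of the per-layer style vectors (the outputs of the generator's learned affine transformations). Below is a minimal sketch of shifting one such channel, assuming the style vectors are available as a list of tensors; the layer and channel indices are placeholders, not the specific controls identified in the paper.

```python
import torch

def shift_style_channel(styles, layer_idx, channel_idx, alpha):
    """Shift a single StyleSpace channel by alpha.

    `styles` is an assumed list of per-layer style tensors of shape (1, channels),
    i.e. the outputs of the generator's learned affine transformations.
    """
    edited = [s.clone() for s in styles]
    edited[layer_idx][0, channel_idx] += alpha
    return edited

# Illustrative usage with random stand-ins for a generator's style vectors.
styles = [torch.randn(1, c) for c in (512, 512, 256, 128)]
edited = shift_style_channel(styles, layer_idx=2, channel_idx=45, alpha=5.0)
```

Because individual channels tend to control localized attributes, such single-channel shifts are what makes the resulting edits disentangled.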
no code implementations • 22 Jun 2020 • Mingyi Shi, Kfir Aberman, Andreas Aristidou, Taku Komura, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen
We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used, motion representation.
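A kinematic skeleton of the kind MotioNet outputs pairs fixed bone lengths with per-joint rotations; joint positions are then recovered by forward kinematics along the skeletal hierarchy. The sketch below shows that standard FK step on a toy chain; the joint parenting and offsets are illustrative assumptions, not MotioNet's actual skeleton.

```python
import numpy as np

def forward_kinematics(parents, offsets, rotations):
    """Recover global joint positions from per-joint local rotations and bone offsets.

    parents[i]   : index of joint i's parent (-1 for the root; parents precede children)
    offsets[i]   : joint i's offset from its parent in the rest pose, shape (3,)
    rotations[i] : joint i's local rotation as a 3x3 matrix
    """
    n = len(parents)
    global_rot = [None] * n
    positions = np.zeros((n, 3))
    for i in range(n):
        if parents[i] == -1:
            global_rot[i] = rotations[i]
            positions[i] = offsets[i]
        else:
            p = parents[i]
            global_rot[i] = global_rot[p] @ rotations[i]
            positions[i] = positions[p] + global_rot[p] @ offsets[i]
    return positions

# Toy 3-joint chain (root -> knee -> foot) with identity rotations.
parents = [-1, 0, 1]
offsets = np.array([[0, 0, 0], [0, -0.5, 0], [0, -0.5, 0]], dtype=float)
rotations = [np.eye(3)] * 3
print(forward_kinematics(parents, offsets, rotations))
```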
1 code implementation • 22 Jun 2020 • Jinming Cao, Yangyan Li, Mingchao Sun, Ying Chen, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen, Changhe Tu
Moreover, in the inference phase, the depthwise convolution is folded into the conventional convolution, making the computation exactly equivalent to that of a convolutional layer without over-parameterization.
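The folding described here follows a general pattern: two kernels are kept separate during training (the over-parameterization) and collapsed into a single kernel at inference. Below is a minimal sketch of that collapse with assumed tensor shapes and randomly initialized kernels standing in for trained ones; the exact composition used in the paper may differ in its details.

```python
import torch
import torch.nn.functional as F

# Assumed shapes for illustration only; d_mul >= k*k over-parameterizes the kernel.
c_out, c_in, k, d_mul = 8, 4, 3, 9
W = torch.randn(c_out, c_in, d_mul)           # stands in for the trained conventional kernel
D = torch.randn(c_in, k * k, d_mul)           # stands in for the trained depthwise kernel

# Kernel composition: W_folded[o, c, s] = sum_d W[o, c, d] * D[c, s, d]
W_folded = torch.einsum("ocd,csd->ocs", W, D).reshape(c_out, c_in, k, k)

# At inference, a single ordinary convolution with the folded kernel is applied,
# so the cost matches a plain convolutional layer.
x = torch.randn(1, c_in, 16, 16)
y = F.conv2d(x, W_folded, padding=k // 2)
```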
1 code implementation • 12 May 2020 • Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen
In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training.
1 code implementation • 12 May 2020 • Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, Baoquan Chen
In other words, our operators form the building blocks of a new deep motion processing framework that embeds the motion into a common latent space, shared by a collection of homeomorphic skeletons.
no code implementations • 10 Jan 2020 • Omry Sendik, Dani Lischinski, Daniel Cohen-Or
The emergence of deep generative models has recently enabled the automatic generation of massive amounts of graphical content, both in 2D and in 3D.
no code implementations • 8 Dec 2019 • Yingda Yin, Qingnan Fan, Dong-Dong Chen, Yujie Wang, Angelica Aviles-Rivero, Ruoteng Li, Carola-Bibiane Schönlieb, Dani Lischinski, Baoquan Chen
Reflections are a common phenomenon in everyday photography, distracting attention from the scene behind the glass.
no code implementations • 13 Jun 2019 • Eytan Lifshitz, Dani Lischinski
In this paper, we present a new, physically-based, approach for estimating illuminant chromaticity from interreflections of light between diffuse surfaces.
no code implementations • 4 Jun 2019 • Oren Katzir, Dani Lischinski, Daniel Cohen-Or
Our translation is performed in a cascaded, deep-to-shallow, fashion, along the deep feature hierarchy: we first translate between the deepest layers that encode the higher-level semantic content of the image, proceeding to translate the shallower layers, conditioned on the deeper ones.
2 code implementations • 5 May 2019 • Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or
In order to achieve our goal, we learn to extract, directly from a video, a high-level latent motion representation, which is invariant to the skeleton geometry and the camera view.
no code implementations • 14 Jan 2019 • Omry Sendik, Dani Lischinski, Daniel Cohen-Or
Recent GAN-based architectures have been able to deliver impressive performance on the general task of image-to-image translation.
no code implementations • ECCV 2018 • Di Lin, Yuanfeng Ji, Dani Lischinski, Daniel Cohen-Or, Hui Huang
Accurate semantic image segmentation requires the joint consideration of local appearance, semantic information, and global scene context.
no code implementations • 21 Aug 2018 • Kfir Aberman, Mingyi Shi, Jing Liao, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or
After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances.
1 code implementation • 12 Aug 2018 • Zhijie Wu, Xiang Wang, Di Lin, Dani Lischinski, Daniel Cohen-Or, Hui Huang
The key idea is that during the analysis, the two branches exchange information between them, thereby learning the dependencies between structure and geometry and encoding two augmented features, which are then fused into a single latent code.
Graphics
no code implementations • 21 May 2018 • Jinming Cao, Oren Katzir, Peng Jiang, Dani Lischinski, Danny Cohen-Or, Changhe Tu, Yangyan Li
The key idea is that by learning to separately extract both the common and the domain-specific features, one can synthesize more target domain data with supervision, thereby boosting the domain adaptation performance.
1 code implementation • 11 May 2018 • Yang Zhou, Zhen Zhu, Xiang Bai, Dani Lischinski, Daniel Cohen-Or, Hui Huang
We demonstrate that this conceptually simple approach is highly effective for capturing large-scale structures, as well as other non-stationary attributes of the input exemplar.
2 code implementations • 10 May 2018 • Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or
Correspondence between images is a fundamental problem in computer vision, with a variety of graphics applications.
no code implementations • 22 Nov 2017 • Zhenhua Wang, Fanglin Gu, Dani Lischinski, Daniel Cohen-Or, Changhe Tu, Baoquan Chen
Contextual information provides important cues for disambiguating visually similar pixels in scene segmentation.
no code implementations • ICCV 2017 • Lei Zhu, Chi-Wing Fu, Dani Lischinski, Pheng-Ann Heng
A third prior is defined on the rain-streak layer R, based on similarity of patches to the extracted rain patches.
no code implementations • 18 Aug 2016 • Huayong Xu, Yangyan Li, Wenzheng Chen, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen
We show that the resulting P-maps may be used to evaluate how likely a rectangle proposal is to contain an instance of the class, and further process good proposals to produce an accurate object cutout mask.
no code implementations • 10 Apr 2016 • Wenzheng Chen, Huan Wang, Yangyan Li, Hao Su, Zhenhua Wang, Changhe Tu, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen
Human 3D pose estimation from a single image is a challenging task with numerous applications.