Search Results for author: Dani Lischinski

Found 46 papers, 24 papers with code

A Holistic Approach for Data-Driven Object Cutout

no code implementations 18 Aug 2016 Huayong Xu, Yangyan Li, Wenzheng Chen, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

We show that the resulting P-maps may be used to evaluate how likely a rectangle proposal is to contain an instance of the class, and further process good proposals to produce an accurate object cutout mask.

Object

Joint Bi-Layer Optimization for Single-Image Rain Streak Removal

no code implementations ICCV 2017 Lei Zhu, Chi-Wing Fu, Dani Lischinski, Pheng-Ann Heng

A third prior is defined on the rain-streak layer R, based on similarity of patches to the extracted rain patches.

Rain Removal

Neuron-level Selective Context Aggregation for Scene Segmentation

no code implementations 22 Nov 2017 Zhenhua Wang, Fanglin Gu, Dani Lischinski, Daniel Cohen-Or, Changhe Tu, Baoquan Chen

Contextual information provides important cues for disambiguating visually similar pixels in scene segmentation.

Scene Segmentation, Segmentation

Neural Best-Buddies: Sparse Cross-Domain Correspondence

2 code implementations 10 May 2018 Kfir Aberman, Jing Liao, Mingyi Shi, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

Correspondence between images is a fundamental problem in computer vision, with a variety of graphics applications.

Image Morphing

Non-Stationary Texture Synthesis by Adversarial Expansion

1 code implementation 11 May 2018 Yang Zhou, Zhen Zhu, Xiang Bai, Dani Lischinski, Daniel Cohen-Or, Hui Huang

We demonstrate that this conceptually simple approach is highly effective for capturing large-scale structures, as well as other non-stationary attributes of the input exemplar.

Generative Adversarial Network, Texture Synthesis

DiDA: Disentangled Synthesis for Domain Adaptation

no code implementations 21 May 2018 Jinming Cao, Oren Katzir, Peng Jiang, Dani Lischinski, Danny Cohen-Or, Changhe Tu, Yangyan Li

The key idea is that by learning to separately extract both the common and the domain-specific features, one can synthesize more target domain data with supervision, thereby boosting the domain adaptation performance.

Disentanglement, Unsupervised Domain Adaptation

Structure-aware Generative Network for 3D-Shape Modeling

1 code implementation 12 Aug 2018 Zhijie Wu, Xiang Wang, Di Lin, Dani Lischinski, Daniel Cohen-Or, Hui Huang

The key idea is that during the analysis, the two branches exchange information between them, thereby learning the dependencies between structure and geometry and encoding two augmented features, which are then fused into a single latent code.

Graphics

Deep Video-Based Performance Cloning

no code implementations 21 Aug 2018 Kfir Aberman, Mingyi Shi, Jing Liao, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances.

Multi-Scale Context Intertwining for Semantic Segmentation

no code implementations ECCV 2018 Di Lin, Yuanfeng Ji, Dani Lischinski, Daniel Cohen-Or, Hui Huang

Accurate semantic image segmentation requires the joint consideration of local appearance, semantic information, and global scene context.

Image Segmentation, Segmentation +1

CrossNet: Latent Cross-Consistency for Unpaired Image Translation

no code implementations 14 Jan 2019 Omry Sendik, Dani Lischinski, Daniel Cohen-Or

Recent GAN-based architectures have been able to deliver impressive performance on the general task of image-to-image translation.

Image-to-Image Translation, Translation

Learning Character-Agnostic Motion for Motion Retargeting in 2D

2 code implementations 5 May 2019 Kfir Aberman, Rundi Wu, Dani Lischinski, Baoquan Chen, Daniel Cohen-Or

In order to achieve our goal, we learn to extract, directly from a video, a high-level latent motion representation, which is invariant to the skeleton geometry and the camera view.

3D Reconstruction, motion retargeting +2

Cross-Domain Cascaded Deep Feature Translation

no code implementations 4 Jun 2019 Oren Katzir, Dani Lischinski, Daniel Cohen-Or

Our translation is performed in a cascaded, deep-to-shallow fashion along the deep feature hierarchy: we first translate between the deepest layers that encode the higher-level semantic content of the image, proceeding to translate the shallower layers, conditioned on the deeper ones.

Image-to-Image Translation, Translation

Illuminant Chromaticity Estimation from Interreflections

no code implementations 13 Jun 2019 Eytan Lifshitz, Dani Lischinski

In this paper, we present a new, physically based approach for estimating illuminant chromaticity from interreflections of light between diffuse surfaces.

Color Constancy

Unsupervised multi-modal Styled Content Generation

no code implementations 10 Jan 2020 Omry Sendik, Dani Lischinski, Daniel Cohen-Or

The emergence of deep generative models has recently enabled the automatic generation of massive amounts of graphical content, both in 2D and in 3D.

Unpaired Motion Style Transfer from Video to Animation

1 code implementation 12 May 2020 Kfir Aberman, Yijia Weng, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

In this paper, we present a novel data-driven framework for motion style transfer, which learns from an unpaired collection of motions with style labels, and enables transferring motion styles not observed during training.

3D Reconstruction, Motion Style Transfer +1

Skeleton-Aware Networks for Deep Motion Retargeting

1 code implementation 12 May 2020 Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung, Daniel Cohen-Or, Baoquan Chen

In other words, our operators form the building blocks of a new deep motion processing framework that embeds the motion into a common latent space, shared by a collection of homeomorphic skeletons.

motion retargeting, Motion Synthesis

DO-Conv: Depthwise Over-parameterized Convolutional Layer

1 code implementation 22 Jun 2020 Jinming Cao, Yangyan Li, Mingchao Sun, Ying Chen, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen, Changhe Tu

Moreover, in the inference phase, the depthwise convolution is folded into the conventional convolution, reducing the computation to be exactly equivalent to that of a convolutional layer without over-parameterization.

Image Classification
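The DO-Conv snippet above mentions folding the depthwise convolution into the conventional convolution at inference time. As a hedged illustration of that general algebra (not the paper's actual implementation), the sketch below uses 1x1 (pointwise) convolutions, where convolution reduces to a matrix product over channels, so the depthwise kernel becomes a per-channel scale that can be pre-multiplied into the weights:

```python
import numpy as np

# Illustrative sketch only: folding a depthwise 1x1 kernel into a
# conventional 1x1 kernel. With 1x1 kernels, convolution over channels
# is a matrix product; the same algebra extends to k x k kernels.
rng = np.random.default_rng(0)
C_in, C_out, HW = 4, 8, 10
x = rng.standard_normal((C_in, HW))      # feature map, flattened spatially

d = rng.standard_normal(C_in)            # depthwise kernel: per-channel scale
W = rng.standard_normal((C_out, C_in))   # conventional kernel

# Training-time view: depthwise operation first, then conventional conv.
y_two_step = W @ (d[:, None] * x)

# Inference-time view: fold the depthwise kernel into the weights once,
# so the computation is exactly that of a single convolution.
W_folded = W * d[None, :]
y_folded = W_folded @ x

print(np.allclose(y_two_step, y_folded))  # True: identical outputs
```

The fold is exact because the two linear maps compose into one, which is why the over-parameterization adds no inference cost.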

MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency

no code implementations 22 Jun 2020 Mingyi Shi, Kfir Aberman, Andreas Aristidou, Taku Komura, Dani Lischinski, Daniel Cohen-Or, Baoquan Chen

We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used, motion representation.

StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation

6 code implementations CVPR 2021 Zongze Wu, Dani Lischinski, Eli Shechtman

Manipulation of visual attributes via these StyleSpace controls is shown to be better disentangled than via those proposed in previous works.

Attribute, Image Generation
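The StyleSpace snippet above claims that attribute manipulation via these controls is well disentangled. A minimal, hypothetical numpy sketch of the underlying idea (channel names and dimensions are illustrative, not the paper's API): editing a single style channel leaves every other channel untouched.

```python
import numpy as np

# Hypothetical sketch: a StyleSpace-style edit shifts one style channel,
# leaving all others unchanged, which is what localizes the attribute edit.
rng = np.random.default_rng(0)
s = rng.standard_normal(512)          # a style code with 512 channels

def edit_channel(s, channel, alpha):
    """Shift one style channel by alpha; all other channels unchanged."""
    s_edit = s.copy()
    s_edit[channel] += alpha
    return s_edit

s_new = edit_channel(s, channel=45, alpha=3.0)
print(np.nonzero(s_new != s)[0])      # only channel 45 differs
```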

Evaluation and Comparison of Edge-Preserving Filters

no code implementations 26 Dec 2020 Sarah Gingichashvili, Dani Lischinski

Edge-preserving filters play an essential role in some of the most basic tasks of computational photography, such as abstraction, tonemapping, detail enhancement and texture removal, to name a few.

StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

5 code implementations ICCV 2021 Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski

Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images.

Image Manipulation

GAN Cocktail: mixing GANs without dataset access

1 code implementation 7 Jun 2021 Omri Avrahami, Dani Lischinski, Ohad Fried

In the second stage, we merge the rooted models by averaging their weights and fine-tuning them for each specific domain, using only data generated by the original trained models.

Transfer Learning
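The GAN Cocktail snippet above describes merging "rooted" models by averaging their weights. A hedged sketch of that step, using plain dicts of numpy arrays as stand-in parameter sets (names and shapes are illustrative):

```python
import numpy as np

# Illustrative sketch of the merging step: once two generators share a
# common ancestry, their aligned parameters can be averaged tensor by
# tensor before per-domain fine-tuning.
rng = np.random.default_rng(0)
model_a = {"w1": rng.standard_normal((4, 4)), "b1": rng.standard_normal(4)}
model_b = {"w1": rng.standard_normal((4, 4)), "b1": rng.standard_normal(4)}

def merge_models(a, b):
    """Average two aligned parameter dicts key by key."""
    assert a.keys() == b.keys()
    return {k: 0.5 * (a[k] + b[k]) for k in a}

merged = merge_models(model_a, model_b)
print(merged["b1"].shape)  # (4,)
```

Averaging only makes sense because the rooting stage first puts both models in a shared, aligned parameter space.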

ShapeConv: Shape-aware Convolutional Layer for Indoor RGB-D Semantic Segmentation

1 code implementation ICCV 2021 Jinming Cao, Hanchao Leng, Dani Lischinski, Danny Cohen-Or, Changhe Tu, Yangyan Li

The reason is that the learnt weights for balancing the importance between the shape and base components in ShapeConv become constants in the inference phase, and thus can be fused into the following convolution, resulting in a network that is identical to one with vanilla convolutional layers.

Segmentation, Semantic Segmentation +1

Classification-Regression for Chart Comprehension

1 code implementation 29 Nov 2021 Matan Levy, Rami Ben-Ari, Dani Lischinski

Our model is particularly well suited for realistic questions with out-of-vocabulary answers that require regression.

Chart Question Answering, Classification +3

ShapeFormer: Transformer-based Shape Completion via Sparse Representation

1 code implementation CVPR 2022 Xingguang Yan, Liqiang Lin, Niloy J. Mitra, Dani Lischinski, Daniel Cohen-Or, Hui Huang

We present ShapeFormer, a transformer-based network that produces a distribution of object completions, conditioned on incomplete, and possibly noisy, point clouds.

Third Time's the Charm? Image and Video Editing with StyleGAN3

1 code implementation 31 Jan 2022 Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, Daniel Cohen-Or

In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery.

Disentanglement, Image Generation +1

Multi-level Latent Space Structuring for Generative Control

no code implementations 11 Feb 2022 Oren Katzir, Vicky Perepelook, Dani Lischinski, Daniel Cohen-Or

Truncation is widely used in generative models for improving the quality of the generated samples, at the expense of reducing their diversity.
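The snippet above refers to truncation, which trades diversity for sample quality. A minimal numpy sketch of the standard truncation trick (the framing here is generic, not this paper's specific latent structuring): latent codes are pulled toward the mean latent by a factor psi, so their spread shrinks.

```python
import numpy as np

# Generic truncation sketch: psi = 1 keeps the original distribution,
# psi = 0 collapses every sample onto the mean latent.
rng = np.random.default_rng(0)
w = rng.standard_normal((1000, 16))   # sampled latent codes
w_mean = w.mean(axis=0)

def truncate(w, w_mean, psi):
    """Pull latent codes toward the mean latent by factor psi."""
    return w_mean + psi * (w - w_mean)

w_trunc = truncate(w, w_mean, psi=0.5)
print(w_trunc.std() < w.std())        # True: diversity is reduced
```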

Shape-Pose Disentanglement using SE(3)-equivariant Vector Neurons

no code implementations 3 Apr 2022 Oren Katzir, Dani Lischinski, Daniel Cohen-Or

We introduce an unsupervised technique for encoding point clouds into a canonical shape representation, by disentangling shape and pose.

Disentanglement, Translation

Blended Latent Diffusion

1 code implementation 6 Jun 2022 Omri Avrahami, Ohad Fried, Dani Lischinski

Our solution leverages a recent text-to-image Latent Diffusion Model (LDM), which speeds up diffusion by operating in a lower-dimensional latent space.

Image Inpainting, text-guided-image-editing +1
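The Blended Latent Diffusion snippet above notes that diffusion runs in a lower-dimensional latent space. A hedged sketch of the blending idea that the title refers to (array names and the single-step framing are illustrative, not the paper's API): the edited latent is composited with the original latent using a downsampled mask, so only the masked region changes.

```python
import numpy as np

# Illustrative sketch: mask-based compositing of two latents, so only
# the masked region of the original latent is replaced by the edit.
rng = np.random.default_rng(0)
z_orig = rng.standard_normal((4, 8, 8))   # latent of the source image
z_edit = rng.standard_normal((4, 8, 8))   # latent proposed by a diffusion step
mask = np.zeros((1, 8, 8))
mask[:, 2:6, 2:6] = 1.0                   # edit only this latent region

z_blended = mask * z_edit + (1.0 - mask) * z_orig
print(np.array_equal(z_blended[:, 0, 0], z_orig[:, 0, 0]))  # True: outside mask
```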

SpaText: Spatio-Textual Representation for Controllable Image Generation

no code implementations CVPR 2023 Omri Avrahami, Thomas Hayes, Oran Gafni, Sonal Gupta, Yaniv Taigman, Devi Parikh, Dani Lischinski, Ohad Fried, Xi Yin

Due to the lack of large-scale datasets that have a detailed textual description for each region in the image, we choose to leverage the current large-scale text-to-image datasets and base our approach on a novel CLIP-based spatio-textual representation, and show its effectiveness on two state-of-the-art diffusion models: pixel-based and latent-based.

Text-to-Image Generation

Data Roaming and Quality Assessment for Composed Image Retrieval

1 code implementation 16 Mar 2023 Matan Levy, Rami Ben-Ari, Nir Darshan, Dani Lischinski

To address these shortcomings, we introduce the Large Scale Composed Image Retrieval (LaSCo) dataset, a new CoIR dataset which is ten times larger than existing ones.

Composed Image Retrieval (CoIR), Retrieval

Break-A-Scene: Extracting Multiple Concepts from a Single Image

1 code implementation 25 May 2023 Omri Avrahami, Kfir Aberman, Ohad Fried, Daniel Cohen-Or, Dani Lischinski

Text-to-image model personalization aims to introduce a user-provided concept to the model, allowing its synthesis in diverse contexts.

Complex Scene Breaking and Synthesis

Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields

1 code implementation 22 Jun 2023 Ori Gordon, Omri Avrahami, Dani Lischinski

We present Blended-NeRF, a robust and flexible framework for editing a specific region of interest in an existing NeRF scene, based on text prompts, along with a 3D ROI box.

SVNR: Spatially-variant Noise Removal with Denoising Diffusion

no code implementations 28 Jun 2023 Naama Pearl, Yaron Brodsky, Dana Berman, Assaf Zomet, Alex Rav Acha, Daniel Cohen-Or, Dani Lischinski

Our formulation also accounts for the correlation that exists between the condition image and the samples along the modified diffusion process.

Image Denoising

Noise-Free Score Distillation

no code implementations 26 Oct 2023 Oren Katzir, Or Patashnik, Daniel Cohen-Or, Dani Lischinski

Score Distillation Sampling (SDS) has emerged as the de facto approach for text-to-content generation in non-image domains.

The Chosen One: Consistent Characters in Text-to-Image Diffusion Models

1 code implementation 16 Nov 2023 Omri Avrahami, Amir Hertz, Yael Vinker, Moab Arar, Shlomi Fruchter, Ohad Fried, Daniel Cohen-Or, Dani Lischinski

Our quantitative analysis demonstrates that our method strikes a better balance between prompt alignment and identity consistency compared to the baseline methods, and these findings are reinforced by a user study.

Consistent Character Generation, Story Visualization

S2ST: Image-to-Image Translation in the Seed Space of Latent Diffusion

no code implementations 30 Nov 2023 Or Greenberg, Eran Kishon, Dani Lischinski

Image-to-image translation (I2IT) refers to the process of transforming images from a source domain to a target domain while maintaining a fundamental connection in terms of image content.

Image-to-Image Translation, Translation

Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment

no code implementations 5 Dec 2023 Brian Gordon, Yonatan Bitton, Yonatan Shafir, Roopal Garg, Xi Chen, Dani Lischinski, Daniel Cohen-Or, Idan Szpektor

While existing image-text alignment models reach high quality binary assessments, they fall short of pinpointing the exact source of misalignment.

Explanation Generation, Visual Grounding

Generating Non-Stationary Textures using Self-Rectification

1 code implementation 5 Jan 2024 Yang Zhou, Rongjun Xiao, Dani Lischinski, Daniel Cohen-Or, Hui Huang

This paper addresses the challenge of example-based non-stationary texture synthesis.

Texture Synthesis

Cross-Domain Cascaded Deep Translation

no code implementations ECCV 2020 Oren Katzir, Dani Lischinski, Daniel Cohen-Or

We mitigate this by descending the deep layers of a pre-trained network, where the deep features contain more semantics, and applying the translation between these deep features.

Image-to-Image Translation, Translation
