Search Results for author: Eli Shechtman

Found 68 papers, 44 papers with code

BlobGAN: Spatially Disentangled Scene Representations

no code implementations 5 May 2022 Dave Epstein, Taesung Park, Richard Zhang, Eli Shechtman, Alexei A. Efros

Blobs are differentiably placed onto a feature grid that is decoded into an image by a generative adversarial network.
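The one-line summary above can be made concrete with a minimal numpy sketch (not the paper's implementation, and ignoring gradients): each blob is an isotropic 2D Gaussian with a center, a scale, and a feature vector, splatted softly onto a grid. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def splat_blobs(centers, scales, features, grid_size):
    """Softly place blobs onto a feature grid.

    centers:  (K, 2) blob centers in [0, 1]^2
    scales:   (K,)   blob radii (std. dev. of each Gaussian)
    features: (K, D) per-blob feature vectors
    Returns a (grid_size, grid_size, D) feature grid.
    """
    ys, xs = np.meshgrid(
        np.linspace(0, 1, grid_size),
        np.linspace(0, 1, grid_size),
        indexing="ij",
    )
    coords = np.stack([ys, xs], axis=-1)          # (H, W, 2) grid coordinates
    grid = np.zeros((grid_size, grid_size, features.shape[1]))
    for c, s, f in zip(centers, scales, features):
        d2 = ((coords - c) ** 2).sum(axis=-1)     # squared distance to center
        alpha = np.exp(-d2 / (2 * s ** 2))        # soft Gaussian opacity
        grid += alpha[..., None] * f              # weighted feature splat
    return grid

# Two blobs with distinct feature vectors on an 8x8 grid.
grid = splat_blobs(
    centers=np.array([[0.25, 0.25], [0.75, 0.75]]),
    scales=np.array([0.1, 0.1]),
    features=np.array([[1.0, 0.0], [0.0, 1.0]]),
    grid_size=8,
)
```

Because every operation is smooth in the blob parameters, the same construction stays differentiable when written in an autodiff framework, which is what lets the grid be learned end-to-end before being decoded by the GAN.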

Any-resolution Training for High-resolution Image Synthesis

no code implementations 14 Apr 2022 Lucy Chai, Michael Gharbi, Eli Shechtman, Phillip Isola, Richard Zhang

We introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions.

Image Generation
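The sampling step described above can be sketched in a few lines of numpy. This is a toy illustration under assumed names and parameters, not the paper's pipeline: a random scale picks a crop size, and the crop is resized (nearest neighbor here) to a fixed patch size, so every training patch shares one tensor shape but a random effective resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_patch(image, patch_size, min_scale=0.25):
    """Crop a random-scale window and resize it to a fixed patch size."""
    h, w = image.shape[:2]
    s = rng.uniform(min_scale, 1.0)               # random scale in [min_scale, 1]
    crop = max(patch_size, int(s * min(h, w)))    # crop size at that scale
    y = rng.integers(0, h - crop + 1)             # random top-left corner
    x = rng.integers(0, w - crop + 1)
    window = image[y:y + crop, x:x + crop]
    # Nearest-neighbor resize of the crop down to patch_size x patch_size.
    idx = (np.arange(patch_size) * crop / patch_size).astype(int)
    return window[np.ix_(idx, idx)]

image = rng.random((256, 256, 3))                 # stand-in for a training image
patch = sample_patch(image, patch_size=64)
```

A real implementation would use an antialiased resampler rather than nearest neighbor, but the shape of the idea is the same: the generator only ever sees fixed-size patches, while the scale distribution exposes it to many resolutions.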

Neural Neighbor Style Transfer

1 code implementation 24 Mar 2022 Nicholas Kolkin, Michal Kucera, Sylvain Paris, Daniel Sykora, Eli Shechtman, Greg Shakhnarovich

We propose Neural Neighbor Style Transfer (NNST), a pipeline that offers state-of-the-art quality, generalization, and competitive efficiency for artistic style transfer.

Style Transfer

CM-GAN: Image Inpainting with Cascaded Modulation GAN and Object-Aware Training

1 code implementation 22 Mar 2022 Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Eli Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi, Jiebo Luo

Recent image inpainting methods have made great progress but often struggle to generate plausible image structures when dealing with large holes in complex images.

Image Inpainting

InsetGAN for Full-Body Image Generation

1 code implementation 14 Mar 2022 Anna Frühstück, Krishna Kumar Singh, Eli Shechtman, Niloy J. Mitra, Peter Wonka, Jingwan Lu

Instead of modeling this complex domain with a single GAN, we propose a novel method to combine multiple pretrained GANs, where one GAN generates a global canvas (e.g., human body) and a set of specialized GANs, or insets, focus on different parts (e.g., faces, shoes) that can be seamlessly inserted onto the global canvas.

Image Generation

Third Time's the Charm? Image and Video Editing with StyleGAN3

1 code implementation 31 Jan 2022 Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, Daniel Cohen-Or

In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery.

Disentanglement Image Generation

Ensembling Off-the-shelf Models for GAN Training

1 code implementation 16 Dec 2021 Nupur Kumari, Richard Zhang, Eli Shechtman, Jun-Yan Zhu

Can the collective "knowledge" from a large bank of pretrained vision models be leveraged to improve GAN training?

Image Generation

GAN-Supervised Dense Visual Alignment

1 code implementation 9 Dec 2021 William Peebles, Jun-Yan Zhu, Richard Zhang, Antonio Torralba, Alexei A. Efros, Eli Shechtman

We propose GAN-Supervised Learning, a framework for learning discriminative models and their GAN-generated training data jointly end-to-end.

Data Augmentation Dense Pixel Correspondence Estimation

STALP: Style Transfer with Auxiliary Limited Pairing

no code implementations 20 Oct 2021 David Futschik, Michal Kučera, Michal Lukáč, Zhaowen Wang, Eli Shechtman, Daniel Sýkora

We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart.

Style Transfer Translation

Real Image Inversion via Segments

1 code implementation 12 Oct 2021 David Futschik, Michal Lukáč, Eli Shechtman, Daniel Sýkora

In this short report, we present a simple, yet effective approach to editing real images via generative adversarial networks (GAN).

KDSalBox: A toolbox of efficient knowledge-distilled saliency models

no code implementations NeurIPS Workshop SVRHM 2021 Ard Kastrati, Zoya Bylinskii, Eli Shechtman

Dozens of saliency models have been designed over the last few decades, targeted at diverse applications ranging from image compression and retargeting to robot navigation, surveillance, and distractor detection.

Image Compression Robot Navigation

Ensembling with Deep Generative Views

no code implementations CVPR 2021 Lucy Chai, Jun-Yan Zhu, Eli Shechtman, Phillip Isola, Richard Zhang

Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.

Image Classification

Few-shot Image Generation via Cross-domain Correspondence

1 code implementation CVPR 2021 Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang

Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting.

Image Generation

Modulated Periodic Activations for Generalizable Local Functional Representations

2 code implementations ICCV 2021 Ishit Mehta, Michaël Gharbi, Connelly Barnes, Eli Shechtman, Ravi Ramamoorthi, Manmohan Chandraker

Our approach produces generalizable functional representations of images, videos and shapes, and achieves higher reconstruction quality than prior works that are optimized for a single signal.

StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

4 code implementations ICCV 2021 Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski

Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images.

 Ranked #1 on Image Manipulation on 10-Monty-Hall (using extra training data)

Image Manipulation

CharacterGAN: Few-Shot Keypoint Character Animation and Reposing

1 code implementation 5 Feb 2021 Tobias Hinz, Matthew Fisher, Oliver Wang, Eli Shechtman, Stefan Wermter

Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback, allowing for intuitive reposing and animation.

Few-shot Image Generation with Elastic Weight Consolidation

no code implementations NeurIPS 2020 Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman

Few-shot image generation seeks to generate more data of a given domain, with only few available training examples.

Image Generation

StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation

5 code implementations CVPR 2021 Zongze Wu, Dani Lischinski, Eli Shechtman

Manipulation of visual attributes via these StyleSpace controls is shown to be better disentangled than via those proposed in previous works.

Image Generation

Look here! A parametric learning based approach to redirect visual attention

no code implementations ECCV 2020 Youssef Alami Mejjati, Celso F. Gomez, Kwang In Kim, Eli Shechtman, Zoya Bylinskii

Extensions of our model allow for multi-style edits and the ability to both increase and attenuate attention in an image region.

Swapping Autoencoder for Deep Image Manipulation

3 code implementations NeurIPS 2020 Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, Richard Zhang

Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging.

Image Manipulation

Image Morphing with Perceptual Constraints and STN Alignment

1 code implementation 29 Apr 2020 Noa Fish, Richard Zhang, Lilach Perry, Daniel Cohen-Or, Eli Shechtman, Connelly Barnes

In image morphing, a sequence of plausible frames is synthesized and composited together to form a smooth transformation between given instances.

Frame Image Morphing

MakeItTalk: Speaker-Aware Talking-Head Animation

3 code implementations 27 Apr 2020 Yang Zhou, Xintong Han, Eli Shechtman, Jose Echevarria, Evangelos Kalogerakis, Dingzeyu Li

We present a method that generates expressive talking heads from a single facial image with audio as the only input.

Talking Face Generation Talking Head Generation

State of the Art on Neural Rendering

no code implementations 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training.

Image Generation Neural Rendering +1

Lifespan Age Transformation Synthesis

2 code implementations ECCV 2020 Roy Or-El, Soumyadip Sengupta, Ohad Fried, Eli Shechtman, Ira Kemelmacher-Shlizerman

Most existing aging methods are limited to changing the texture, overlooking transformations in head shape that occur during the human aging and growth process.

Face Age Editing Image Manipulation +3

Neural Puppet: Generative Layered Cartoon Characters

no code implementations 4 Oct 2019 Omid Poursaeed, Vladimir G. Kim, Eli Shechtman, Jun Saito, Serge Belongie

We capture these subtle changes by applying an image translation network to refine the mesh rendering, providing an end-to-end model to generate new animations of a character with high visual quality.


UprightNet: Geometry-Aware Camera Orientation Estimation from Single Images

no code implementations ICCV 2019 Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, Noah Snavely

We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene.

Text-based Editing of Talking-head Video

1 code implementation 4 Jun 2019 Ohad Fried, Ayush Tewari, Michael Zollhöfer, Adam Finkelstein, Eli Shechtman, Dan B. Goldman, Kyle Genova, Zeyu Jin, Christian Theobalt, Maneesh Agrawala

To edit a video, the user has to only edit the transcript, and an optimization strategy then chooses segments of the input corpus as base material.

Face Model Frame +2

Im2Pencil: Controllable Pencil Illustration from Photographs

1 code implementation CVPR 2019 Yijun Li, Chen Fang, Aaron Hertzmann, Eli Shechtman, Ming-Hsuan Yang

We propose a high-quality photo-to-pencil translation method with fine-grained control over the drawing style.


Localizing Moments in Video with Temporal Language

1 code implementation EMNLP 2018 Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell

To benchmark whether our model, and other recent video localization models, can effectively reason about temporal language, we collect the novel TEMPOral reasoning in video and language (TEMPO) dataset.

Video Understanding

MT-VAE: Learning Motion Transformations to Generate Multimodal Human Dynamics

1 code implementation ECCV 2018 Xinchen Yan, Akash Rastogi, Ruben Villegas, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Ersin Yumer, Honglak Lee

Our model jointly learns a feature embedding for motion modes (that the motion sequence can be reconstructed from) and a feature transformation that represents the transition of one motion mode to the next motion mode.

Human Dynamics motion prediction

Learning Blind Video Temporal Consistency

1 code implementation ECCV 2018 Wei-Sheng Lai, Jia-Bin Huang, Oliver Wang, Eli Shechtman, Ersin Yumer, Ming-Hsuan Yang

Our method takes the original unprocessed and per-frame processed videos as inputs to produce a temporally consistent video.

Colorization Frame +4

Deep Painterly Harmonization

12 code implementations 9 Apr 2018 Fujun Luan, Sylvain Paris, Eli Shechtman, Kavita Bala

Copying an element from a photo and pasting it into a painting is a challenging task.


ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing

2 code implementations CVPR 2018 Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, Simon Lucey

We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image.

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric

25 code implementations CVPR 2018 Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, Oliver Wang

We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics.
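The metric this paper proposes (LPIPS) has a simple functional form that can be sketched in plain numpy. This is a toy illustration of the idea only: in the real metric the activations come from a fixed pretrained network (e.g., AlexNet or VGG) and the per-channel weights are learned; here both are stand-in random arrays.

```python
import numpy as np

def lpips_like(feats_a, feats_b, weights):
    """Toy deep-feature perceptual distance.

    feats_a / feats_b: lists of (C, H, W) activations from matching layers;
    weights: one (C,) non-negative vector per layer.
    Each layer's features are unit-normalized along the channel axis, the
    squared difference is scaled per channel, averaged over space, and
    summed over layers.
    """
    total = 0.0
    for fa, fb, w in zip(feats_a, feats_b, weights):
        na = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        nb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        diff = (na - nb) ** 2                        # (C, H, W)
        total += (w[:, None, None] * diff).sum(axis=0).mean()
    return total

rng = np.random.default_rng(0)
fa = [rng.random((4, 8, 8)) for _ in range(2)]       # stand-in layer activations
fb = [rng.random((4, 8, 8)) for _ in range(2)]
w = [np.full(4, 0.25) for _ in range(2)]             # stand-in learned weights
d_same = lpips_like(fa, fa, w)                       # identical inputs -> 0
d_diff = lpips_like(fa, fb, w)
```

The paper's finding is that with features from a pretrained network, this distance matches human similarity judgments far better than classic pixel-space metrics such as PSNR or SSIM.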


Multi-Content GAN for Few-Shot Font Style Transfer

6 code implementations CVPR 2018 Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, Trevor Darrell

In this work, we focus on the challenge of taking partial observations of highly-stylized text and generalizing the observations to generate unobserved glyphs in the ornamented typeface.

Font Style Transfer

Photorealistic Style Transfer with Screened Poisson Equation

1 code implementation 28 Sep 2017 Roey Mechrez, Eli Shechtman, Lihi Zelnik-Manor

Recent work has shown impressive success in transferring painterly style to images.

Style Transfer

Training Deep Networks to be Spatially Sensitive

no code implementations ICCV 2017 Nicholas Kolkin, Gregory Shakhnarovich, Eli Shechtman

In many computer vision tasks, for example saliency prediction or semantic segmentation, the desired output is a foreground map that predicts pixels where some criterion is satisfied.

Saliency Prediction Semantic Segmentation

Localizing Moments in Video with Natural Language

2 code implementations ICCV 2017 Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, Bryan Russell

A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment.

Neural Face Editing with Intrinsic Image Disentangling

2 code implementations CVPR 2017 Zhixin Shu, Ersin Yumer, Sunil Hadap, Kalyan Sunkavalli, Eli Shechtman, Dimitris Samaras

Traditional face editing methods often require a number of sophisticated and task-specific algorithms to be applied one after the other: a process that is tedious, fragile, and computationally intensive.

Facial Editing

Deep Photo Style Transfer

22 code implementations CVPR 2017 Fujun Luan, Sylvain Paris, Eli Shechtman, Kavita Bala

This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style.

Style Transfer

Saliency Driven Image Manipulation

1 code implementation 7 Dec 2016 Roey Mechrez, Eli Shechtman, Lihi Zelnik-Manor

Have you ever taken a picture only to find out that an unimportant background object ended up being overly salient?

Image Manipulation

High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis

1 code implementation CVPR 2017 Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, Hao Li

Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context aware details, impacting fundamental image manipulation tasks such as object removal.

Image Inpainting Image Manipulation

Generative Visual Manipulation on the Natural Image Manifold

1 code implementation 12 Sep 2016 Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, Alexei A. Efros

Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result.

Image Manipulation

Preserving Color in Neural Artistic Style Transfer

7 code implementations 19 Jun 2016 Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman

This note presents an extension to the neural artistic style transfer algorithm (Gatys et al.).

Style Transfer

Appearance Harmonization for Single Image Shadow Removal

no code implementations 21 Mar 2016 Liqian Ma, Jue Wang, Eli Shechtman, Kalyan Sunkavalli, Shi-Min Hu

In this work we propose a fully automatic shadow region harmonization approach that improves the appearance compatibility of the de-shadowed region as typically produced by previous methods.

Image Generation Image Shadow Removal +1

PatchMatch-Based Automatic Lattice Detection for Near-Regular Textures

no code implementations ICCV 2015 Siying Liu, Tian-Tsong Ng, Kalyan Sunkavalli, Minh N. Do, Eli Shechtman, Nathan Carr

In this work, we investigate the problem of automatically inferring the lattice structure of near-regular textures (NRT) in real-world images.

DeepFont: Identify Your Font from An Image

1 code implementation 12 Jul 2015 Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jonathan Brandt, Thomas S. Huang

As font is one of the core design concepts, automatic font identification and similar font suggestion from an image or photo have been on the wish list of many designers.

Domain Adaptation Font Recognition +1

Large-Scale Visual Font Recognition

no code implementations CVPR 2014 Guang Chen, Jianchao Yang, Hailin Jin, Jonathan Brandt, Eli Shechtman, Aseem Agarwala, Tony X. Han

This paper addresses the large-scale visual font recognition (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content.

Font Recognition Image Categorization +1

Learning Video Saliency from Human Gaze Using Candidate Selection

no code implementations CVPR 2013 Dmitry Rudoy, Dan B. Goldman, Eli Shechtman, Lihi Zelnik-Manor

For example, the time each video frame is observed is a fraction of a second, while a still image can be viewed leisurely.

Frame Saliency Prediction

Crowdsourcing Gaze Data Collection

1 code implementation 16 Apr 2012 Dmitry Rudoy, Dan B. Goldman, Eli Shechtman, Lihi Zelnik-Manor

In this work we propose a crowdsourced method for acquisition of gaze direction data from a virtually unlimited number of participants, using a robust self-reporting mechanism (see Figure 1).

Social and Information Networks Human-Computer Interaction
