Search Results for author: Jun-Yan Zhu

Found 45 papers, 38 papers with code

Contrastive Feature Loss for Image Prediction

1 code implementation 12 Nov 2021 Alex Andonian, Taesung Park, Bryan Russell, Phillip Isola, Jun-Yan Zhu, Richard Zhang

Training supervised image synthesis models requires a critic to compare two images: the ground truth and the result.

Image Generation

Sketch Your Own GAN

2 code implementations ICCV 2021 Sheng-Yu Wang, David Bau, Jun-Yan Zhu

In particular, we change the weights of an original GAN model according to user sketches.

Image Generation

SDEdit: Image Synthesis and Editing with Stochastic Differential Equations

1 code implementation 2 Aug 2021 Chenlin Meng, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon

We introduce a new image editing and synthesis framework, Stochastic Differential Editing (SDEdit), based on a recent generative model using stochastic differential equations (SDEs).

GAN inversion · Image Generation

Depth-supervised NeRF: Fewer Views and Faster Training for Free

1 code implementation 6 Jul 2021 Kangle Deng, Andrew Liu, Jun-Yan Zhu, Deva Ramanan

With only two training views on real-world images, DS-NeRF significantly outperforms NeRF as well as other sparse-view variants.

Structure from Motion

Editing Conditional Radiance Fields

1 code implementation ICCV 2021 Steven Liu, Xiuming Zhang, Zhoutong Zhang, Richard Zhang, Jun-Yan Zhu, Bryan Russell

In this paper, we explore enabling user editing of a category-level NeRF - also known as a conditional radiance field - trained on a shape category.

Novel View Synthesis

Ensembling with Deep Generative Views

no code implementations CVPR 2021 Lucy Chai, Jun-Yan Zhu, Eli Shechtman, Phillip Isola, Richard Zhang

Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.

Image Classification

On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation

4 code implementations 22 Apr 2021 Gaurav Parmar, Richard Zhang, Jun-Yan Zhu

We investigate the sensitivity of the Fréchet Inception Distance (FID) score to inconsistent and often incorrect implementations across different image processing libraries.
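To see why small implementation differences move the score, it helps to look at the quantity FID computes. The sketch below is a minimal, illustrative univariate special case of the Fréchet distance between two Gaussians (the full FID uses the means and covariances of Inception features); the function name and the example statistics are my own, not the paper's code.

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Frechet distance between two univariate Gaussians.

    1-D special case of the FID formula
    ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^(1/2)).
    """
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

# Identical statistics give a distance of zero ...
print(frechet_distance_1d(0.0, 1.0, 0.0, 1.0))
# ... while even a slight perturbation of the statistics (as an
# inconsistent resize operation might introduce) shifts the score.
print(frechet_distance_1d(0.0, 1.0, 0.1, 1.1))
```

The same sensitivity carries over to the full matrix form, which is why resizing with and without proper antialiasing yields different FID values for the same images.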

Understanding the Role of Individual Units in a Deep Neural Network

2 code implementations 10 Sep 2020 David Bau, Jun-Yan Zhu, Hendrik Strobelt, Agata Lapedriza, Bolei Zhou, Antonio Torralba

Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.

Image Classification · Image Generation +1

The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement

1 code implementation ECCV 2020 William Peebles, John Peebles, Jun-Yan Zhu, Alexei Efros, Antonio Torralba

In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal.
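A toy finite-difference sketch of the idea (not the authors' released code): for a quadratic function, the second central difference along a Rademacher direction v equals vᵀHv, whose variance over v is driven purely by the off-diagonal Hessian entries, so the variance serves as a diagonality penalty. The function names and sample points below are illustrative assumptions.

```python
import random

def hessian_penalty(f, x, eps=0.1, samples=256, seed=0):
    """Estimate how non-diagonal the Hessian of f is at x.

    Variance, over random Rademacher directions v, of the second
    central difference of f along v; for a quadratic f this
    variance comes only from off-diagonal Hessian terms.
    """
    rng = random.Random(seed)
    vals = []
    for _ in range(samples):
        v = [rng.choice((-1.0, 1.0)) for _ in x]
        xp = [xi + eps * vi for xi, vi in zip(x, v)]
        xm = [xi - eps * vi for xi, vi in zip(x, v)]
        vals.append((f(xp) - 2.0 * f(x) + f(xm)) / eps ** 2)
    mean = sum(vals) / len(vals)
    return sum((t - mean) ** 2 for t in vals) / len(vals)

# f(x, y) = x*y has a purely off-diagonal Hessian -> large penalty.
entangled = hessian_penalty(lambda p: p[0] * p[1], [0.5, -0.3])
# f(x, y) = x^2 + y^2 has a diagonal Hessian -> penalty near zero.
disentangled = hessian_penalty(lambda p: p[0] ** 2 + p[1] ** 2, [0.5, -0.3])
print(entangled, disentangled)
```

In the paper the penalty is applied to a generator's (vector-valued) output with a stochastic estimator; the scalar toy above only shows why the variance isolates off-diagonal curvature.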

Rewriting a Deep Generative Model

4 code implementations ECCV 2020 David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba

To address the problem, we propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
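As a hedged sketch of the linear-associative-memory view: the minimum-Frobenius-norm edit of a weight matrix W that maps a chosen key k to a desired value v* is a rank-one update. This single-key special case is my own illustration; the paper solves a constrained optimization that preserves many other key-value associations.

```python
def rewrite_rule(W, k, v_star):
    """Minimally edit W (list of rows) so that W @ k == v_star.

    Treating the layer as a linear associative memory, the
    minimum-norm change mapping key k to value v_star is the
    rank-one update W' = W + ((v_star - W k) k^T) / (k^T k).
    """
    Wk = [sum(wij * kj for wij, kj in zip(row, k)) for row in W]
    kk = sum(kj * kj for kj in k)
    return [[wij + (vi - wki) * kj / kk for wij, kj in zip(row, k)]
            for row, vi, wki in zip(W, v_star, Wk)]

W = [[1.0, 0.0], [0.0, 1.0]]
k = [1.0, 2.0]          # key: the context to remap
v_star = [3.0, -1.0]    # desired output for that key
W2 = rewrite_rule(W, k, v_star)
# The edited layer now maps k to v_star.
print([sum(w * x for w, x in zip(row, k)) for row in W2])
```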


Contrastive Learning for Unpaired Image-to-Image Translation

7 code implementations 30 Jul 2020 Taesung Park, Alexei A. Efros, Richard Zhang, Jun-Yan Zhu

Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset.

Contrastive Learning · Image-to-Image Translation +1
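The internal-negatives idea above can be sketched as a patch-wise InfoNCE term: one query feature is pulled toward the feature of the corresponding input patch and pushed away from features of other patches in the same image. The feature vectors and temperature below are illustrative assumptions, not the paper's implementation.

```python
import math

def info_nce(query, positive, negatives, tau=0.07):
    """InfoNCE loss for one query patch.

    The positive is the corresponding input patch's feature; the
    negatives are features of other patches from the same image.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(query, positive) / tau] + \
             [dot(query, n) / tau for n in negatives]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

q = [1.0, 0.0]
# Query matches its positive: low loss.
loss_easy = info_nce(q, positive=[1.0, 0.0],
                     negatives=[[-1.0, 0.0], [0.0, -1.0]])
# A negative matches the query better than the positive: high loss.
loss_hard = info_nce(q, positive=[0.0, 1.0],
                     negatives=[[1.0, 0.0], [0.0, -1.0]])
print(loss_easy, loss_hard)
```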

Swapping Autoencoder for Deep Image Manipulation

3 code implementations NeurIPS 2020 Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, Richard Zhang

Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging.

Image Manipulation

Diverse Image Generation via Self-Conditioned GANs

2 code implementations CVPR 2020 Steven Liu, Tongzhou Wang, David Bau, Jun-Yan Zhu, Antonio Torralba

We introduce a simple but effective unsupervised method for generating realistic and diverse images.

Image Generation

Transforming and Projecting Images into Class-conditional Generative Networks

2 code implementations 4 May 2020 Minyoung Huh, Richard Zhang, Jun-Yan Zhu, Sylvain Paris, Aaron Hertzmann

We present a method for projecting an input image into the space of a class-conditional generative neural network.


State of the Art on Neural Rendering

no code implementations 8 Apr 2020 Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer

Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by integrating differentiable rendering into network training.

Image Generation · Neural Rendering +1

GAN Compression: Efficient Architectures for Interactive Conditional GANs

1 code implementation CVPR 2020 Muyang Li, Ji Lin, Yaoyao Ding, Zhijian Liu, Jun-Yan Zhu, Song Han

Directly applying existing compression methods yields poor performance due to the difficulty of GAN training and the differences in generator architectures.

Image Generation · Neural Architecture Search

Connecting Touch and Vision via Cross-Modal Prediction

1 code implementation CVPR 2019 Yunzhu Li, Jun-Yan Zhu, Russ Tedrake, Antonio Torralba

To connect vision and touch, we introduce new tasks of synthesizing plausible tactile signals from visual inputs as well as imagining how we interact with objects given tactile data as input.

Semantic Image Synthesis with Spatially-Adaptive Normalization

22 code implementations CVPR 2019 Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu

Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers.

Image-to-Image Translation

On the Units of GANs (Extended Abstract)

no code implementations 29 Jan 2019 David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba

We quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output.

Visual Object Networks: Image Generation with Disentangled 3D Representations

1 code implementation NeurIPS 2018 Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, Bill Freeman

The VON not only generates images that are more realistic than the state-of-the-art 2D image synthesis methods but also enables many 3D operations such as changing the viewpoint of a generated image, shape and texture editing, linear interpolation in texture and shape space, and transferring appearance across different objects and viewpoints.

Image Generation

Dataset Distillation

1 code implementation 27 Nov 2018 Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros

Model distillation aims to distill the knowledge of a complex model into a simpler one.

Model distillation

Propagation Networks for Model-Based Control Under Partial Observation

1 code implementation 28 Sep 2018 Yunzhu Li, Jiajun Wu, Jun-Yan Zhu, Joshua B. Tenenbaum, Antonio Torralba, Russ Tedrake

There has been an increasing interest in learning dynamics simulators for model-based control.

3D-Aware Scene Manipulation via Inverse Graphics

1 code implementation NeurIPS 2018 Shunyu Yao, Tzu Ming Harry Hsu, Jun-Yan Zhu, Jiajun Wu, Antonio Torralba, William T. Freeman, Joshua B. Tenenbaum

In this work, we propose 3D scene de-rendering networks (3D-SDN) to address the above issues by integrating disentangled representations for semantics, geometry, and appearance into a deep generative model.

Video-to-Video Synthesis

13 code implementations NeurIPS 2018 Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, Bryan Catanzaro

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video.

Semantic Segmentation · Video Prediction +1

Spatially Transformed Adversarial Examples

3 code implementations ICLR 2018 Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song

Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.

Light Field Video Capture Using a Learning-Based Hybrid Imaging System

1 code implementation 8 May 2017 Ting-Chun Wang, Jun-Yan Zhu, Nima Khademi Kalantari, Alexei A. Efros, Ravi Ramamoorthi

Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps.

Real-Time User-Guided Image Colorization with Learned Deep Priors

3 code implementations 8 May 2017 Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, Alexei A. Efros

The system directly maps a grayscale image, along with sparse, local user "hints" to an output colorization with a Convolutional Neural Network (CNN).


Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

173 code implementations ICCV 2017 Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros

Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.

Ranked #1 on Image-to-Image Translation on photo2vangogh (Fréchet Inception Distance metric)

Multimodal Unsupervised Image-To-Image Translation · Style Transfer +2
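Since paired data is often unavailable, CycleGAN couples two generators with a cycle-consistency objective: translating to the other domain and back should recover the input. The toy 1-D "generators" below are my own illustration of that loss, not the paper's networks.

```python
def cycle_consistency_loss(G, F, xs, ys):
    """Mean L1 cycle-consistency: F(G(x)) ~ x and G(F(y)) ~ y.

    G maps domain X -> Y and F maps Y -> X; the loss penalizes
    round trips that fail to return to the starting point.
    """
    loss = 0.0
    for x in xs:
        loss += abs(F(G(x)) - x)
    for y in ys:
        loss += abs(G(F(y)) - y)
    return loss / (len(xs) + len(ys))

# Toy 1-D "generators": G shifts by +1 and F shifts back by -1,
# so the cycle is perfect and the loss is zero.
G = lambda x: x + 1.0
F = lambda y: y - 1.0
perfect = cycle_consistency_loss(G, F, [0.0, 2.0], [1.0, 3.0])
# A lossy inverse leaves a non-zero cycle error.
F_bad = lambda y: 0.9 * y - 1.0
lossy = cycle_consistency_loss(G, F_bad, [0.0, 2.0], [1.0, 3.0])
print(perfect, lossy)
```

In training, this term is added to the adversarial losses of both generators so that translations stay tied to their inputs.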

Generative Visual Manipulation on the Natural Image Manifold

1 code implementation 12 Sep 2016 Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, Alexei A. Efros

Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result.

Image Manipulation

A 4D Light-Field Dataset and CNN Architectures for Material Recognition

no code implementations 24 Aug 2016 Ting-Chun Wang, Jun-Yan Zhu, Ebi Hiroaki, Manmohan Chandraker, Alexei A. Efros, Ravi Ramamoorthi

We introduce a new light-field dataset of materials, and take advantage of the recent success of deep learning to perform material recognition on the 4D light-field.

Image Classification · Material Recognition +2

MILCut: A Sweeping Line Multiple Instance Learning Paradigm for Interactive Image Segmentation

no code implementations CVPR 2014 Jiajun Wu, Yibiao Zhao, Jun-Yan Zhu, Siwei Luo, Zhuowen Tu

Interactive segmentation, in which a user provides a bounding box to an object of interest for image segmentation, has been applied to a variety of applications in image editing, crowdsourcing, computer vision, and medical imaging.

Interactive Segmentation · Multiple Instance Learning +1
