Search Results for author: Huiwen Chang

Found 14 papers, 6 papers with code

MaskGIT: Masked Generative Image Transformer

2 code implementations CVPR 2022 Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, William T. Freeman

At inference time, the model begins by generating all tokens of an image simultaneously, and then refines the image iteratively, conditioned on the previous generation.

Image Manipulation Image Outpainting +1
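
The MaskGIT excerpt above describes parallel, iterative decoding: all tokens start masked, the model predicts them jointly, and only the most confident predictions are kept each round. Below is a minimal sketch of that loop, assuming a hypothetical `token_transformer` that returns per-token logits over a discrete codebook; the released model also uses sampling temperature and confidence noise, omitted here.

```python
import math
import torch

def maskgit_decode(token_transformer, seq_len=256, steps=8, mask_id=1024):
    """Parallel iterative decoding sketch: start fully masked, keep the most
    confident predictions each round, re-mask the rest on a cosine schedule."""
    tokens = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for t in range(steps):
        logits = token_transformer(tokens)            # (1, seq_len, codebook)
        probs = logits.softmax(-1)
        sampled = probs.argmax(-1)                    # greedy for simplicity
        confidence = probs.gather(-1, sampled[..., None]).squeeze(-1)
        # Tokens fixed in earlier rounds are never re-masked.
        confidence = torch.where(tokens == mask_id, confidence,
                                 torch.full_like(confidence, float("inf")))
        tokens = torch.where(tokens == mask_id, sampled, tokens)
        # Cosine schedule: fraction of tokens still masked after this round.
        num_masked = int(seq_len * math.cos(math.pi / 2 * (t + 1) / steps))
        if num_masked > 0:
            low_conf = confidence.topk(num_masked, largest=False).indices
            tokens.scatter_(1, low_conf, mask_id)
    return tokens                                     # (1, seq_len) codebook ids
```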

BLT: Bidirectional Layout Transformer for Controllable Layout Generation

no code implementations 9 Dec 2021 Xiang Kong, Lu Jiang, Huiwen Chang, Han Zhang, Yuan Hao, Haifeng Gong, Irfan Essa

During inference, BLT first generates a draft layout from the input and then iteratively refines it into a high-quality layout by masking out low-confidence attributes.
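
The BLT excerpt above outlines a draft-then-refine loop over layout attribute tokens. A minimal sketch of the refinement stage follows, assuming a hypothetical bidirectional `layout_model` over a flat attribute sequence; the paper additionally groups attributes (category, size, position) and masks within groups.

```python
import torch

def blt_refine(layout_model, draft_tokens, mask_id, steps=4, remask_frac=0.3):
    """Iterative refinement sketch: repeatedly re-mask the least confident
    layout attributes and let the bidirectional model re-predict them."""
    tokens = draft_tokens.clone()                     # (1, seq_len)
    for _ in range(steps):
        logits = layout_model(tokens)                 # (1, seq_len, vocab)
        probs = logits.softmax(-1)
        confidence = probs.max(-1).values             # (1, seq_len)
        # Mask out the lowest-confidence attributes.
        k = max(1, int(remask_frac * tokens.shape[1]))
        low_conf = confidence.topk(k, largest=False).indices
        masked = tokens.scatter(1, low_conf, mask_id)
        # Re-predict only the masked slots, keep the rest of the draft.
        refined = layout_model(masked).argmax(-1)
        tokens = torch.where(masked == mask_id, refined, tokens)
    return tokens
```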

Pyramid Adversarial Training Improves ViT Performance

no code implementations CVPR 2022 Charles Herrmann, Kyle Sargent, Lu Jiang, Ramin Zabih, Huiwen Chang, Ce Liu, Dilip Krishnan, Deqing Sun

In this work, we present Pyramid Adversarial Training, a simple and effective technique to improve ViT's overall performance.

Ranked #3 on Domain Generalization on ImageNet-C (using extra training data)

Adversarial Attack Data Augmentation +2
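
The Pyramid Adversarial Training excerpt names the technique without describing it; the idea is to perturb the input with adversarial noise structured as a multi-scale pyramid. The sketch below shows one ascent step on the pyramid perturbation; the levels, weights, and step size are illustrative placeholders rather than the paper's settings, and the paper trains the model on both clean and perturbed images, which is not shown.

```python
import torch
import torch.nn.functional as F

def pyramid_perturbation(deltas, weights, out_size):
    """Sum multi-scale perturbations, each upsampled to the input resolution."""
    levels = [w * F.interpolate(d, size=out_size, mode="bilinear",
                                align_corners=False)
              for d, w in zip(deltas, weights)]
    return torch.stack(levels).sum(0)

def pyramid_adversarial_step(model, x, y, deltas, weights, step_size=1e-2):
    """One PGD-style ascent step on the pyramid perturbation."""
    for d in deltas:
        d.requires_grad_(True)
    x_adv = (x + pyramid_perturbation(deltas, weights, x.shape[-2:])).clamp(0, 1)
    loss = F.cross_entropy(model(x_adv), y)
    grads = torch.autograd.grad(loss, deltas)
    # Signed gradient update per pyramid level.
    return [(d + step_size * g.sign()).detach() for d, g in zip(deltas, grads)]
```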

Palette: Image-to-Image Diffusion Models

2 code implementations 10 Nov 2021 Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi

We expect this standardized evaluation protocol to play a role in advancing image-to-image translation research.

Colorization Denoising +5

SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting

no code implementations ICCV 2021 Varun Jampani, Huiwen Chang, Kyle Sargent, Abhishek Kar, Richard Tucker, Michael Krainin, Dominik Kaeser, William T. Freeman, David Salesin, Brian Curless, Ce Liu

We present SLIDE, a modular and unified system for single image 3D photography that uses a simple yet effective soft layering strategy to better preserve appearance details in novel views.

Image Matting

ViTGAN: Training GANs with Vision Transformers

3 code implementations ICLR 2022 Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu

Recently, Vision Transformers (ViTs) have shown competitive performance on image recognition while requiring less vision-specific inductive biases.

Image Generation

OCONet: Image Extrapolation by Object Completion

no code implementations CVPR 2021 Richard Strong Bowen, Huiwen Chang, Charles Herrmann, Piotr Teterwak, Ce Liu, Ramin Zabih

Existing methods tend to work well on indoor/outdoor scenes, but struggle to extrapolate images with salient objects in the foreground, or are limited to very specific objects such as humans.

AutoFlow: Learning a Better Training Set for Optical Flow

no code implementations CVPR 2021 Deqing Sun, Daniel Vlasic, Charles Herrmann, Varun Jampani, Michael Krainin, Huiwen Chang, Ramin Zabih, William T. Freeman, Ce Liu

Synthetic datasets play a critical role in pre-training CNN models for optical flow, but they are painstaking to generate and hard to adapt to new applications.

Optical Flow Estimation

Zoom-to-Inpaint: Image Inpainting with High-Frequency Details

1 code implementation 17 Dec 2020 Soo Ye Kim, Kfir Aberman, Nori Kanazawa, Rahul Garg, Neal Wadhwa, Huiwen Chang, Nikhil Karnad, Munchurl Kim, Orly Liba

Although deep learning has enabled a huge leap forward in image inpainting, current methods are often unable to synthesize realistic high-frequency details.

Image Inpainting Super-Resolution

Distortion Agnostic Deep Watermarking

no code implementations CVPR 2020 Xiyang Luo, Ruohan Zhan, Huiwen Chang, Feng Yang, Peyman Milanfar

Watermarking is the process of embedding information into an image that can survive under distortions, while requiring the encoded image to have little or no perceptual difference from the original image.
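
The excerpt defines the two competing constraints in watermarking: recover the embedded message after distortion, while keeping the encoded image close to the original. Below is a generic embed-distort-extract sketch of those two loss terms; `encoder`, `decoder`, and `distortion` are placeholders, and this is not the distortion-agnostic scheme proposed in the paper, which learns robustness via adversarially generated distortions.

```python
import torch.nn as nn
import torch.nn.functional as F

class WatermarkPipeline(nn.Module):
    """Sketch of an embed -> distort -> extract pipeline: the encoder hides a
    bit string in the image, the decoder recovers it after distortion."""
    def __init__(self, encoder, decoder, distortion):
        super().__init__()
        self.encoder, self.decoder, self.distortion = encoder, decoder, distortion

    def forward(self, image, message):
        # message: float tensor of 0/1 bits to embed.
        encoded = self.encoder(image, message)        # watermarked image
        distorted = self.distortion(encoded)          # e.g. noise, blur, crop
        recovered = self.decoder(distorted)           # predicted bit logits
        # Trade off imperceptibility against message recovery.
        image_loss = F.mse_loss(encoded, image)
        message_loss = F.binary_cross_entropy_with_logits(recovered, message)
        return image_loss + message_loss
```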

SwapNet: Garment Transfer in Single View Images

1 code implementation ECCV 2018 Amit Raj, Patsorn Sangkloy, Huiwen Chang, Jingwan Lu, Duygu Ceylan, James Hays

Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body.

 Ranked #1 on Virtual Try-on on FashionIQ (using extra training data)

Virtual Try-on
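
The SwapNet excerpt breaks garment transfer into two sub-problems, and the system follows that decomposition in two stages. The high-level sketch below mirrors that structure, with `warp_net` and `texture_net` as hypothetical stand-ins for the paper's warping and texturing modules.

```python
import torch.nn as nn

class GarmentTransfer(nn.Module):
    """Two-stage sketch matching the decomposition in the excerpt: (i) transfer
    the clothing shape to the target body, (ii) synthesize garment texture."""
    def __init__(self, warp_net, texture_net):
        super().__init__()
        self.warp_net = warp_net        # clothing segmentation -> target pose
        self.texture_net = texture_net  # warped segmentation -> textured image

    def forward(self, clothing_seg, target_body_seg, clothing_image):
        # Stage 1: disentangle garment shape from the source body and re-pose it.
        warped_seg = self.warp_net(clothing_seg, target_body_seg)
        # Stage 2: fill the re-posed garment with texture from the source image.
        return self.texture_net(warped_seg, clothing_image)
```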

PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup

no code implementations CVPR 2018 Huiwen Chang, Jingwan Lu, Fisher Yu, Adam Finkelstein

This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo.

Style Transfer
