Search Results for author: Jingwan Lu

Found 27 papers, 12 papers with code

Towards Enhanced Controllability of Diffusion Models

no code implementations 28 Feb 2023 Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Khuc, Krishna Kumar Singh, Jingwan Lu, David I. Inouye, Ajinkya Kale

We rely on the inductive bias of the progressive denoising process of diffusion models to encode pose/layout information in the spatial structure mask and semantic/style information in the style code.
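The coarse-to-fine inductive bias can be illustrated with a toy sampler in which the early, high-noise steps are nudged toward a spatial layout mask and the late steps toward style statistics. This is an illustrative sketch only, not the paper's architecture; `layout_mask` and `style_code` are stand-in inputs.

```python
import numpy as np

def toy_conditional_sampler(layout_mask, style_code, steps=50, seed=0):
    """Toy coarse-to-fine sampler: early (high-noise) steps pull the sample
    toward the spatial layout, later (low-noise) steps toward the global
    style statistics. Purely illustrative; real diffusion models predict
    noise with a trained network at every step."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(layout_mask.shape)
    for t in range(steps, 0, -1):
        frac = t / steps  # 1.0 at the start (pure noise), near 0.0 at the end
        if frac > 0.5:
            # early steps: blend in the spatial structure
            x = 0.9 * x + 0.1 * layout_mask
        else:
            # late steps: match the mean/scale implied by the style code
            x = (x - x.mean()) / (x.std() + 1e-8)
            x = x * style_code[1] + style_code[0]
        x += 0.05 * frac * rng.standard_normal(x.shape)  # residual noise
    return x

sample = toy_conditional_sampler(np.eye(8), (0.2, 0.5))
```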

Denoising · Image Manipulation +3

Modulating Pretrained Diffusion Models for Multimodal Image Synthesis

no code implementations 24 Feb 2023 Cusuh Ham, James Hays, Jingwan Lu, Krishna Kumar Singh, Zhifei Zhang, Tobias Hinz

We show that MCM enables user control over the spatial layout of the image and leads to increased control over the image generation process.

Image Generation · Semantic Segmentation

Zero-shot Image-to-Image Translation

1 code implementation 6 Feb 2023 Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, Jun-Yan Zhu

However, it is still challenging to directly apply these models for editing real images for two reasons.

Image-to-Image Translation · Translation

UMFuse: Unified Multi View Fusion for Human Editing applications

no code implementations 17 Nov 2022 Rishabh Jain, Mayur Hemani, Duygu Ceylan, Krishna Kumar Singh, Jingwan Lu, Mausoom Sarkar, Balaji Krishnamurthy

Numerous pose-guided human editing methods have been explored by the vision community due to their extensive practical applications.

Image Generation · Retrieval +1

VGFlow: Visibility guided Flow Network for Human Reposing

no code implementations 13 Nov 2022 Rishabh Jain, Krishna Kumar Singh, Mayur Hemani, Jingwan Lu, Mausoom Sarkar, Duygu Ceylan, Balaji Krishnamurthy

The task of human reposing involves generating a realistic image of a person standing in an arbitrary conceivable pose.


Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing

1 code implementation CVPR 2022 Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh

We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2.
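Generic GAN inversion is often posed as optimizing a latent code so the generator reconstructs a target image. Below is a minimal sketch with a stand-in linear "generator"; the paper's actual contribution, spatially-adaptive selection of which StyleGAN2 layers to invert into, is not modeled here.

```python
import numpy as np

def invert(generator, target, latent_dim, steps=200, lr=0.01, seed=0):
    """Minimal GAN inversion: gradient descent on a latent code w to
    minimize ||G(w) - target||^2. Gradients are estimated by central
    finite differences so the sketch works for any black-box generator."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(latent_dim)

    def loss(w):
        return float(np.sum((generator(w) - target) ** 2))

    eps = 1e-4
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(latent_dim):  # finite-difference gradient per coordinate
            dw = np.zeros_like(w)
            dw[i] = eps
            grad[i] = (loss(w + dw) - loss(w - dw)) / (2 * eps)
        w -= lr * grad
    return w

# Stand-in "generator": a fixed random linear map from latent to pixels.
rng = np.random.default_rng(1)
A = rng.standard_normal((32, 4))
G = lambda w: A @ w
w_true = rng.standard_normal(4)
w_hat = invert(G, G(w_true), latent_dim=4)
```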

Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera

no code implementations CVPR 2022 Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park

The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., many cloth configurations exist for a given pose, depending on the way it has moved.

CM-GAN: Image Inpainting with Cascaded Modulation GAN and Object-Aware Training

1 code implementation 22 Mar 2022 Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Eli Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi, Jiebo Luo

We propose cascaded modulation GAN (CM-GAN), a new network design consisting of an encoder with Fourier convolution blocks that extract multi-scale feature representations from the input image with holes and a dual-stream decoder with a novel cascaded global-spatial modulation block at each scale level.
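The appeal of Fourier convolution blocks is an image-wide receptive field: features are mixed in the frequency domain, so every output location depends on the whole input. This toy "spectral transform" only illustrates that property; the block in the paper combines local and global streams with learned weights.

```python
import numpy as np

def spectral_mix(x, w_real, w_imag):
    """Toy Fourier unit: take a (C, H, W) feature map to the frequency
    domain, mix channels there with a 1x1 linear map (separately on the
    real and imaginary parts), and transform back. Every output pixel
    depends on every input pixel, i.e., a global receptive field."""
    spec = np.fft.rfft2(x, axes=(-2, -1))            # (C, H, W//2+1) complex
    mixed = np.einsum('oc,chw->ohw', w_real, spec.real) \
          + 1j * np.einsum('oc,chw->ohw', w_imag, spec.imag)
    return np.fft.irfft2(mixed, s=x.shape[-2:], axes=(-2, -1))

C, H, W = 3, 8, 8
x = np.random.default_rng(0).standard_normal((C, H, W))
y = spectral_mix(x, np.eye(C), np.eye(C))  # identity mixing recovers x
```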

Image Inpainting

InsetGAN for Full-Body Image Generation

no code implementations CVPR 2022 Anna Frühstück, Krishna Kumar Singh, Eli Shechtman, Niloy J. Mitra, Peter Wonka, Jingwan Lu

Instead of modeling this complex domain with a single GAN, we propose a novel method to combine multiple pretrained GANs, where one GAN generates a global canvas (e.g., human body) and a set of specialized GANs, or insets, focus on different parts (e.g., faces, shoes) that can be seamlessly inserted onto the global canvas.
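The compositing step can be sketched as pasting an inset generator's output onto the canvas with a feathered alpha mask. The joint optimization of the canvas and inset latent codes, which is what actually makes the seam consistent in the method, is omitted here, so this is only a toy illustration.

```python
import numpy as np

def paste_inset(canvas, inset, top, left, feather=2):
    """Paste `inset` (h, w) into `canvas` at (top, left) using a feathered
    alpha mask so the border blends smoothly into the surrounding canvas.
    Latent-space coordination of the two generators is not modeled."""
    h, w = inset.shape
    alpha = np.ones((h, w))
    for i in range(feather):  # linear alpha ramp at each border
        a = (i + 1) / (feather + 1)
        alpha[i, :] = np.minimum(alpha[i, :], a)
        alpha[-1 - i, :] = np.minimum(alpha[-1 - i, :], a)
        alpha[:, i] = np.minimum(alpha[:, i], a)
        alpha[:, -1 - i] = np.minimum(alpha[:, -1 - i], a)
    out = canvas.copy()
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * inset + (1 - alpha) * region
    return out

canvas = np.zeros((16, 16))      # stand-in for the global canvas image
face = np.ones((6, 6))           # stand-in for a specialized inset output
composite = paste_inset(canvas, face, top=2, left=4)
```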

Image Generation

Few-shot Image Generation via Cross-domain Correspondence

1 code implementation CVPR 2021 Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang

Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting.

Image Generation

IMAGINE: Image Synthesis by Image-Guided Model Inversion

no code implementations CVPR 2021 Pei Wang, Yijun Li, Krishna Kumar Singh, Jingwan Lu, Nuno Vasconcelos

We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images from only a single training sample.

Image Generation · Specificity

Semantic Layout Manipulation with High-Resolution Sparse Attention

1 code implementation 14 Dec 2020 Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Jiebo Luo

A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.

Few-shot Image Generation with Elastic Weight Consolidation

no code implementations NeurIPS 2020 Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman

Few-shot image generation seeks to generate more data of a given domain, with only few available training examples.
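Elastic Weight Consolidation counters overfitting by penalizing deviation from the source-domain weights in proportion to their (diagonal) Fisher information, L = L_target + (λ/2) Σ_i F_i (θ_i − θ*_i)². A minimal sketch of the penalty term follows; how the paper estimates F for a GAN is not shown.

```python
import numpy as np

def ewc_penalty(theta, theta_src, fisher, lam=1.0):
    """EWC regularizer: quadratic penalty on deviation from the source
    weights theta_src, scaled per parameter by the Fisher information
    `fisher`, so weights important to the source domain are expensive
    to move during few-shot adaptation."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_src) ** 2)

theta_src = np.array([1.0, -2.0, 0.5])
fisher = np.array([10.0, 0.1, 1.0])   # first weight is "important"
theta = np.array([1.1, -1.0, 0.5])    # adapted weights after fine-tuning
penalty = ewc_penalty(theta, theta_src, fisher)
```

Note how moving the low-Fisher second weight by a full unit costs the same (0.1) as moving the high-Fisher first weight by only 0.1.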

Image Generation

Modeling Artistic Workflows for Image Generation and Editing

1 code implementation ECCV 2020 Hung-Yu Tseng, Matthew Fisher, Jingwan Lu, Yijun Li, Vladimir Kim, Ming-Hsuan Yang

People often create art by following an artistic workflow involving multiple stages that inform the overall design.

Image Generation

Swapping Autoencoder for Deep Image Manipulation

4 code implementations NeurIPS 2020 Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, Richard Zhang

Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging.
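The swapping idea, factor each image into a structure code and a texture code and recombine them across images, can be illustrated with stand-in encoders: here "structure" is the normalized spatial pattern and "texture" its global statistics. The real model learns both codes with deep networks; this only shows the recombination step.

```python
import numpy as np

def encode(img):
    """Stand-in factorization: 'structure' = normalized spatial pattern,
    'texture' = global mean/std statistics of the image."""
    mu, sigma = img.mean(), img.std() + 1e-8
    structure = (img - mu) / sigma
    texture = (mu, sigma)
    return structure, texture

def decode(structure, texture):
    """Recombine a structure code with (possibly another image's) texture."""
    mu, sigma = texture
    return structure * sigma + mu

rng = np.random.default_rng(0)
img_a = rng.standard_normal((8, 8))
img_b = 5 + 2 * rng.standard_normal((8, 8))
s_a, t_a = encode(img_a)
s_b, t_b = encode(img_b)
hybrid = decode(s_a, t_b)   # spatial pattern of A, global statistics of B
```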

Image Manipulation

Generative Tweening: Long-term Inbetweening of 3D Human Motions

no code implementations 18 May 2020 Yi Zhou, Jingwan Lu, Connelly Barnes, Jimei Yang, Sitao Xiang, Hao Li

We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions, conditioned on keyframe constraints.

AutoToon: Automatic Geometric Warping for Face Cartoon Generation

1 code implementation 6 Apr 2020 Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu

Caricature, a type of exaggerated artistic portrait, amplifies the distinctive, yet nuanced traits of human faces.


On the Continuity of Rotation Representations in Neural Networks

5 code implementations CVPR 2019 Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, Hao Li

Thus, widely used representations such as quaternions and Euler angles are discontinuous and difficult for neural networks to learn.
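The continuous alternative this paper proposes maps a 6D vector (two 3-vectors) to a rotation matrix via Gram-Schmidt orthogonalization, a minimal sketch:

```python
import numpy as np

def rotation_from_6d(v):
    """Map a 6D vector to a 3x3 rotation matrix (Zhou et al., CVPR 2019):
    Gram-Schmidt on the two 3-vectors gives the first two columns, their
    cross product the third. The mapping is continuous, unlike Euler
    angles or quaternions, so it is easier for networks to regress."""
    a1, a2 = v[:3], v[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2_orth = a2 - np.dot(b1, a2) * b1          # remove component along b1
    b2 = a2_orth / np.linalg.norm(a2_orth)
    b3 = np.cross(b1, b2)                        # completes the orthonormal frame
    return np.stack([b1, b2, b3], axis=1)

R = rotation_from_6d(np.array([1.0, 0.1, 0.0, 0.0, 1.0, 0.2]))
```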

SwapNet: Garment Transfer in Single View Images

1 code implementation ECCV 2018 Amit Raj, Patsorn Sangkloy, Huiwen Chang, Jingwan Lu, Duygu Ceylan, James Hays

Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body.

Ranked #1 on Virtual Try-on on FashionIQ (using extra training data)

Virtual Try-on

PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup

no code implementations CVPR 2018 Huiwen Chang, Jingwan Lu, Fisher Yu, Adam Finkelstein

This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo.

Style Transfer

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

1 code implementation CVPR 2017 Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays

In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces.

Colorization · Image Generation
