Search Results for author: Jingwan Lu

Found 28 papers, 14 papers with code

Putting People in Their Place: Affordance-Aware Human Insertion into Scenes

1 code implementation CVPR 2023 Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A. Efros, Krishna Kumar Singh

Given a scene image with a marked region and an image of a person, we insert the person into the scene while respecting the scene affordances.

Modulating Pretrained Diffusion Models for Multimodal Image Synthesis

no code implementations 24 Feb 2023 Cusuh Ham, James Hays, Jingwan Lu, Krishna Kumar Singh, Zhifei Zhang, Tobias Hinz

We show that MCM enables user control over the spatial layout of the image and leads to increased control over the image generation process.

Image Generation · Semantic Segmentation

Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing

1 code implementation CVPR 2022 Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh

We propose a new method to invert and edit complex images in the latent space of GANs, such as StyleGAN2.
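
For context, the baseline idea underlying GAN inversion is an optimization loop that searches for a latent code whose generated image reconstructs the target; the paper's contribution, a spatially-adaptive choice of which latent layers to invert into for different image regions, is not shown here. A minimal, hedged sketch of the generic loop (the `generator` callable and all parameter values are assumptions):

    # Generic optimization-based GAN inversion, not the paper's spatially-adaptive
    # multilayer method. `generator` stands in for any pretrained image GAN,
    # e.g. a StyleGAN2 generator mapping a latent code to an RGB image.
    import torch
    import torch.nn.functional as F

    def invert(generator, target, latent_dim=512, steps=500, lr=0.05):
        """Find a latent code whose generated image reconstructs `target`."""
        latent = torch.randn(1, latent_dim, requires_grad=True)
        optimizer = torch.optim.Adam([latent], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            recon = generator(latent)         # (1, 3, H, W) image in [-1, 1]
            loss = F.mse_loss(recon, target)  # pixel-space reconstruction loss
            loss.backward()
            optimizer.step()
        return latent.detach()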

Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera

no code implementations CVPR 2022 Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park

The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., many cloth geometric configurations exist for a given pose depending on how the body has moved.

Decoder

CM-GAN: Image Inpainting with Cascaded Modulation GAN and Object-Aware Training

1 code implementation 22 Mar 2022 Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Eli Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi, Jiebo Luo

We propose cascaded modulation GAN (CM-GAN), a new network design consisting of an encoder with Fourier convolution blocks that extract multi-scale feature representations from the input image with holes, and a dual-stream decoder with a novel cascaded global-spatial modulation block at each scale level.

Decoder · Image Inpainting
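
To make the CM-GAN description above more concrete, the sketch below shows one way a cascaded global-spatial modulation block could be organized: decoder features are modulated first by per-channel scales and shifts from a global code, then by per-pixel scales and shifts from a spatial feature map. Module names, shapes, and operation ordering are illustrative assumptions, not the authors' implementation:

    # Simplified cascaded global-then-spatial feature modulation (illustrative only).
    import torch
    import torch.nn as nn

    class CascadedModulation(nn.Module):
        def __init__(self, channels, global_dim):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)
            # global branch: per-channel scale/shift predicted from a global code
            self.global_affine = nn.Linear(global_dim, 2 * channels)
            # spatial branch: per-pixel scale/shift predicted from a spatial feature map
            self.spatial_affine = nn.Conv2d(channels, 2 * channels, 3, padding=1)

        def forward(self, x, global_code, spatial_feat):
            h = self.conv(x)
            g_scale, g_shift = self.global_affine(global_code).chunk(2, dim=1)
            h = h * (1 + g_scale[..., None, None]) + g_shift[..., None, None]
            s_scale, s_shift = self.spatial_affine(spatial_feat).chunk(2, dim=1)
            return h * (1 + s_scale) + s_shift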

InsetGAN for Full-Body Image Generation

2 code implementations CVPR 2022 Anna Frühstück, Krishna Kumar Singh, Eli Shechtman, Niloy J. Mitra, Peter Wonka, Jingwan Lu

Instead of modeling this complex domain with a single GAN, we propose a novel method to combine multiple pretrained GANs, where one GAN generates a global canvas (e.g., human body) and a set of specialized GANs, or insets, focus on different parts (e.g., faces, shoes) that can be seamlessly inserted onto the global canvas.

Diversity · Image Generation
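
A toy illustration of the InsetGAN combination idea: one pretrained GAN draws the full-body canvas, a specialized GAN draws a part such as a face, and the part is pasted into a known region of the canvas. The actual method jointly optimizes both latent codes so the boundary is seamless; only the naive paste is sketched here, and the generator callables and `face_box` argument are hypothetical:

    # Compose a full-body canvas GAN with a specialized face GAN by naive pasting.
    import torch
    import torch.nn.functional as F

    def compose(body_gan, face_gan, z_body, z_face, face_box):
        """face_box = (top, left, height, width) of the face region in the canvas."""
        canvas = body_gan(z_body)   # (1, 3, H, W) full-body image
        face = face_gan(z_face)     # (1, 3, h, w) face image
        top, left, height, width = face_box
        face = F.interpolate(face, size=(height, width),
                             mode='bilinear', align_corners=False)
        out = canvas.clone()
        out[:, :, top:top + height, left:left + width] = face  # no blending or latent refinement
        return out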

IMAGINE: Image Synthesis by Image-Guided Model Inversion

no code implementations CVPR 2021 Pei Wang, Yijun Li, Krishna Kumar Singh, Jingwan Lu, Nuno Vasconcelos

We introduce an inversion-based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images from only a single training sample.

Image Generation · Specificity
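
The core of image-guided model inversion can be pictured as optimizing the pixels of a synthesized image so that its features under a pretrained recognition network match those of the single guide image. The sketch below shows only that feature-matching loop, using an off-the-shelf VGG backbone as a stand-in; the paper adds further priors and regularization that are omitted here:

    # Feature-matching inversion against a single guide image (simplified).
    import torch
    import torch.nn.functional as F
    import torchvision

    def imagine_like(guide, steps=300, lr=0.05):
        backbone = torchvision.models.vgg16(weights='IMAGENET1K_V1').features.eval()
        for p in backbone.parameters():
            p.requires_grad_(False)
        with torch.no_grad():
            target_feat = backbone(guide)            # features of the guide image
        synth = torch.randn_like(guide, requires_grad=True)
        optimizer = torch.optim.Adam([synth], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            loss = F.mse_loss(backbone(synth), target_feat)
            loss.backward()
            optimizer.step()
        return synth.detach()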

Semantic Layout Manipulation with High-Resolution Sparse Attention

1 code implementation 14 Dec 2020 Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Jiebo Luo

A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.

Decoder · Vocal Bursts Intensity Prediction

Modeling Artistic Workflows for Image Generation and Editing

1 code implementation ECCV 2020 Hung-Yu Tseng, Matthew Fisher, Jingwan Lu, Yijun Li, Vladimir Kim, Ming-Hsuan Yang

People often create art by following an artistic workflow involving multiple stages that inform the overall design.

Image Generation

Swapping Autoencoder for Deep Image Manipulation

4 code implementations NeurIPS 2020 Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, Richard Zhang

Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging.

Image Manipulation
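
The snippet above is mostly motivation; the title refers to an autoencoder that factorizes an image into a spatial structure code and a global texture code, so that swapping texture codes between two images produces a hybrid keeping the layout of one and the appearance of the other. A toy sketch of that swap, with layer shapes chosen for illustration rather than taken from the paper:

    # Structure/texture factorization with a swap operation (illustrative shapes).
    import torch
    import torch.nn as nn

    class SwappingAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.structure_enc = nn.Conv2d(3, 8, 4, stride=4)            # spatial structure code
            self.texture_enc = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=4), nn.AdaptiveAvgPool2d(1))  # global texture code
            self.decoder = nn.Sequential(
                nn.Conv2d(8 + 16, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Upsample(scale_factor=4), nn.Conv2d(64, 3, 3, padding=1))

        def swap(self, img_a, img_b):
            structure = self.structure_enc(img_a)                    # layout from image A
            texture = self.texture_enc(img_b)                        # appearance from image B
            texture = texture.expand(-1, -1, *structure.shape[2:])   # broadcast spatially
            return self.decoder(torch.cat([structure, texture], dim=1))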

Generative Tweening: Long-term Inbetweening of 3D Human Motions

no code implementations 18 May 2020 Yi Zhou, Jingwan Lu, Connelly Barnes, Jimei Yang, Sitao Xiang, Hao Li

We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions, conditioned on keyframe constraints.

Generative Adversarial Network

AutoToon: Automatic Geometric Warping for Face Cartoon Generation

1 code implementation 6 Apr 2020 Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu

Caricature, a type of exaggerated artistic portrait, amplifies the distinctive, yet nuanced traits of human faces.

Caricature

On the Continuity of Rotation Representations in Neural Networks

5 code implementations CVPR 2019 Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, Hao Li

Widely used rotation representations such as quaternions and Euler angles are discontinuous and therefore difficult for neural networks to learn.
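
The continuous alternative the paper proposes is a 6D representation: the network predicts two 3D vectors, which are mapped onto a rotation matrix via Gram-Schmidt orthogonalization. A minimal PyTorch version of that mapping:

    # Map a 6D representation (two 3D vectors) to a 3x3 rotation matrix.
    import torch
    import torch.nn.functional as F

    def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor:
        """Convert a (..., 6) tensor to (..., 3, 3) rotation matrices."""
        a1, a2 = d6[..., :3], d6[..., 3:]
        b1 = F.normalize(a1, dim=-1)                                  # first column
        b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
        b3 = torch.cross(b1, b2, dim=-1)                              # completes a right-handed frame
        return torch.stack((b1, b2, b3), dim=-1)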

SwapNet: Garment Transfer in Single View Images

1 code implementation ECCV 2018 Amit Raj, Patsorn Sangkloy, Huiwen Chang, Jingwan Lu, Duygu Ceylan, James Hays

Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body.

 Ranked #1 on Virtual Try-on on FashionIQ (using extra training data)

Virtual Try-on

PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup

no code implementations CVPR 2018 Huiwen Chang, Jingwan Lu, Fisher Yu, Adam Finkelstein

This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo.

Style Transfer

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

1 code implementation CVPR 2017 Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays

In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces.

Colorization · Image Generation
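
The conditioning interface of Scribbler can be sketched as a generator that takes the sketch and the sparse color strokes concatenated channel-wise and maps them to an RGB image. The small network below is a stand-in for the paper's adversarially trained architecture (the discriminator and losses are omitted), so treat the layer choices as assumptions:

    # Toy generator conditioned on a 1-channel sketch and 3-channel color strokes.
    import torch
    import torch.nn as nn

    class SketchColorGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            # input: 1 sketch channel + 3 color-stroke channels = 4 channels
            self.net = nn.Sequential(
                nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )

        def forward(self, sketch, color_strokes):
            return self.net(torch.cat([sketch, color_strokes], dim=1))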
