1 code implementation • CVPR 2023 • Sumith Kulal, Tim Brooks, Alex Aiken, Jiajun Wu, Jimei Yang, Jingwan Lu, Alexei A. Efros, Krishna Kumar Singh
Given a scene image with a marked region and an image of a person, we insert the person into the scene while respecting the scene affordances.
no code implementations • 28 Feb 2023 • Wonwoong Cho, Hareesh Ravi, Midhun Harikumar, Vinh Khuc, Krishna Kumar Singh, Jingwan Lu, David I. Inouye, Ajinkya Kale
Second, we propose timestep-dependent weight scheduling for content and style features to further improve the performance.
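The snippet does not spell out the schedule itself, so as a purely hypothetical illustration of timestep-dependent weighting (function name, `gamma`, and the exact interpolation are our assumptions, not the paper's), one simple scheme interpolates content and style emphasis across the diffusion trajectory:

```python
def content_style_weights(t, T, gamma=2.0):
    """Hypothetical schedule for timestep-dependent feature weighting.

    At high t (noisy steps, where coarse layout is decided) the content
    weight dominates; at low t (late, detail-forming steps) the style
    weight dominates. gamma controls how sharply emphasis shifts.
    """
    w_content = (t / T) ** gamma
    w_style = 1.0 - w_content
    return w_content, w_style
```

A guidance term could then be formed as `w_content * f_content + w_style * f_style` at each denoising step; any concrete method would tune or learn this schedule rather than fix it by hand.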
no code implementations • 24 Feb 2023 • Cusuh Ham, James Hays, Jingwan Lu, Krishna Kumar Singh, Zhifei Zhang, Tobias Hinz
We show that MCM enables user control over the spatial layout of the image and leads to increased control over the image generation process.
2 code implementations • 6 Feb 2023 • Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, Jun-Yan Zhu
However, it is still challenging to directly apply these models for editing real images for two reasons.
Ranked #13 on Text-based Image Editing on PIE-Bench
no code implementations • 13 Dec 2022 • Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Eli Shechtman, Connelly Barnes, Jianming Zhang, Qing Liu, Yuqian Zhou, Sohrab Amirghodsi, Jiebo Luo
Moreover, the object-level discriminators take aligned instances as inputs to enforce the realism of individual objects.
no code implementations • ICCV 2023 • Rishabh Jain, Mayur Hemani, Duygu Ceylan, Krishna Kumar Singh, Jingwan Lu, Mausoom Sarkar, Balaji Krishnamurthy
Numerous pose-guided human editing methods have been explored by the vision community due to their extensive practical applications.
no code implementations • CVPR 2023 • Rishabh Jain, Krishna Kumar Singh, Mayur Hemani, Jingwan Lu, Mausoom Sarkar, Duygu Ceylan, Balaji Krishnamurthy
The task of human reposing involves generating a realistic image of a person standing in an arbitrary conceivable pose.
no code implementations • 4 Nov 2022 • Yuheng Li, Yijun Li, Jingwan Lu, Eli Shechtman, Yong Jae Lee, Krishna Kumar Singh
We introduce a new method for diverse foreground generation with explicit control over various factors.
1 code implementation • CVPR 2022 • Gaurav Parmar, Yijun Li, Jingwan Lu, Richard Zhang, Jun-Yan Zhu, Krishna Kumar Singh
We propose a new method to invert and edit such complex images in the latent space of GANs, such as StyleGAN2.
no code implementations • CVPR 2022 • Jae Shin Yoon, Duygu Ceylan, Tuanfeng Y. Wang, Jingwan Lu, Jimei Yang, Zhixin Shu, Hyun Soo Park
The appearance of dressed humans undergoes a complex geometric transformation induced not only by the static pose but also by its dynamics, i.e., many cloth configurations can exist for a given pose depending on how the body has moved.
1 code implementation • 22 Mar 2022 • Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Eli Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi, Jiebo Luo
We propose cascaded modulation GAN (CM-GAN), a new network design consisting of an encoder with Fourier convolution blocks that extract multi-scale feature representations from the input image with holes, and a dual-stream decoder with a novel cascaded global-spatial modulation block at each scale level.
Ranked #1 on Image Inpainting on Places2
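A Fourier convolution block mixes channels in the frequency domain, which is what gives it an image-wide receptive field in a single layer; a minimal, illustrative sketch (not the paper's exact block, and `fourier_unit` is our name) using NumPy:

```python
import numpy as np

def fourier_unit(x, w):
    """Illustrative spectral channel mixing.

    x: (C, H, W) real feature map; w: (C_out, C) real mixing weights.
    Because every spectral coefficient depends on every pixel, mixing
    channels in the spectrum lets each output position aggregate
    information from the whole image at once -- useful for filling
    large holes in inpainting.
    """
    X = np.fft.rfft2(x)                      # per-channel 2D FFT
    Y = np.einsum('oc,chw->ohw', w, X)       # 1x1-style channel mix in spectrum
    return np.fft.irfft2(Y, s=x.shape[-2:])  # back to the spatial domain
```

With identity weights this round-trips the input exactly; a real block would add spatial convolutions, normalization, and nonlinearities around the spectral transform.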
2 code implementations • CVPR 2022 • Anna Frühstück, Krishna Kumar Singh, Eli Shechtman, Niloy J. Mitra, Peter Wonka, Jingwan Lu
Instead of modeling this complex domain with a single GAN, we propose a novel method to combine multiple pretrained GANs, where one GAN generates a global canvas (e.g., human body) and a set of specialized GANs, or insets, focus on different parts (e.g., faces, shoes) that can be seamlessly inserted onto the global canvas.
no code implementations • ICCV 2021 • Yuheng Li, Yijun Li, Jingwan Lu, Eli Shechtman, Yong Jae Lee, Krishna Kumar Singh
We propose a new approach for high-resolution semantic image synthesis.
no code implementations • 13 Sep 2021 • Badour AlBahar, Jingwan Lu, Jimei Yang, Zhixin Shu, Eli Shechtman, Jia-Bin Huang
We present an algorithm for re-rendering a person from a single image under arbitrary poses.
no code implementations • CVPR 2021 • Pei Wang, Yijun Li, Krishna Kumar Singh, Jingwan Lu, Nuno Vasconcelos
We introduce an inversion-based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images from only a single training sample.
2 code implementations • CVPR 2021 • Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A. Efros, Yong Jae Lee, Eli Shechtman, Richard Zhang
Training generative models, such as GANs, on a target domain containing limited examples (e.g., 10) can easily result in overfitting.
Ranked #3 on 10-shot image generation on Babies
1 code implementation • 14 Dec 2020 • Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu, Jiebo Luo
A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.
no code implementations • NeurIPS 2020 • Yijun Li, Richard Zhang, Jingwan Lu, Eli Shechtman
Few-shot image generation seeks to generate more data of a given domain, with only a few available training examples.
Ranked #4 on 10-shot image generation on Babies
no code implementations • ECCV 2020 • Liqian Ma, Zhe Lin, Connelly Barnes, Alexei A. Efros, Jingwan Lu
Due to the ubiquity of smartphones, taking photos of oneself, or "selfies," has become popular.
1 code implementation • ECCV 2020 • Hung-Yu Tseng, Matthew Fisher, Jingwan Lu, Yijun Li, Vladimir Kim, Ming-Hsuan Yang
People often create art by following an artistic workflow involving multiple stages that inform the overall design.
4 code implementations • NeurIPS 2020 • Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, Richard Zhang
Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging.
no code implementations • 18 May 2020 • Yi Zhou, Jingwan Lu, Connelly Barnes, Jimei Yang, Sitao Xiang, Hao Li
We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions, conditioned on keyframe constraints.
1 code implementation • 6 Apr 2020 • Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu
Caricature, a type of exaggerated artistic portrait, amplifies the distinctive, yet nuanced traits of human faces.
5 code implementations • CVPR 2019 • Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, Hao Li
Thus, widely used representations such as quaternions and Euler angles are discontinuous and difficult for neural networks to learn.
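The continuous 6D representation this paper proposes reads a network's raw output as two 3-vectors and recovers a rotation via Gram-Schmidt orthogonalization; a minimal NumPy sketch (function name is ours):

```python
import numpy as np

def rot6d_to_matrix(x6):
    """Map a 6D vector to a rotation matrix via Gram-Schmidt.

    Unlike quaternions or Euler angles, this mapping is continuous
    over SO(3), which makes the representation easier for neural
    networks to regress.
    """
    a1 = np.asarray(x6[:3], dtype=float)
    a2 = np.asarray(x6[3:6], dtype=float)
    b1 = a1 / np.linalg.norm(a1)             # first orthonormal axis
    b2 = a2 - np.dot(b1, a2) * b1            # remove component along b1
    b2 = b2 / np.linalg.norm(b2)             # second orthonormal axis
    b3 = np.cross(b1, b2)                    # third axis completes the frame
    return np.stack([b1, b2, b3], axis=1)    # columns of the rotation matrix
```

For any (non-degenerate) 6D input, the result is a proper rotation: orthonormal columns with determinant +1.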
1 code implementation • ECCV 2018 • Amit Raj, Patsorn Sangkloy, Huiwen Chang, Jingwan Lu, Duygu Ceylan, James Hays
Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body.
Ranked #1 on Virtual Try-on on FashionIQ (using extra training data)
no code implementations • CVPR 2018 • Huiwen Chang, Jingwan Lu, Fisher Yu, Adam Finkelstein
This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo.
2 code implementations • CVPR 2018 • Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, James Hays
In this paper, we investigate deep image synthesis guided by sketch, color, and texture.
Ranked #2 on Image Reconstruction on Edge-to-Shoes
1 code implementation • CVPR 2017 • Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays
In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces.